<li>domain - This is the domain you "own". It is the highest level entity. An example would be <code>example.com</code> (since you do not own <code>.com</code>).</li>
<li>subdomain - A subdomain is a domain name space under the domain. Subdomains of <code>example.com</code> include <code>a.example.com</code> and <code>b.example.com</code>. Each subdomain can have further subdomains.</li>
<li>domain name - This is any named entity within your domain or its subdomains. This is the umbrella term, referring to all entities in the domain. <code>example.com</code>, <code>a.example.com</code>, <code>host.example.com</code> are all valid domain names within the domain <code>example.com</code>.</li>
<li>origin - An origin defines a URL with a protocol scheme, optional port number and domain name components. An example is <code>https://host.example.com</code>.</li>
<li>effective domain - This is the domain name extracted from an origin, excluding the port and scheme.</li>
<li>trust - A trust is when two Kanidm domains have a relationship to each other where accounts can be used between the domains. The domains retain their administration boundaries, but allow cross authentication.</li>
<li>replication - This is the process where two or more Kanidm servers in a domain can synchronise their database content.</li>
<li>UAT - User Authentication Token. This is a token issued by Kanidm to an account after it has authenticated.</li>
<li>SPN - Security Principal Name. This is the name of an account, comprising its name and domain name. This allows distinction between accounts with identical names over a trust boundary.</li>
<td>Incorrect choice of the domain name may have security impacts on your Kanidm instance, including but not limited to credential phishing, credential theft, session leaks, and more. It is critical that you follow the advice in this chapter.</td>
<p>While the production instance has a valid and well defined subdomain that doesn't conflict, because the
dev instance is a subdomain of production, it allows production cookies to leak to dev. Dev instances
may have weaker security controls in some cases which can then allow compromise of the production instance.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="preparing-for-your-deployment"><a class="header" href="#preparing-for-your-deployment">Preparing for your Deployment</a></h1>
<p><strong>NOTE</strong> Our preferred deployment method is in containers, and this documentation assumes you're running in docker. Kanidm will alternatively run as a daemon/service, and server builds are available for multiple platforms if you prefer this option.</p>
</blockquote>
<p>We provide docker images for the server components. They can be found at:</p>
<p>You'll need a volume where you can place configuration, certificates, and the database:</p>
<pre><code>docker volume create kanidmd
</code></pre>
<p>You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring
Transport Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained in <a href="./why_tls.html">why tls</a>. In summary, TLS is our root of trust between the
server and clients, and a critical element of ensuring a secure system.</p>
<p>The key.pem should be a single PEM private key, with no encryption. The file content should be
similar to:</p>
<pre><code>-----BEGIN RSA PRIVATE KEY-----
MII...<base64>
-----END RSA PRIVATE KEY-----
</code></pre>
<p>The chain.pem is a series of PEM formatted certificates. The leaf certificate, or the certificate
that matches the private key, should be the first certificate in the file. This should be followed
by the series of intermediates, and the final certificate should be the CA root. For example:</p>
<pre><code>-----BEGIN CERTIFICATE-----
<leaf certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<intermediate certificate>
-----END CERTIFICATE-----
[ more intermediates if needed ]
-----BEGIN CERTIFICATE-----
<ca root certificate>
-----END CERTIFICATE-----
</code></pre>
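<p>As a quick sanity check of the ordering, you can print the subject and issuer of each certificate in the file with openssl. This sketch is not from the book; it generates a stand-in self-signed pair so it is self-contained, and you should run the final command against your real chain.pem:</p>
<pre><code># Sketch (assumed tooling: openssl). The self-signed pair generated here is
# only a stand-in; point the last command at your real chain.pem.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=leaf" \
    -keyout key.pem -out chain.pem 2>/dev/null
# Prints the subject/issuer of each certificate in file order; the first
# entry should be your leaf certificate.
openssl crl2pkcs7 -nocrl -certfile chain.pem | openssl pkcs7 -print_certs -noout
</code></pre>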
<blockquote>
<p><strong>HINT</strong>
If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are already
correctly formatted as required for Kanidm.</p>
</blockquote>
<p>You can validate that the leaf certificate matches the key with the command:</p>
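<p>One way to confirm the match (a sketch using openssl, not necessarily the book's exact command) is to compare the public key embedded in the certificate with the public key derived from the private key; the two digests must be identical:</p>
<pre><code># A self-signed stand-in pair is generated so the sketch is self-contained;
# run the two digest commands against your real chain.pem and key.pem.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -keyout key.pem -out chain.pem 2>/dev/null
openssl x509 -in chain.pem -noout -pubkey | openssl sha256
openssl rsa -in key.pem -pubout 2>/dev/null | openssl sha256
</code></pre>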
<p>You need a configuration file in the volume named <code>server.toml</code> (within the container it should be <code>/data/server.toml</code>). Its contents should be as follows:</p>
<pre><code># The webserver bind address. Will use HTTPS if tls_*
# is provided. If set to 443 you may require the
# NET_BIND_SERVICE capability.
# Defaults to "127.0.0.1:8443"
bindaddress = "[::]:8443"
#
# The read-only ldap server bind address. The server
# will use LDAPS if tls_* is provided. If set to 636
# you may require the NET_BIND_SERVICE capability.
# Defaults to "" (disabled)
# ldapbindaddress = "[::]:3636"
#
# HTTPS requests can be reverse proxied by a loadbalancer.
# To preserve the original IP of the caller, these systems
# will often add a header such as "Forwarded" or
# "X-Forwarded-For". If set to true, then this header is
# respected as the "authoritative" source of the IP of the
# connected client. If you are not using a load balancer
# then you should leave this value as default.
# Defaults to false
# trust_x_forward_for = false
#
# The path to the kanidm database.
db_path = "/data/kanidm.db"
#
# If you have a known filesystem, kanidm can tune sqlite
# to match. Valid choices are:
# [zfs, other]
# If you are unsure about this leave it as the default
# (other). After changing this
# value you must run a vacuum task.
# - zfs:
# * sets sqlite pagesize to 64k. You must set
# recordsize=64k on the zfs filesystem.
# - other:
# * sets sqlite pagesize to 4k, matching most
# filesystems block sizes.
# db_fs_type = "zfs"
#
# The number of entries to store in the in-memory cache.
# Minimum value is 256. If unset
# an automatic heuristic is used to scale this.
# db_arc_size = 2048
#
# TLS chain and key in pem format. Both must be present
tls_chain = "/data/chain.pem"
tls_key = "/data/key.pem"
#
# The log level of the server. May be default, verbose,
# perfbasic, perffull
# Defaults to "default"
# log_level = "default"
#
# The DNS domain name of the server. This is used in a
# number of security-critical contexts
# such as webauthn, so it *must* match your DNS
# hostname. It is used to create
# security principal names such as `william@idm.example.com`
# so that in a (future)
# trust configuration it is possible to have unique Service
# Principal Names (spns) throughout the topology.
# ⚠️ WARNING ⚠️
# Changing this value WILL break many types of registered
# credentials for accounts
# including but not limited to webauthn, oauth tokens, and more.
# If you change this value you *must* run
# `kanidmd domain_name_change` immediately after.
domain = "idm.example.com"
#
# The origin for webauthn. This is the url to the server,
# with the port included if
# it is non-standard (any port except 443). This must match
# or be a descendent of the
# domain name you configure above. If these two items are
# not consistent, the server WILL refuse to start!
# origin = "https://idm.example.com"
origin = "https://idm.example.com:8443"
#
# The role of this server. This affects available features
# and how replication may interact.
# Valid roles are:
# - WriteReplica
# This server provides all functionality of Kanidm. It
# allows authentication, writes, and
# the web user interface to be served.
# - WriteReplicaNoUI
# This server is the same as a WriteReplica, but does NOT
# offer the web user interface.
# - ReadOnlyReplica
# This server will not accept writes initiated by clients. It
# supports authentication and reads,
# and must have a replication agreement as a source of
# its data.
# Defaults to "WriteReplica".
# role = "WriteReplica"
#
# [online_backup]
# The path to the output folder for online backups
# path = "/var/lib/kanidm/backups/"
# The schedule to run online backups (see https://crontab.guru/)
# every day at 22:00 UTC (default)
# schedule = "00 22 * * *"
# four times a day at 3 minutes past the hour, every 6 hours
# schedule = "03 */6 * * *"
# Number of backups to keep (default 7)
# versions = 7
#
</code></pre>
<p>This example is located in <a href="https://github.com/kanidm/kanidm/blob/master/examples/server_container.toml">examples/server_container.toml</a>.</p>
<td>You MUST set the `domain` name correctly, aligned with your `origin`, else the server may refuse to start or some features (e.g. webauthn, oauth) may not work correctly!</td>
</tr>
</table>
<h3 id="check-the-configuration-is-valid"><a class="header" href="#check-the-configuration-is-valid">Check the configuration is valid.</a></h3>
<p>You should test that your configuration is valid before you proceed.</p>
<pre><code>docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
</code></pre>
<h3 id="run-the-server"><a class="header" href="#run-the-server">Run the Server</a></h3>
<p>Now we can run the server so that it can accept connections. This defaults to using <code>-c /data/server.toml</code></p>
<pre><code>docker run -p 443:8443 -v kanidmd:/data kanidm/server:latest
</code></pre>
<h3 id="using-the-net_bind_service-capability"><a class="header" href="#using-the-net_bind_service-capability">Using the NET_BIND_SERVICE capability</a></h3>
<p>If you plan to run without using docker port mapping or some other reverse proxy, and your bindaddress
or ldapbindaddress port is less than <code>1024</code> you will need the <code>NET_BIND_SERVICE</code> in docker to allow
these port binds. You can add this with <code>--cap-add</code> in your docker run command.</p>
<pre><code>docker run --cap-add NET_BIND_SERVICE --network [host OR macvlan OR ipvlan] \
-v kanidmd:/data kanidm/server:latest
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h2 id="updating-the-server"><a class="header" href="#updating-the-server">Updating the Server</a></h2>
<h3 id="preserving-the-previous-image"><a class="header" href="#preserving-the-previous-image">Preserving the Previous Image</a></h3>
<p>You may wish to preserve the previous image before updating. This is useful if an issue is encountered
in upgrades.</p>
<pre><code>docker tag kanidm/server:latest kanidm/server:<DATE>
docker tag kanidm/server:latest kanidm/server:2022-10-24
</code></pre>
<h3 id="update-your-image"><a class="header" href="#update-your-image">Update your Image</a></h3>
<p>Pull the latest version of Kanidm that matches your CPU profile:</p>
<pre><code>docker pull kanidm/server:latest
docker pull kanidm/server:x86_64_latest
</code></pre>
<h3 id="perform-a-backup"><a class="header" href="#perform-a-backup">Perform a backup</a></h3>
<p>See <a href="backup_restore.html">backup and restore</a>.</p>
<h3 id="update-your-instance"><a class="header" href="#update-your-instance">Update your Instance</a></h3>
<td>It is not always guaranteed that downgrades are possible. It is critical you know how to backup and restore before you proceed with this step.</td>
</tr>
</table>
<p>Docker updates by deleting and recreating the instance. All that needs to be preserved is your storage volume.</p>
<p>At startup Kanidm will warn you if the environment it is running in is suspicious or
has risks. For example:</p>
<pre><code>kanidmd server -c /tmp/server.toml
WARNING: permissions on /tmp/server.toml may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: /tmp/server.toml has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: /tmp/server.toml owned by the current uid, which may allow file permission changes. This could be a security risk ...
WARNING: permissions on ../insecure/ca.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/cert.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/key.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: ../insecure/key.pem has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: DB folder /tmp has 'everyone' permission bits in the mode. This could be a security risk ...
</code></pre>
<p>Each warning highlights an issue that may exist in your environment. It is not possible for us to
prescribe an exact configuration that may secure your system. This is why we only present
possible risks.</p>
<h3 id="should-be-read-only-to-running-uid"><a class="header" href="#should-be-read-only-to-running-uid">Should be Read-only to Running UID</a></h3>
<p>Files, such as configuration files, should be read-only to the UID of the Kanidm daemon. This way,
if an attacker is able to gain code execution, they are unable to modify the configuration,
over-write files in other locations, or tamper with the system's configuration.</p>
<p>This can be prevented by changing the file's ownership to another user, or removing the "write" bits
from the group.</p>
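<p>As a sketch (the path and the daemon user "kanidm" are assumptions; adjust them to your deployment), the shape of the change is: make root the owner, give the daemon's group read-only access, and strip all other bits:</p>
<pre><code># On a real system (requires root):
#   chown root:kanidm /etc/kanidm/server.toml
#   chmod 640 /etc/kanidm/server.toml
# Demonstrating the resulting mode bits on a scratch file:
conf="$(mktemp)"
chmod 640 "$conf"
stat -c '%a' "$conf"
</code></pre>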
<h3 id="everyone-permission-bits-in-the-mode"><a class="header" href="#everyone-permission-bits-in-the-mode">'everyone' Permission Bits in the Mode</a></h3>
<p>This means that given a permission mask, "everyone" or all users of the system can read, write or
execute the content of this file. This may mean that if an account on the system is compromised the
attacker can read Kanidm content and may be able to further attack the system as a result.</p>
<p>This can be prevented by removing the "everyone" execute bits from parent directories containing the
configuration, and removing the "everyone" bits from the files in question.</p>
<h3 id="owned-by-the-current-uid-which-may-allow-file-permission-changes"><a class="header" href="#owned-by-the-current-uid-which-may-allow-file-permission-changes">Owned by the Current UID, Which May Allow File Permission Changes</a></h3>
<p>File permissions in UNIX systems are a discretionary access control system, which means the
named UID owner is able to further modify the access of a file regardless of the current
settings. For example:</p>
<pre><code>[william@amethyst 12:25] /tmp > touch test
[william@amethyst 12:25] /tmp > ls -al test
-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp > chmod 400 test
[william@amethyst 12:25] /tmp > ls -al test
-r-------- 1 william wheel 0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp > chmod 644 test
[william@amethyst 12:26] /tmp > ls -al test
-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test
</code></pre>
<p>Notice that even though the file was set to "read only" for william, with no permission for any
other users, user "william" can change the bits to add write permissions back, or to grant
permissions to other users.</p>
<p>This can be prevented by making the file owner a different UID than the running process for kanidm.</p>
<td>Kanidm frequently uses new Rust versions and features, however Fedora and CentOS are frequently behind in Rust releases. As a result, they may not always have the latest Kanidm versions available.</td>
</tr>
</table>
<p>Fedora has limited support through the development repository. You need to add the repository metadata manually.</p>
<p>Groups represent a collection of entities. This generally is a collection of persons or service accounts.
Groups are commonly used to assign privileges to the accounts that are members of a group. This allows
easier administration over larger systems where privileges can be assigned to groups in a logical
manner, and then only membership of the groups need administration, rather than needing to assign
privileges to each entity directly and uniquely.</p>
<p>Groups may also be nested, where a group can contain another group as a member. This allows hierarchies
to be created again for easier administration.</p>
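<p>For example, nesting can be sketched with the CLI (the group names here are hypothetical, and the commands assume a running server and idm_admin credentials):</p>
<pre><code>kanidm group create helpdesk --name idm_admin
kanidm group create it_staff --name idm_admin
# Nest helpdesk inside it_staff; members of helpdesk are now
# transitively members of it_staff.
kanidm group add_members it_staff helpdesk --name idm_admin
</code></pre>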
<h2 id="default-accounts-and-groups"><a class="header" href="#default-accounts-and-groups">Default Accounts and Groups</a></h2>
<p>Kanidm ships with a number of default service accounts and groups. This is to give you the best
out-of-box experience possible, as well as supplying best practice examples related to modern
Identity Management (IDM) systems.</p>
<p>There are two builtin system administration accounts.</p>
<p><code>admin</code> is the default service account which has privileges to configure and administer kanidm as a whole.
This account can manage access controls, schema, integrations and more. However the <code>admin</code> can not
manage persons by default, to separate the privileges. As this is a service account, it is intended
for limited use.</p>
<p><code>idm_admin</code> is the default service account which has privileges to create persons and to manage these
accounts and groups. They can perform credential resets and more.</p>
<p>Both the <code>admin</code> and the <code>idm_admin</code> user should <em>NOT</em> be used for daily activities - they exist for initial
system configuration, and for disaster recovery scenarios. You should delegate permissions
as required to named user accounts instead.</p>
<p>The majority of the builtin groups are privilege groups that provide rights over Kanidm
administrative actions. These include groups for account management, person management (personal
and sensitive data), group management, and more.</p>
<h2 id="recovering-the-initial-admin-accounts"><a class="header" href="#recovering-the-initial-admin-accounts">Recovering the Initial Admin Accounts</a></h2>
<p>By default the <code>admin</code> and <code>idm_admin</code> accounts have no password, and can not be accessed. They need
to be "recovered" from the server that is running the kanidmd server.</p>
<td>Persons may change their own displayname, name, and legal name at any time. You MUST NOT use these values as primary keys in external systems. You MUST use the `uuid` attribute present on all entries as an external primary key.</td>
</tr>
</table>
<h2 id="resetting-person-account-credentials"><a class="header" href="#resetting-person-account-credentials">Resetting Person Account Credentials</a></h2>
<p>Members of the <code>idm_account_manage_priv</code> group have the rights to manage person and service
accounts' security and login aspects. This includes resetting account credentials.</p>
<p>You can perform a password reset on the demo_user, for example as the idm_admin user, who is
a default member of this group. The lines below prefixed with <code>#</code> are the interactive credential
update prompts.</p>
<pre><code>kanidm service-account get demo_service --name admin
</code></pre>
<h2 id="using-api-tokens-with-service-accounts"><a class="header" href="#using-api-tokens-with-service-accounts">Using API Tokens with Service Accounts</a></h2>
<p>Service accounts can have API tokens generated and associated with them. These tokens can be used for
identification of the service account, and for granting extended access rights that the service
account may not previously have had. Additionally, API tokens can have expiry times
and other auditing information attached.</p>
<p>To show the API tokens for a service account:</p>
<pre><code class="language-shell">kanidm service-account api-token status --name admin ACCOUNT_ID
kanidm service-account api-token status --name admin demo_service
</code></pre>
<p>By default API tokens are issued to be "read only", so they are unable to make changes on behalf of the
service account they represent. To generate a new read only API token:</p>
<h2 id="resetting-service-account-credentials-deprecated"><a class="header" href="#resetting-service-account-credentials-deprecated">Resetting Service Account Credentials (Deprecated)</a></h2>
<p>To unset or remove these values the following can be used, where <code>any|clear</code> means you may use either <code>any</code> or <code>clear</code>.</p>
<p>These validity settings impact all authentication functions of the account (kanidm, ldap, radius).</p>
<h3 id="allowing-people-accounts-to-change-their-mail-attribute"><a class="header" href="#allowing-people-accounts-to-change-their-mail-attribute">Allowing people accounts to change their mail attribute</a></h3>
<p>By default, Kanidm allows an account to change some attributes, but not their
mail address.</p>
<p>Adding the user to the <code>idm_people_self_write_mail_priv</code> group, as shown
below, allows the user to edit their own mail.</p>
<pre><code>kanidm group add_members idm_people_self_write_mail_priv demo_user --name idm_admin
</code></pre>
<h2 id="why-cant-i-change-admin-with-idm_admin"><a class="header" href="#why-cant-i-change-admin-with-idm_admin">Why Can't I Change admin With idm_admin?</a></h2>
<p>As a security mechanism there is a distinction between "accounts" and "high permission
accounts". This is to help prevent elevation attacks, where, say, a member of a
service desk could attempt to reset the password of idm_admin or admin, or of a member of
the HR or system administration teams, in order to move laterally.</p>
<p>Generally, membership of a "privilege" group that ships with Kanidm, such as:</p>
<ul>
<li>idm_account_manage_priv</li>
<li>idm_people_read_priv</li>
<li>idm_schema_manage_priv</li>
<li>many more ...</li>
</ul>
<p>...indirectly grants you membership to "idm_high_privilege". If you are a member of
this group, the standard "account" and "people" rights groups are NOT able to
alter, read or manage these accounts. To manage these accounts, higher rights
are required, such as those held by the admin account.</p>
<p>Further, groups that are considered "idm_high_privilege" can NOT be managed
by the standard "idm_group_manage_priv" group.</p>
<p>Management of high privilege accounts and groups is granted through
the "hp" variants of all privileges. A non-exhaustive list:</p>
<ul>
<li>idm_hp_account_read_priv</li>
<li>idm_hp_account_manage_priv</li>
<li>idm_hp_account_write_priv</li>
<li>idm_hp_group_manage_priv</li>
<li>idm_hp_group_write_priv</li>
</ul>
<p>Membership of any of these groups should be considered to be equivalent to
system administration rights in the directory, and by extension, over all network
resources that trust Kanidm.</p>
<p>All groups that are flagged as "idm_high_privilege" should be audited and
monitored to ensure that they are not altered.</p>
<h3 id="enabling-posix-attributes-on-accounts"><a class="header" href="#enabling-posix-attributes-on-accounts">Enabling POSIX Attributes on Accounts</a></h3>
<p>To enable POSIX account features and IDs on an account, you require the permission
<code>idm_account_unix_extend_priv</code>. This is provided to <code>idm_admins</code> in the default database.</p>
<p>You can then use the following command to enable POSIX extensions on a person or service account.</p>
<pre><code>kanidm [person OR service-account] posix set --name idm_admin <account_id> [--shell SHELL --gidnumber GID]
kanidm person posix set --name idm_admin demo_user
kanidm person posix set --name idm_admin demo_user --shell /bin/zsh
kanidm person posix set --name idm_admin demo_user --gidnumber 2001
kanidm service-account posix set --name idm_admin demo_account
kanidm service-account posix set --name idm_admin demo_account --shell /bin/zsh
kanidm service-account posix set --name idm_admin demo_account --gidnumber 2001
</code></pre>
<p>You can view the account's POSIX token details with:</p>
<pre><code>kanidm person posix show --name anonymous demo_user
kanidm service-account posix show --name anonymous demo_account
</code></pre>
<h3 id="enabling-posix-attributes-on-groups"><a class="header" href="#enabling-posix-attributes-on-groups">Enabling POSIX Attributes on Groups</a></h3>
<p>To enable POSIX group features and IDs on a group, you require the permission <code>idm_group_unix_extend_priv</code>.
This is provided to <code>idm_admins</code> in the default database.</p>
<p>You can then use the following command to enable POSIX extensions:</p>
<pre><code>kanidm group posix set --name idm_admin <group_id> [--gidnumber GID]
kanidm group posix set --name idm_admin demo_group
kanidm group posix set --name idm_admin demo_group --gidnumber 2001
</code></pre>
<p>You can view the group's POSIX token details with:</p>
<pre><code>kanidm group posix show --name anonymous demo_group
</code></pre>
<p>POSIX-enabled groups will supply their members as POSIX members to clients. There is no
special or separate type of membership for POSIX members required.</p>
<h2 id="troubleshooting-common-issues"><a class="header" href="#troubleshooting-common-issues">Troubleshooting Common Issues</a></h2>
<h3 id="subuid-conflicts-with-podman"><a class="header" href="#subuid-conflicts-with-podman">subuid conflicts with Podman</a></h3>
<p>Due to the way that Podman operates, in some cases using the Kanidm client inside non-root containers
with Kanidm accounts may fail with an error such as:</p>
<pre><code>ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid
</code></pre>
<p>This is a fault in Podman and how it attempts to provide non-root containers when UID/GIDs
are greater than 65535. In this case you may manually allocate your user's GID number to be
between 1000 and 65535, which may avoid triggering the fault.</p>
<td>The recycle bin is a best effort - when recovering in some cases not everything can be "put back" the way it was. Be sure to check your entries are valid once they have been revived.</td>
</tr>
</table>
<h2 id="where-is-the-recycle-bin"><a class="header" href="#where-is-the-recycle-bin">Where is the Recycle Bin?</a></h2>
<p>The recycle bin is stored as part of your main database - it is included in all
backups and restores, just like any other data. It is also replicated between
all servers.</p>
<h2 id="how-do-things-get-into-the-recycle-bin"><a class="header" href="#how-do-things-get-into-the-recycle-bin">How do Things Get Into the Recycle Bin?</a></h2>
<p>Any delete operation of an entry will cause it to be sent to the recycle bin. No
configuration or specification is required.</p>
<h2 id="how-long-do-items-stay-in-the-recycle-bin"><a class="header" href="#how-long-do-items-stay-in-the-recycle-bin">How Long Do Items Stay in the Recycle Bin?</a></h2>
<p>Currently they stay for up to 1 week before they are removed.</p>
<h2 id="managing-the-recycle-bin"><a class="header" href="#managing-the-recycle-bin">Managing the Recycle Bin</a></h2>
<p>You can display all items in the Recycle Bin with:</p>
<pre><code>kanidm recycle-bin list --name admin
</code></pre>
<p>You can show a single item with:</p>
<pre><code>kanidm recycle-bin get --name admin <uuid>
</code></pre>
<h2 id="will-you-implement--insert-protocol-here-"><a class="header" href="#will-you-implement--insert-protocol-here-">Will you implement -insert protocol here-</a></h2>
<p>Probably, on an infinite time-scale! As long as it's not Kerberos. Or involves SSL or STARTTLS. Please log an issue and start the discussion!</p>
<h2 id="why-do-the-crabs-have-knives"><a class="header" href="#why-do-the-crabs-have-knives">Why do the crabs have knives?</a></h2>
<p>Don't <a href="https://www.youtube.com/watch?v=0QaAKi0NFkA">ask</a>. They just <a href="https://www.youtube.com/shorts/WizH5ae9ozw">do</a>.</p>
<h2 id="why-wont-you-take-this-faq-thing-seriously"><a class="header" href="#why-wont-you-take-this-faq-thing-seriously">Why won't you take this FAQ thing seriously?</a></h2>
<p>Look, people just haven't asked many questions yet.</p>
<li>you've successfully connected to a host (10.0.0.14),</li>
<li>TLS worked,</li>
<li>you received the status response "true".</li>
</ol>
<p>If you see something like this:</p>
<pre><code>➜ curl -v https://idm.example.com:8443
* Trying 10.0.0.1:8443...
* connect to 10.0.0.1 port 8443 failed: Connection refused
* Failed to connect to idm.example.com port 8443 after 5 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to idm.example.com port 8443 after 5 ms: Connection refused
</code></pre>
<p>Then either your DNS is wrong (it's pointing at 10.0.0.1) or you can't connect to the server for some reason.</p>
<p>If you get errors about certificates, try adding <code>-k</code> to skip certificate verification and just test connectivity:</p>
<pre><code>curl -vk https://idm.example.com:8443
</code></pre>
<h2 id="server-things-to-check"><a class="header" href="#server-things-to-check">Server things to check</a></h2>
<ul>
<li>Has the config file got <code>bindaddress = "127.0.0.1:8443"</code> ? Change it to <code>bindaddress = "[::]:8443"</code>, so it listens on all interfaces.</li>
<li>Is there a firewall on the server?</li>
<li>If you're running in docker, did you expose the port? (<code>-p 8443:8443</code>)</li>
</ul>
<h2 id="client-things-to-check"><a class="header" href="#client-things-to-check">Client things to check</a></h2>
<p>Try running commands with <code>RUST_LOG=debug</code> to get more information:</p>
<p>You can create a supplemental scope map with:</p>
<pre><code>kanidm system oauth2 update_sup_scope_map <name> <kanidm_group_name> [scopes]...
kanidm system oauth2 update_sup_scope_map nextcloud nextcloud_admins admin
</code></pre>
<p>Once created, you can view the details of the resource server:</p>
<pre><code>kanidm system oauth2 get nextcloud
---
class: oauth2_resource_server
class: oauth2_resource_server_basic
class: object
displayname: Nextcloud Production
oauth2_rs_basic_secret: <secret>
oauth2_rs_name: nextcloud
oauth2_rs_origin: https://nextcloud.example.com
oauth2_rs_token_key: hidden
</code></pre>
<h3 id="configure-the-resource-server"><a class="header" href="#configure-the-resource-server">Configure the Resource Server</a></h3>
<p>On your resource server, you should configure the client ID as the "oauth2_rs_name" from
Kanidm, and the password to be the value shown in "oauth2_rs_basic_secret". Ensure that
the code challenge/verification method is set to S256.</p>
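<p>For reference, the S256 method means the client derives the code challenge as the base64url-encoded SHA-256 digest of the code verifier (RFC 7636). A minimal sketch with openssl, not part of Kanidm itself:</p>
<pre><code># code_challenge = BASE64URL( SHA256( code_verifier ) ), per RFC 7636.
verifier="$(openssl rand -base64 48 | tr '+/' '-_' | tr -d '=')"
challenge="$(printf '%s' "$verifier" | openssl dgst -sha256 -binary \
    | openssl base64 | tr '+/' '-_' | tr -d '=')"
echo "$challenge"   # 43 base64url characters
</code></pre>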
<p>You should now be able to test authorisation.</p>
<h2 id="resetting-resource-server-security-material"><a class="header" href="#resetting-resource-server-security-material">Resetting Resource Server Security Material</a></h2>
<p>In the case of disclosure of the basic secret, or some other security event where you may wish
to invalidate a resource server's active sessions/tokens, you can reset the secret material of
the server with:</p>
<pre><code>kanidm system oauth2 reset_secrets
</code></pre>
<p>Each resource server has unique signing keys and access secrets, so this is limited to each
resource server.</p>
<h2 id="extended-options-for-legacy-clients"><a class="header" href="#extended-options-for-legacy-clients">Extended Options for Legacy Clients</a></h2>
<p>Not all resource servers support modern standards like PKCE or ECDSA. In these situations
it may be necessary to disable these on a per-resource server basis. Disabling these on
one resource server will not affect others.</p>
<p>To disable PKCE for a resource server:</p>
<pre><code>kanidm system oauth2 warning_insecure_client_disable_pkce <resource server name>
</code></pre>
<p>If required, <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel#creating-and-deploying-your-own-authselect-profile_configuring-user-authentication-using-authselect">create a new profile</a>.</p>
<h3 id="check-posix-status-of-group-and-configuration"><a class="header" href="#check-posix-status-of-group-and-configuration">Check POSIX-status of Group and Configuration</a></h3>
<p>If authentication is failing via PAM, make sure that a list of groups is configured in <code>/etc/kanidm/unixd</code>:</p>
<p>For the unixd daemon, you can increase the logging with:</p>
<pre><code>systemctl edit kanidm-unixd.service
</code></pre>
<p>And add the lines:</p>
<pre><code>[Service]
Environment="RUST_LOG=kanidm=debug"
</code></pre>
<p>Then restart the kanidm-unixd.service.</p>
<p>The same pattern is true for the kanidm-unixd-tasks.service daemon.</p>
<p>To debug the pam module interactions add <code>debug</code> to the module arguments such as:</p>
<pre><code>auth sufficient pam_kanidm.so debug
</code></pre>
<h3 id="check-the-socket-permissions"><a class="header" href="#check-the-socket-permissions">Check the Socket Permissions</a></h3>
<p>Check that the <code>/var/run/kanidm-unixd/sock</code> has permissions mode 777, and that non-root readers can see it with
ls or other tools.</p>
<p>Ensure that <code>/var/run/kanidm-unixd/task_sock</code> has permissions mode 700, and
that it is owned by the kanidm unixd process user.</p>
<h3 id="verify-that-you-can-access-the-kanidm-server"><a class="header" href="#verify-that-you-can-access-the-kanidm-server">Verify that You Can Access the Kanidm Server</a></h3>
<p>You can check this with the client tools:</p>
<pre><code>kanidm self whoami --name anonymous
</code></pre>
<h3 id="ensure-the-libraries-are-correct"><a class="header" href="#ensure-the-libraries-are-correct">Ensure the Libraries are Correct</a></h3>
<p>You should have:</p>
<pre><code>/usr/lib64/libnss_kanidm.so.2
/usr/lib64/security/pam_kanidm.so
</code></pre>
<p>The exact path <em>may</em> change depending on your distribution; <code>pam_unix.so</code> should be co-located
with <code>pam_kanidm.so</code>. Look for it with the find command:</p>
<pre><code>find /usr/ -name 'pam_unix.so'
</code></pre>
<p>For example, on a Debian machine, it's located in <code>/usr/lib/x86_64-linux-gnu/security/</code>.</p>
<p>RADIUS offers many different types of tunnels and authentication mechanisms.
However, most client devices "out of the box" only attempt a single type when
a WPA2-Enterprise network is selected: MSCHAPv2 with PEAP. This is a
challenge-response protocol that requires clear text or Windows NT LAN
Manager (NTLM) credentials.</p>
<p>As MSCHAPv2 with PEAP is the only practical, universal RADIUS type supported
on all devices with minimal configuration, we consider it imperative
that it MUST be supported as the default. Esoteric RADIUS types can be used
as well, but testing and configuring these is up to administrators.</p>
<p>Due to this requirement, we must store the RADIUS material as clear text or
NTLM hashes. NTLM cannot seriously be considered secure: it relies on
the obsolete and deprecated MD4 cryptographic hash, providing only an
illusion of security.</p>
<p>This means Kanidm stores RADIUS credentials in the database as clear text.</p>
<p>We believe this is a reasonable decision and is a low risk to security because:</p>
<ul>
<li>The access controls around RADIUS secrets by default are strong, limited
to only self-account read and RADIUS-server read.</li>
<li>As RADIUS credentials are separate from the primary account credentials and
have no other rights, their disclosure is not going to lead to a full
account compromise.</li>
<li>Having the credentials in clear text allows a better user experience as
clients can view the credentials at any time to enroll further devices.</li>
</ul>
<h3 id="service-accounts-do-not-have-radius-access"><a class="header" href="#service-accounts-do-not-have-radius-access">Service Accounts Do Not Have RADIUS Access</a></h3>
<p>Due to the design of service accounts, they do not have access to RADIUS for credential assignment.
If you require RADIUS usage with a service account you <em>may</em> need to use EAP-TLS or some other
authentication mechanism for the service account.</p>
<h2 id="deploying-a-radius-container"><a class="header" href="#deploying-a-radius-container">Deploying a RADIUS Container</a></h2>
<p>We provide a RADIUS container that has all the needed integrations.
This container requires some cryptographic material, with the following files being in <code>/etc/raddb/certs</code>. (Modifiable in the configuration)</p>
<p>Traefik is a flexible HTTP reverse proxy webserver that can be integrated with Docker to allow dynamic configuration
and to automatically use LetsEncrypt to provide valid TLS certificates.
We can leverage this in the setup of Kanidm by specifying the configuration of Kanidm and Traefik in the same <a href="https://docs.docker.com/compose/">Docker Compose configuration</a>.</p>
<p>Create a new directory and copy the following YAML file into it as <code>docker-compose.yml</code>.
Edit the YAML to update the LetsEncrypt account email for your domain and the FQDN where Kanidm will be made available.
Ensure you adjust this file or Kanidm's configuration to have a matching HTTPS port; the line <code>traefik.http.services.kanidm.loadbalancer.server.port=8443</code> sets this on the Traefik side.</p>
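<p>For orientation, the key part of that file is the label set Traefik reads from the Kanidm service. A minimal sketch only (the hostname, entrypoint, and resolver names are placeholders; the full compose file also defines the Traefik service and volumes):</p>
<pre><code class="language-yaml">kanidm:
  image: kanidm/server:latest
  volumes:
    - kanidm_data:/data
  labels:
    - traefik.enable=true
    - traefik.http.routers.kanidm.rule=Host(`idm.example.com`)
    - traefik.http.routers.kanidm.entrypoints=websecure
    - traefik.http.routers.kanidm.tls.certresolver=letsencrypt
    - traefik.http.services.kanidm.loadbalancer.server.port=8443
</code></pre>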
<blockquote>
<p><strong>NOTE</strong> You will need to generate self-signed certificates for Kanidm, and copy the configuration into the <code>kanidm_data</code> volume. Some instructions are available in the "Installing the Server" section of this book.</p>
<li>Create a Kanidm account. Please see the section <a href="examples/../accounts_and_groups.html">Creating Accounts</a>.</li>
<li>Give the account a password. Please see the section <a href="examples/../accounts_and_groups.html">Resetting Account Credentials</a>.</li>
<li>Make the account a person. Please see the section <a href="examples/../accounts_and_groups.html">People Accounts</a>.</li>
<li>Create a Kanidm group. Please see the section <a href="examples/../accounts_and_groups.html">Creating Accounts</a>.</li>
<li>Add the account you created to the group you created. Please see the section <a href="examples/../accounts_and_groups.html">Creating Accounts</a>.</li>
</ol>
</li>
<li>
<p>Create a Kanidm OAuth2 resource:</p>
<ol>
<li>Create the OAuth2 resource for your domain. Please see the section <a href="examples/../oauth2.html">Create the Kanidm Configuration</a>.</li>
<li>Add a scope mapping from the resource you created to the group you created with the openid, profile, and email scopes. Please see the section <a href="examples/../oauth2.html">Create the Kanidm Configuration</a>.</li>
</ol>
</li>
<li>
<p>Create a <code>Cookie Secret</code> for the placeholder <code><COOKIE_SECRET></code> in step 4:</p>
<pre><code class="language-shell">docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode("utf-8"));'
</code></pre>
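<p>If you'd rather not pull a container for this, the same double-base64 secret can be generated with any local Python 3 (equivalent to the one-liner above):</p>
<pre><code class="language-python"># Generate an oauth2-proxy cookie secret: 16 random bytes, base64
# encoded twice, matching the docker one-liner above.
import base64
import secrets

def make_cookie_secret() -> str:
    # 16 bytes -> 24 base64 chars -> 32 base64 chars
    return base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode("utf-8")

print(make_cookie_secret())
</code></pre>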
</li>
<li>
<p>Create a file called <code>k8s.kanidm-nginx-auth-example.yaml</code> with the block below. Replace every <code><string></code> (drop the <code><></code>) with appropriate values:</p>
<ol>
<li><code><FQDN></code>: The fully qualified domain name with an A record pointing to your k8s ingress.</li>
<li><code><KANIDM_FQDN></code>: The fully qualified domain name of your Kanidm deployment.</li>
<li><code><COOKIE_SECRET></code>: The output from step 3.</li>
<li><code><OAUTH2_RS_NAME></code>: Please see the output from step 2.1 or <a href="examples/../oauth2.html">get</a> the OAuth2 resource you created in that step.</li>
<li><code><OAUTH2_RS_BASIC_SECRET></code>: Please see the output from step 2.1 or <a href="examples/../oauth2.html">get</a> the OAuth2 resource you created in that step.</li>
</ol>
<p>This will deploy the following to your cluster:</p>
<ul>
<li><a href="https://github.com/modem7/docker-starwars">modem7/docker-starwars</a> - An example web site.</li>
<li><a href="https://oauth2-proxy.github.io/oauth2-proxy/">OAuth2 Proxy</a> - An OAuth2 proxy is used as an OAuth2 client with NGINX <a href="https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/">Authentication Based on Subrequest Result</a>.</li>
<p>Check your deployment succeeded by running the following commands:</p>
<pre><code class="language-shell">kubectl -n kanidm-example get all
kubectl -n kanidm-example get ingress
kubectl -n kanidm-example get Certificate
</code></pre>
<p>You may use kubectl's describe and log for troubleshooting. If there are ingress errors see the Ingress NGINX documentation's <a href="https://kubernetes.github.io/ingress-nginx/troubleshooting/">troubleshooting page</a>. If there are certificate errors see the cert-manager documentation's <a href="https://cert-manager.io/docs/faq/troubleshooting/">troubleshooting page</a>.</p>
<p>Once it has finished deploying, you will be able to access it at <code>https://<FQDN></code> which will prompt you for authentication.</p>
<p>There's a PowerShell script in the root directory of the repository which, in concert with <code>openssl</code>, will generate a config file and certs for testing.</p>
<p>If you are asked to rebase your change, follow these steps:</p>
<pre><code>git checkout master
git pull
git checkout <feature-branch-name>
git rebase master
</code></pre>
<p>Then be sure to fix any merge issues or other comments as they arise. If you
have issues, you can always stop and reset with:</p>
<pre><code>git rebase --abort
</code></pre>
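<p>If you want to practise this flow safely first, you can reproduce it in a throwaway repository (a sketch only; it uses a temp directory and assumes git 2.28+ for <code>init -b</code>, and nothing touches your real checkout):</p>
<pre><code class="language-shell">set -e
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
# Identity for the throwaway commits only
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
echo base > file.txt && git add file.txt && git commit -q -m "base"
git checkout -q -b my-feature
echo feature > feature.txt && git add feature.txt && git commit -q -m "feature work"
git checkout -q master
echo upstream > upstream.txt && git add upstream.txt && git commit -q -m "upstream change"
git checkout -q my-feature
git rebase master      # replays "feature work" on top of "upstream change"
git log --format=%s    # feature work, upstream change, base
</code></pre>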
<h3 id="development-server-quickstart-for-interactive-testing"><a class="header" href="#development-server-quickstart-for-interactive-testing">Development Server Quickstart for Interactive Testing</a></h3>
<p>After getting the code, you will need a Rust environment. Please investigate
<a href="https://rustup.rs">rustup</a> for your platform to establish this.</p>
<p>Once you have the source code, you need encryption certificates to use with the server,
because without certificates, authentication will fail. </p>
<p>We recommend using <a href="https://letsencrypt.org">Let's Encrypt</a>, but if this is not
possible, please use our insecure certificate tool (<code>insecure_generate_tls.sh</code>).</p>
<p><strong>NOTE:</strong> Windows developers can use <code>insecure_generate_tls.ps1</code>, which puts everything (including a templated config file) in <code>$TEMP\kanidm</code>. Please adjust paths below to suit.</p>
<p>The insecure certificate tool creates <code>/tmp/kanidm</code> and puts some self-signed certificates there.</p>
<p>You can now build and run the server with the commands below. It will use a database
in <code>/tmp/kanidm.db</code>.</p>
<p>Create the initial database and generate an <code>admin</code> username:</p>
<pre><code>cargo run --bin kanidmd recover_account -c ./examples/insecure_server.toml admin
<snip>
Success - password reset to -> Et8QRJgQkMJu3v1AQxcbxRWW44qRUZPpr6BJ9fCGapAB9cT4
</code></pre>
<p>Record the password above, then run the server start command:</p>
<pre><code>cd kanidmd/daemon
cargo run --bin kanidmd server -c ../../examples/insecure_server.toml
</code></pre>
<p>(The server start command is also a script in <code>kanidmd/daemon/run_insecure_dev_server.sh</code>)</p>
<p>In a new terminal, you can now build and run the client tools with:</p>
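<p>For example, assuming the insecure certificate tool's default paths (the exact flags may differ between versions; check <code>cargo run --bin kanidm -- --help</code>):</p>
<pre><code>cargo run --bin kanidm -- self whoami -H https://localhost:8443 --name anonymous -C /tmp/kanidm/ca.pem
</code></pre>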
<tr><td><code>IMAGE_BASE</code></td><td>Base location of the container image.</td><td><code>kanidm</code></td></tr>
<tr><td><code>IMAGE_VERSION</code></td><td>Determines the container's tag.</td><td>None</td></tr>
<tr><td><code>CONTAINER_TOOL_ARGS</code></td><td>Specify extra options for the container build tool.</td><td>None</td></tr>
<tr><td><code>IMAGE_ARCH</code></td><td>Passed to <code>--platforms</code> when the container is built.</td><td><code>linux/amd64,linux/arm64</code></td></tr>
<tr><td><code>CONTAINER_BUILD_ARGS</code></td><td>Override default ARG settings during the container build.</td><td>None</td></tr>
<tr><td><code>CONTAINER_TOOL</code></td><td>Use an alternative container build tool.</td><td><code>docker</code></td></tr>
<tr><td><code>BOOK_VERSION</code></td><td>Sets version used when building the documentation book.</td><td><code>master</code></td></tr>
<p>Build a <code>kanidm</code> container using <code>podman</code>:</p>
<pre><code>CONTAINER_TOOL=podman make build/kanidmd
</code></pre>
<p>Build a <code>kanidm</code> container and use a redis build cache:</p>
<pre><code>CONTAINER_BUILD_ARGS='--build-arg "SCCACHE_REDIS=redis://redis.dev.blackhats.net.au:6379"' make build/kanidmd
</code></pre>
<h4 id="automatically-built-containers"><a class="header" href="#automatically-built-containers">Automatically Built Containers</a></h4>
<p>To speed up testing across platforms, we're leveraging GitHub actions to build
containers for test use.</p>
<p>Whenever code is merged with the <code>master</code> branch of Kanidm, containers are automatically
built for <code>kanidmd</code> and <code>radius</code>. Sometimes they fail to build, but we'll try to
keep them available.</p>
<p>To find information on the packages,
<a href="https://github.com/orgs/kanidm/packages?repo_name=kanidm">visit the Kanidm packages page</a>.</p>
<p>An example command for pulling and running the radius container is below. You'll
need to
<a href="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry">authenticate with the GitHub container registry first</a>.</p>
<p>Access controls are critical for a project like Kanidm to determine who can access what on other
entries. Our access controls have to be dynamic and flexible as administrators will want to define
their own access controls. In almost every call in the server, they are consulted to determine if
the action can be carried out. We also supply default access controls so that out of the box we are
a complete and useful IDM.</p>
<p>The original design of the access control system was intended to satisfy our need for flexibility,
but we have begun to discover a number of limitations. The design incorporating filter queries makes
them hard to administer, as we have not often publicly talked about the filter language and how it
works internally. Because of their use of filters, it is hard to see "what" access controls
will apply to an entry, making it hard to audit without actually calling the ACP subsystem. Currently
the access control system has a large impact on performance, accounting for nearly 35% of the time taken
in a search operation.</p>
<p>Additionally, the default access controls that we supply have started to run into limits and rough cases
due to changes as we have improved features. Some of this was due to a design that did not keep use cases
in mind during development.</p>
<p>To resolve this, a number of coordinated features need to be implemented. These
features will be documented <em>first</em>, and the use cases <em>second</em>, with each use case linking to the
features that satisfy it.</p>
<h2 id="required-features-to-satisfy"><a class="header" href="#required-features-to-satisfy">Required Features to Satisfy</a></h2>
<h3 id="refactor-of-default-access-controls"><a class="header" href="#refactor-of-default-access-controls">Refactor of default access controls</a></h3>
<p>The current default privileges will need to be refactored to improve separation of privilege
and delegation of finer access rights.</p>
<h3 id="access-profiles-target-specifiers-instead-of-filters"><a class="header" href="#access-profiles-target-specifiers-instead-of-filters">Access profiles target specifiers instead of filters</a></h3>
<p>Access profiles should target a list of groups defining who the access profile applies to, and who receives
the access it is granting.</p>
<p>Alternately an access profile could target "self" so that self-update rules can still be expressed.</p>
<p>An access profile could target an oauth2 definition for the purpose of allowing reads to members
of a set of scopes that can access the service.</p>
<p>The access profile receiver would be group based only. This allows specifying that "X group of members
can write self", meaning that any member of that group can write to themselves, and only themselves.</p>
<p>In the future we could also create different target/receiver specifiers to allow other extended management
and delegation scenarios. This makes things more flexible than the current
filter system. It also may allow filters to be simplified to remove the SELF uuid resolve step in some cases.</p>
<h3 id="filter-based-groups"><a class="header" href="#filter-based-groups">Filter based groups</a></h3>
<p>These are groups whose members are dynamically allocated based on a filter query. This allows a similar
level of dynamic group management as we have currently with access profiles, but with the additional
ability for them to be used outside of the access control context. This is the "bridge" allowing us to
move from filter based access controls to "group" targeted ones.</p>
<p>A risk of filter based groups is "infinite churn" because of recursion. This can occur if you
had a rule such as "and not memberof = self" on a dynamic group. Because of this, filters on
dynamic groups may not use "memberof" unless they are internally provided by the Kanidm project, so
that we can vet these rules as correct and free of infinite recursion scenarios.</p>
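<p>A toy sketch of the churn problem (a hypothetical evaluation model, not Kanidm code): a dynamic group whose filter is "not memberof = self" flips its entire membership on every evaluation, so it never converges:</p>
<pre><code class="language-python"># Each evaluation recomputes members as "entries NOT currently in the group".
def evaluate(group_uuid, entries, memberships):
    return {e for e in entries if group_uuid not in memberships.get(e, set())}

entries = {"alice", "bob"}
memberships = {}  # entry -> set of group uuids the entry is a member of

round1 = evaluate("dyn", entries, memberships)         # nobody is a member yet, so all match
memberships = {e: {"dyn"} for e in round1}
round2 = evaluate("dyn", entries, memberships)         # now everyone is a member, so none match
assert round1 == {"alice", "bob"} and round2 == set()  # membership oscillates forever
</code></pre>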
<h3 id="access-rules-extracted-to-aci-entries-on-targets"><a class="header" href="#access-rules-extracted-to-aci-entries-on-targets">Access rules extracted to ACI entries on targets</a></h3>
<p>The access control profiles are an excellent way to administer access where you can specify who
has access to what, but they make the reverse query harder: "who has access to this
specific entity?". Since this is needed for both search and auditing, specifying our access profiles
in the current manner, but using them to generate ACE rules on the target entry, will allow the search
and audit paths to answer the question of "who has access to this entity" much faster.</p>
<p>A <code>modify</code> profile defines the following limits:</p>
<ul>
<li>a filter for which objects can be modified,</li>
<li>a set of attributes which can be modified.</li>
</ul>
<p>A <code>modify</code> profile defines a limit on the <code>modlist</code> actions. </p>
<p>For example: you may only be allowed to ensure <code>presence</code> of a value. (Modify allowing purge, not-present, and presence).</p>
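<p>As a hypothetical sketch (the names are illustrative, not Kanidm's actual types), enforcing such a profile amounts to checking that every action/attribute pair in the modlist is allowed:</p>
<pre><code class="language-python"># A modify profile allows a set of attributes and a set of modlist actions.
def modlist_allowed(modlist, allowed_attrs, allowed_actions):
    return all(
        attr in allowed_attrs and action in allowed_actions
        for (action, attr, _value) in modlist
    )

profile_attrs = {"mobilePhoneNumber"}
profile_actions = {"present"}  # presence only; purge/remove are denied

ok = modlist_allowed([("present", "mobilePhoneNumber", "0400 000 000")], profile_attrs, profile_actions)
denied = modlist_allowed([("purge", "mobilePhoneNumber", None)], profile_attrs, profile_actions)
assert ok and not denied
</code></pre>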
<p>Content requirements (see <ahref="developers/designs/access_profiles_and_security.html#create-requirements">Create Requirements</a>) are out of scope at the moment.</p>
<p>An example:</p>
<blockquote>
<p>Alice should only be able to modify a user's password if that user is a member of the
students group.</p>
</blockquote>
<p><strong>Note:</strong> <code>modify</code> does not imply <code>read</code> of the attribute. Care should be taken that we don't disclose
the current value in any error messages if the operation fails.</p>
<p>The <code>target</code> of an access profile should be a filter defining the objects that this applies to.</p>
<p>The filter limiting what the profiles act on requires a single special operation:
the concept of "targeting self".</p>
<p>For example: we could define a rule that says "members of group X are allowed self-write to the <code>mobilePhoneNumber</code> attribute".</p>
<p>An extension to the filter code could allow an extra filter enum of <code>self</code>, that would allow this
to operate correctly, and would consume the entry in the event as the target of "Self". This would
be best implemented as a compilation of <code>self -> eq(uuid, self.uuid)</code>.</p>
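<p>A small sketch of that compilation step (a hypothetical representation; Kanidm's real filter types differ):</p>
<pre><code class="language-python"># Compile the special "self" filter term into a concrete uuid equality
# test against the identity attached to the event.
def compile_term(term, event_uuid):
    if term == "self":
        return ("eq", "uuid", event_uuid)
    return term

assert compile_term("self", "u-1234") == ("eq", "uuid", "u-1234")
assert compile_term(("eq", "name", "alice"), "u-1234") == ("eq", "name", "alice")
</code></pre>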
<p>Delete is similar to search, however there is the risk that the user may say something like:</p>
<pre><code>Pres("class").
</code></pre>
<p>Were we to approach this like search, the result would be "everything the identified user
is allowed to delete, is deleted". A consideration here is that <code>Pres("class")</code> would delete "all"
objects in the directory, but with the access control present, it would limit the deletion to the
set of allowed deletes.</p>
<p>In one sense this is correct behaviour - they were allowed to delete everything they asked to
delete. However, in another it's not valid: the request was broad and they were not allowed access
to delete everything they requested.</p>
<p>The possible abuse vector here is that an attacker could then use delete requests to enumerate the
existence of entries in the database that they do not have access to. This requires someone to have
the delete privilege which in itself is very high level of access, so this risk may be minimal.</p>
<p>So the choices are:</p>
<ol>
<li>Treat it like search and allow the user to delete what they are allowed to delete,
but ignore other objects</li>
<li>Deny the request because their delete was too broad, and they must specify a valid deletion request.</li>
</ol>
<p>Option #2 seems more correct because the <code>delete</code> request is an explicit request, not a request where
you want partial results. Imagine someone wants to delete users A and B at the same time, but only
has access to A. They want this request to fail so they KNOW B was not deleted, rather than it
succeed and have B still exist with a partial delete status.</p>
<p>However, a possible issue is that Option #2 means that a delete request of
<code>And(Eq(attr, allowed_attribute), Eq(attr, denied))</code>, which is rejected, may indicate presence of the
<code>denied</code> attribute. So option #1 may help in preventing a security risk of information disclosure.</p>
<!-- TODO
@yaleman: not always, it could indicate that the attribute doesn't exist so it's an invalid filter, but
that would depend if the response was "invalid" in both cases, or "invalid" / "refused"
-->
<p>This is also a concern for modification, where the modification attempt may or may not
fail depending on the entries and if you can/can't see them.</p>
<p><strong>IDEA:</strong> You can only <code>delete</code>/<code>modify</code> within the read scope you have. If you can't
read it (based on the read rules of <code>search</code>), you can't <code>delete</code> it. This is in addition to the filter
rules of the <code>delete</code> applying as well. So performing a <code>delete</code> of <code>Pres(class)</code>, will only delete
in your <code>read</code> scope and will never disclose if you are denied access.</p>
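<p>A sketch of that idea (again a hypothetical model): the delete candidate set is intersected with the requester's read scope before anything is removed, so unreadable entries are neither deleted nor revealed:</p>
<pre><code class="language-python"># Delete only within read scope: entries the requester cannot read are
# silently excluded, so their existence is never disclosed.
def scoped_delete(all_entries, matches_filter, readable):
    return [e for e in all_entries if readable(e) and matches_filter(e)]

entries = ["a", "b", "secret"]
deleted = scoped_delete(entries, lambda e: True, lambda e: e != "secret")
assert deleted == ["a", "b"]  # "secret" survives, invisibly to the requester
</code></pre>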
<!-- TODO
@yaleman: This goes back to the commentary on Option #2 and feels icky like SQL's `DELETE FROM <table>` just deleting everything. It's more complex from the client - you have to search for a set of things to delete - then delete them.
Explicitly listing the objects you want to delete feels.... way less bad. This applies to modifies too. 😁
<td>Here begins some early notes on the REST interface - much better ones are in the repository's designs directory.</td>
</tr>
</table>
<p>There's an endpoint at <code>/<api_version>/routemap</code> (for example, https://localhost/v1/routemap) which is based on the API routes as they get instantiated.</p>
<p>It's <em>very, very, very</em> early work, and should not be considered stable at all.</p>
<p>An example of some elements of the output is below:</p>
<li>asyncio methods for all calls, leveraging <a href="https://pypi.org/project/aiohttp/">aiohttp</a></li>
<li>every class and function is fully python typed (test by running <code>make test/pykanidm/mypy</code>)</li>
<li>test coverage for 95% of code, and most of the missing bit is just when you break things</li>
<li>loading configuration files into nice models using <a href="https://pypi.org/project/pydantic/">pydantic</a></li>
<li>basic password authentication</li>
<li>pulling RADIUS tokens</li>
</ul>
<p>TODO: a lot of things.</p>
<h2 id="setting-up-your-dev-environment"><a class="header" href="#setting-up-your-dev-environment">Setting up your dev environment.</a></h2>
<p>Setting up a dev environment can be a little complex because of the mono-repo.</p>
<ol>
<li>Install poetry: <code>python -m pip install poetry</code>. This is what we use to manage the packages, and it makes it easier to set up virtual Python environments.</li>
<li>Build the base environment. From within the <code>pykanidm</code> directory, run: <code>poetry install</code>. This'll set up a virtual environment and install all the required packages (and development-related ones).</li>
<li>Start editing!</li>
</ol>
<p>Most IDEs will be happier if you open the kanidm_rlm_python or pykanidm directories as the base you are working from, rather than the kanidm repository root, so they can auto-load integrations etc.</p>
<h2 id="building-the-documentation"><a class="header" href="#building-the-documentation">Building the documentation</a></h2>
<p>Setting up a dev environment has some extra complexity due to the mono-repo design.</p>
<ol>
<li>Install poetry: <code>python -m pip install poetry</code>. This is what we use to manage the packages, and it makes it easier to set up virtual Python environments.</li>
<li>Build the base environment. From within the kanidm_rlm_python directory, run: <code>poetry install</code></li>
<li>Install the <code>kanidm</code> python library: <code>poetry run python -m pip install ../pykanidm</code></li>
<li>Start editing!</li>
</ol>
<p>Most IDEs will be happier if you open the <code>kanidm_rlm_python</code> or <code>pykanidm</code> directories as the base you are working from, rather than the <code>kanidm</code> repository root, so they can auto-load integrations etc.</p>
<h2 id="running-a-test-radius-container"><a class="header" href="#running-a-test-radius-container">Running a test RADIUS container</a></h2>
<p>From the root directory of the Kanidm repository:</p>
<ol>
<li>Build the container - this'll give you a container image called <code>kanidm/radius</code> with the tag <code>devel</code>:</li>
<li>Once the process has completed, check the container exists in your docker environment:</li>
</ol>
<pre><code class="language-shell">➜ docker image ls kanidm/radius
REPOSITORY TAG IMAGE ID CREATED SIZE
kanidm/radius devel 5dabe894134c About a minute ago 622MB
</code></pre>
<p><em>Note:</em> If you're just looking to play with a pre-built container, images are also automatically built from the development branch and available at <code>ghcr.io/kanidm/radius:devel</code>.</p>
<ol start="3">
<li>Generate some self-signed certificates by running the script - just hit enter on all the prompts if you don't want to customise them. This'll put the files in <code>/tmp/kanidm</code>:</li>
<p>This happens in Docker currently, and here are some instructions for doing it on Ubuntu:</p>
<ol>
<li>Start in the root directory of the repository.</li>
<li>Run <code>./platform/debian/ubuntu_docker_builder.sh</code>. This'll start a container, mounting the repository in <code>~/kanidm/</code>.</li>
<li>Install the required dependencies by running <code>./platform/debian/install_deps.sh</code>.</li>
<li>Building packages uses make; get a list of targets by running <code>make -f ./platform/debian/Makefile help</code>.</li>
</ol>
<pre><code>➜ make -f platform/debian/Makefile help
debs/kanidm:
build a .deb for the Kanidm CLI
debs/kanidmd:
build a .deb for the Kanidm daemon
debs/kanidm-ssh:
build a .deb for the Kanidm SSH tools
debs/kanidm-unixd:
build a .deb for the Kanidm UNIX tools (PAM/NSS, unixd and related tools)
debs/all:
build all the debs
</code></pre>
<ol start="5">
<li>So if you wanted to build the package for the Kanidm CLI, run <code>make -f ./platform/debian/Makefile debs/kanidm</code>.</li>
<li>The package will be copied into the <code>target</code> directory of the repository on the docker host - not just in the container.</li>
</ol>
<h2 id="adding-a-package"><a class="header" href="#adding-a-package">Adding a package</a></h2>
<p>There's a set of default configuration files in <code>packaging/</code>; if you want to add a package definition, add a folder with the package name and then files in there will be copied over the top of the ones from <code>packaging/</code> on build.</p>
<p>You'll need two custom files at minimum:</p>
<ul>
<li><code>control</code> - a file containing information about the package.</li>
<li><code>rules</code> - a makefile doing all the build steps.</li>
</ul>
<p>There's a lot of other files that can go into a .deb, some handy ones are:</p>
<div class="table-wrapper"><table><thead><tr><th>Filename</th><th>What it does</th></tr></thead><tbody>
<tr><td>preinst</td><td>Runs before installation occurs</td></tr>
<tr><td>postrm</td><td>Runs after removal happens</td></tr>
<tr><td>prerm</td><td>Runs before removal happens - handy to shut down services.</td></tr>
<tr><td>postinst</td><td>Runs after installation occurs - we're using that to show notes to users</td></tr>
<li><a href="https://www.debian.org/doc/manuals/maint-guide/dreq.en.html">DH reference</a> - Explains what needs to be done for packaging (mostly).</li>
<li><a href="https://www.debian.org/doc/debian-policy/ch-controlfields">Reference for what goes in control files</a></li>