<!DOCTYPE HTML>
<html lang="en" class="sidebar-visible no-js light">
<head>
<!-- Book generated using mdBook -->
<meta charset="UTF-8">
<title>Kanidm Administration</title>
<meta name="robots" content="noindex" />
<!-- Custom HTML head -->
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#ffffff" />
<link rel="shortcut icon" href="favicon.png">
<link rel="stylesheet" href="css/variables.css">
<link rel="stylesheet" href="css/general.css">
<link rel="stylesheet" href="css/chrome.css">
<link rel="stylesheet" href="css/print.css" media="print">
<!-- Fonts -->
<link rel="stylesheet" href="FontAwesome/css/font-awesome.css">
<link rel="stylesheet" href="fonts/fonts.css">
<!-- Highlight.js Stylesheets -->
<link rel="stylesheet" href="highlight.css">
<link rel="stylesheet" href="tomorrow-night.css">
<link rel="stylesheet" href="ayu-highlight.css">
<!-- Custom theme stylesheets -->
</head>
<body>
<!-- Provide site root to javascript -->
<script type="text/javascript">
var path_to_root = "";
var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "navy" : "light";
</script>
<!-- Work around some values being stored in localStorage wrapped in quotes -->
<script type="text/javascript">
try {
var theme = localStorage.getItem('mdbook-theme');
var sidebar = localStorage.getItem('mdbook-sidebar');
if (theme.startsWith('"') && theme.endsWith('"')) {
localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
}
if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
}
} catch (e) { }
</script>
<!-- Set the theme before any content is loaded, prevents flash -->
<script type="text/javascript">
var theme;
try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
if (theme === null || theme === undefined) { theme = default_theme; }
var html = document.querySelector('html');
html.classList.remove('no-js')
html.classList.remove('light')
html.classList.add(theme);
html.classList.add('js');
</script>
<!-- Hide / unhide sidebar before it is displayed -->
<script type="text/javascript">
var html = document.querySelector('html');
var sidebar = 'hidden';
if (document.body.clientWidth >= 1080) {
try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
sidebar = sidebar || 'visible';
}
html.classList.remove('sidebar-visible');
html.classList.add("sidebar-" + sidebar);
</script>
<nav id="sidebar" class="sidebar" aria-label="Table of contents">
<div class="sidebar-scrollbox">
<ol class="chapter"><li class="chapter-item expanded "><a href="intro.html"><strong aria-hidden="true">1.</strong> Introduction to Kanidm</a></li><li class="chapter-item expanded "><a href="installing_the_server.html"><strong aria-hidden="true">2.</strong> Installing the Server</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="server_configuration.html"><strong aria-hidden="true">2.1.</strong> Server Configuration</a></li><li class="chapter-item expanded "><a href="security_hardening.html"><strong aria-hidden="true">2.2.</strong> Security Hardening</a></li></ol></li><li class="chapter-item expanded "><a href="client_tools.html"><strong aria-hidden="true">3.</strong> Client Tools</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="installing_client_tools.html"><strong aria-hidden="true">3.1.</strong> Installing client tools</a></li></ol></li><li class="chapter-item expanded "><a href="accounts_and_groups.html"><strong aria-hidden="true">4.</strong> Accounts and Groups</a></li><li class="chapter-item expanded "><a href="administrivia.html"><strong aria-hidden="true">5.</strong> Administrative Tasks</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="monitoring.html"><strong aria-hidden="true">5.1.</strong> Monitoring the platform</a></li><li class="chapter-item expanded "><a href="password_quality.html"><strong aria-hidden="true">5.2.</strong> Password Quality and Badlisting</a></li><li class="chapter-item expanded "><a href="posix_accounts.html"><strong aria-hidden="true">5.3.</strong> POSIX Accounts and Groups</a></li><li class="chapter-item expanded "><a href="ssh_key_dist.html"><strong aria-hidden="true">5.4.</strong> SSH Key Distribution</a></li><li class="chapter-item expanded "><a href="recycle_bin.html"><strong aria-hidden="true">5.5.</strong> The Recycle Bin</a></li><li class="chapter-item expanded "><a href="why_tls.html"><strong aria-hidden="true">5.6.</strong> Why TLS?</a></li></ol></li><li class="chapter-item expanded "><li class="part-title">For Developers</li><li class="chapter-item expanded "><a href="DEVELOPER_README.html"><strong aria-hidden="true">6.</strong> Developer Guide</a></li><li class="chapter-item expanded "><div><strong aria-hidden="true">7.</strong> Design Documents</div></li><li><ol class="section"><li class="chapter-item expanded "><a href="developers/designs/access_profiles_and_security.html"><strong aria-hidden="true">7.1.</strong> Access Profiles</a></li><li class="chapter-item expanded "><a href="developers/designs/rest_interface.html"><strong aria-hidden="true">7.2.</strong> REST Interface</a></li></ol></li><li class="chapter-item expanded "><a href="developers/python.html"><strong aria-hidden="true">8.</strong> Python Module</a></li><li class="chapter-item expanded "><a href="developers/radius.html"><strong aria-hidden="true">9.</strong> RADIUS Integration</a></li><li class="chapter-item expanded affix "><li class="part-title">Integrations</li><li class="chapter-item expanded "><a href="integrations/oauth2.html"><strong aria-hidden="true">10.</strong> Oauth2</a></li><li class="chapter-item expanded "><a href="integrations/pam_and_nsswitch.html"><strong aria-hidden="true">11.</strong> PAM and nsswitch</a></li><li class="chapter-item expanded "><a href="integrations/radius.html"><strong aria-hidden="true">12.</strong> RADIUS</a></li><li class="chapter-item expanded "><a href="integrations/ldap.html"><strong aria-hidden="true">13.</strong> LDAP</a></li><li class="chapter-item expanded 
affix "><li class="part-title">Integration Examples</li><li class="chapter-item expanded "><a href="examples/k8s_ingress_example.html"><strong aria-hidden="true">14.</strong> Kubernetes Ingress</a></li><li class="chapter-item expanded affix "><li class="part-title">Packaging</li><li class="chapter-item expanded "><a href="packaging.html"><strong aria-hidden="true">15.</strong> Packaging</a></li><li class="chapter-item expanded "><a href="packaging_debs.html"><strong aria-hidden="true">16.</strong> Debian/Ubuntu</a></li></ol>
</div>
<div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
</nav>
<div id="page-wrapper" class="page-wrapper">
<div class="page">
<div id="menu-bar-hover-placeholder"></div>
<div id="menu-bar" class="menu-bar sticky bordered">
<div class="left-buttons">
<button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
<i class="fa fa-bars"></i>
</button>
<button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
<i class="fa fa-paint-brush"></i>
</button>
<ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
<li role="none"><button role="menuitem" class="theme" id="light">Light (default)</button></li>
<li role="none"><button role="menuitem" class="theme" id="rust">Rust</button></li>
<li role="none"><button role="menuitem" class="theme" id="coal">Coal</button></li>
<li role="none"><button role="menuitem" class="theme" id="navy">Navy</button></li>
<li role="none"><button role="menuitem" class="theme" id="ayu">Ayu</button></li>
</ul>
<button id="search-toggle" class="icon-button" type="button" title="Search. (Shortkey: s)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="S" aria-controls="searchbar">
<i class="fa fa-search"></i>
</button>
</div>
<h1 class="menu-title">Kanidm Administration</h1>
<div class="right-buttons">
<a href="print.html" title="Print this book" aria-label="Print this book">
<i id="print-button" class="fa fa-print"></i>
</a>
<a href="https://github.com/kanidm/kanidm" title="Git repository" aria-label="Git repository">
<i id="git-repository-button" class="fa fa-github"></i>
</a>
</div>
</div>
<div id="search-wrapper" class="hidden">
<form id="searchbar-outer" class="searchbar-outer">
<input type="search" id="searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="searchresults-outer" aria-describedby="searchresults-header">
</form>
<div id="searchresults-outer" class="searchresults-outer hidden">
<div id="searchresults-header" class="searchresults-header"></div>
<ul id="searchresults">
</ul>
</div>
</div>
<!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
<script type="text/javascript">
document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
});
</script>
<div id="content" class="content">
<main>
<h1 id="introduction-to-kanidm"><a class="header" href="#introduction-to-kanidm">Introduction to Kanidm</a></h1>
<p>Kanidm is an identity management server, acting as an authority on account information, authentication
and authorisation within a technical environment.</p>
<p>The intent of the Kanidm project is to:</p>
<ul>
<li>Provide a single truth source for accounts, groups and privileges.</li>
<li>Enable integrations to systems and services so they can authenticate accounts.</li>
<li>Make system, network, application and web authentication easy and accessible.</li>
</ul>
<table>
<tr>
<td rowspan=2><img src="/images/kani-warning.png" alt="Kani Warning" /></td>
<td><strong>NOTICE</strong></td>
</tr>
<tr>
<td>This is a pre-release project. While all effort has been made to ensure no data loss or security flaws, you should still be careful when using this in your environment.</td>
</tr>
</table>
<h2 id="library-documentation"><a class="header" href="#library-documentation">Library documentation</a></h2>
<p>Looking for the <code>rustdoc</code> documentation for the libraries themselves? <a href="https://kanidm.com/documentation/">Click here!</a></p>
<h2 id="why-do-i-want-kanidm"><a class="header" href="#why-do-i-want-kanidm">Why do I want Kanidm?</a></h2>
<p>Whether you work in a business, a volunteer organisation, or are an enthusiast who manages
their personal services, you need methods of authenticating and identifying
to your systems, and subsequently, ways to determine what authorisation and privileges you have
while accessing these systems.</p>
<p>We've probably all been in workplaces where you end up with multiple accounts on various
systems - one for a workstation, different SSH keys for different tasks, maybe some shared
account passwords. Not only is it difficult for people to manage all these different credentials
and what they have access to, but it also means that sometimes these credentials have more
access or privilege than they require.</p>
<p>Kanidm acts as a central authority of accounts in your organisation and allows each account to associate
many devices and credentials with different privileges. An example of how this looks:</p>
<pre><code> ┌──────────────────┐
┌┴─────────────────┐│
│ ││
┌───────────────┬───▶│ Kanidm │◀─────┬─────────────────────────┐
│ │ │ ├┘ │ │
│ │ └──────────────────┘ │ Verify
Account Data │ ▲ │ Radius
References │ │ │ Password
│ │ │ │ │
│ │ │ │ ┌────────────┐
│ │ │ │ │ │
│ │ │ Verify │ RADIUS │
┌────────────┐ │ Retrieve SSH Application │ │
│ │ │ Public Keys Password └────────────┘
│ Database │ │ │ │ ▲
│ │ │ │ │ │
└────────────┘ │ │ │ ┌────────┴──────┐
▲ │ │ │ │ │
│ │ │ │ │ │
┌────────────┐ │ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
│ │ │ │ │ │ │ │ │ │ │
│ Web Site │ │ │ SSH │ │ Email │ │ WIFI │ │ VPN │
│ │ │ │ │ │ │ │ │ │ │
└────────────┘ │ └────────────┘ └────────────┘ └────────────┘ └────────────┘
▲ │ ▲ ▲ ▲ ▲
│ │ │ │ │ │
│ │ │ │ │ │
│ Login To │ │ │ │
SSO/Oauth Oauth/SSO SSH Keys Application Radius Radius
│ │ │ Password Password Password
│ │ │ │ │ │
│ │ │ │ │ │
│ │ │ │ │ │
│ │ ┌──────────┐ │ │ │
│ │ │ │ │ │ │
└──────────────┴────────│ Laptop │──────────┴───────────────┴───────────────┘
│ │
└──────────┘
┌──────────┐
│ You │
└──────────┘
</code></pre>
<p>A key design goal is that you authenticate with your device in some manner, and then your device will
continue to authenticate you in the future. Each of these different types of credentials, from SSH keys,
application passwords, to RADIUS passwords and others, are &quot;things your device knows&quot;. Each password
has limited capability, and can only access that exact service or resource.</p>
<p>This helps improve security; a compromise of the service or the network transmission does not
grant you unlimited access to your account and all its privileges. As the credentials are specific
to a device, if a device is compromised you can revoke its associated credentials. If a
specific service is compromised, only the credentials for that service need to be revoked.</p>
<p>Because this model centres on the device and on per-service credentials, Kanidm's
workflows and automation are designed to reduce human handling of credentials. An example of this
is the use of QR codes with deployment profiles to automatically enroll wireless credentials.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="installing-the-server"><a class="header" href="#installing-the-server">Installing the Server</a></h1>
<blockquote>
<p><strong>NOTE</strong> Our preferred deployment method is in containers, and this documentation assumes you're running in docker. Kanidm will also run on traditional compute: server builds are available for multiple platforms, or you can build the binaries yourself if you prefer this option.</p>
</blockquote>
<p>Currently we have docker images for the server components. They can be found at:</p>
<ul>
<li><a href="https://hub.docker.com/r/kanidm/server">https://hub.docker.com/r/kanidm/server</a></li>
<li><a href="https://hub.docker.com/r/kanidm/radius">https://hub.docker.com/r/kanidm/radius</a></li>
</ul>
<p>You can fetch these by running the commands:</p>
<pre><code>docker pull kanidm/server:latest
docker pull kanidm/radius:latest
</code></pre>
<p>If you wish to use an x86_64 cpu-optimised version (see the CPU section under System Requirements below), you should use:</p>
<pre><code>docker pull kanidm/server:x86_64_latest
</code></pre>
<p>You may need to adjust your example commands throughout this document to suit.</p>
<h2 id="development-version"><a class="header" href="#development-version">Development Version</a></h2>
<p>If you are interested in running the latest code from development, you can do this by changing the
docker tag to <code>kanidm/server:devel</code> or <code>kanidm/server:x86_64_v3_devel</code> instead.</p>
<h2 id="system-requirements"><a class="header" href="#system-requirements">System Requirements</a></h2>
<h4 id="cpu"><a class="header" href="#cpu">CPU</a></h4>
<p>If you are using the x86_64 cpu-optimised version, you must have a CPU that is from 2013 or newer
(Haswell, Ryzen). The following instruction flags are used.</p>
<pre><code>cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2,
bmi, bmi2, f16c, fma, lzcnt, movbe, xsave
</code></pre>
<p>Older or unsupported CPUs may raise a SIGILL (Illegal Instruction) on hardware that is not supported
by the project.</p>
<p>In this case, you should use the standard server:latest image.</p>
<p>In the future we may apply a baseline of flags as a requirement for x86_64 for the server:latest
image. These flags will be:</p>
<pre><code>cmov, cx8, fxsr, mmx, sse, sse2
</code></pre>
<h4 id="memory"><a class="header" href="#memory">Memory</a></h4>
<p>Kanidm extensively uses memory caching, trading memory consumption to improve parallel throughput.
You should expect to see 64KB of RAM used per entry in your database, depending on cache tuning and settings.</p>
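<p>As a rough worked example under this estimate, a 10,000 entry database may consume on the order of 10,000 × 64KB ≈ 640MB of RAM for caching, before any cache tuning is applied.</p>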
<h4 id="disk"><a class="header" href="#disk">Disk</a></h4>
<p>You should expect to use up to 8KB of disk per entry you plan to store. At this estimate, a 10,000 entry
database will consume up to 80MB, and a 100,000 entry database up to 800MB.</p>
<p>For best performance, you should use non-volatile memory express (NVME), or other Flash storage media.</p>
<h2 id="tls"><a class="header" href="#tls">TLS</a></h2>
<p>You'll need a volume where you can place configuration, certificates, and the database:</p>
<pre><code>docker volume create kanidmd
</code></pre>
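<p>If you want to confirm where docker placed this volume on the host (for example, to copy files into it directly), you can inspect it with a standard docker command:</p>
<pre><code>docker volume inspect kanidmd
</code></pre>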
<p>You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring
Transport Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained in <a href="./why_tls.html">why tls</a>. In summary, TLS is our root of trust between the
server and clients, and a critical element of ensuring a secure system.</p>
<p>The key.pem should be a single PEM private key, with no encryption. The file content should be
similar to:</p>
<pre><code>-----BEGIN RSA PRIVATE KEY-----
MII...&lt;base64&gt;
-----END RSA PRIVATE KEY-----
</code></pre>
<p>The chain.pem is a series of PEM formatted certificates. The leaf certificate, or the certificate
that matches the private key should be the first certificate in the file. This should be followed
by the series of intermediates, and the final certificate should be the CA root. For example:</p>
<pre><code>-----BEGIN CERTIFICATE-----
&lt;leaf certificate&gt;
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
&lt;intermediate certificate&gt;
-----END CERTIFICATE-----
[ more intermediates if needed ]
-----BEGIN CERTIFICATE-----
&lt;ca/root certificate&gt;
-----END CERTIFICATE-----
</code></pre>
<blockquote>
<p><strong>HINT</strong>
If you are using Let's Encrypt the provided files &quot;fullchain.pem&quot; and &quot;privkey.pem&quot; are already
correctly formatted as required for Kanidm.</p>
</blockquote>
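<p>As a minimal sketch, assuming a standard certbot layout under <code>/etc/letsencrypt/live/</code> and a working directory that you will later mount into the volume, you could copy and rename the files as follows (paths are illustrative - adjust them to your domain and environment):</p>
<pre><code>cp /etc/letsencrypt/live/idm.example.com/fullchain.pem /my/host/path/work/chain.pem
cp /etc/letsencrypt/live/idm.example.com/privkey.pem /my/host/path/work/key.pem
</code></pre>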
<p>You can validate that the leaf certificate matches the key with the command:</p>
<pre><code># openssl rsa -noout -modulus -in key.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
# openssl x509 -noout -modulus -in chain.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
</code></pre>
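<p>The modulus check above applies to RSA keys. If you are using another key type (for example ECDSA), one alternative sketch is to compare the hashes of the public key derived from the private key and the public key embedded in the leaf certificate - the two values should match:</p>
<pre><code>openssl pkey -in key.pem -pubout | openssl sha1
openssl x509 -noout -pubkey -in chain.pem | openssl sha1
</code></pre>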
<p>If your chain.pem contains the CA certificate, you can validate this file with the command:</p>
<pre><code>openssl verify -CAfile chain.pem chain.pem
</code></pre>
<p>If your chain.pem does not contain the CA certificate (Let's Encrypt chains do not contain the CA
for example) then you can validate with this command.</p>
<pre><code>openssl verify -untrusted fullchain.pem fullchain.pem
</code></pre>
<blockquote>
<p><strong>NOTE</strong> Here the &quot;-untrusted&quot; flag means that a list of further certificates needed to build the chain up
to the root is provided, while the system CA roots are still consulted. Verification is NOT bypassed
or allowed to be invalid.</p>
</blockquote>
<p>If these verifications pass you can now use these certificates with Kanidm. To put the certificates
in place you can use a shell container that mounts the volume such as:</p>
<pre><code>docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c &quot;cp /work/* /data/&quot;
</code></pre>
<p>OR for a shell into the volume:</p>
<pre><code>docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
</code></pre>
<h1 id="continue-on-to-configuring-the-server"><a class="header" href="#continue-on-to-configuring-the-server">Continue on to <a href="server_configuration.html">Configuring the Server</a></a></h1>
<div style="break-before: page; page-break-before: always;"></div><h2 id="configuring-the-server"><a class="header" href="#configuring-the-server">Configuring the Server</a></h2>
<h3 id="configuring-servertoml"><a class="header" href="#configuring-servertoml">Configuring server.toml</a></h3>
<p>You need a configuration file in the volume named <code>server.toml</code>. (Within the container it should be <code>/data/server.toml</code>.) Its contents should be as follows:</p>
<pre><code># The webserver bind address. Will use HTTPS if tls_*
# is provided.
# Defaults to &quot;127.0.0.1:8443&quot;
bindaddress = &quot;[::]:8443&quot;
#
# The read-only ldap server bind address. The server
# will use LDAPS if tls_* is provided.
# Defaults to &quot;&quot; (disabled)
# ldapbindaddress = &quot;[::]:3636&quot;
#
# The path to the kanidm database.
db_path = &quot;/data/kanidm.db&quot;
#
# If you have a known filesystem, kanidm can tune sqlite
# to match. Valid choices are:
# [zfs, other]
# If you are unsure about this leave it as the default
# (other). After changing this
# value you must run a vacuum task.
# - zfs:
# * sets sqlite pagesize to 64k. You must set
# recordsize=64k on the zfs filesystem.
# - other:
# * sets sqlite pagesize to 4k, matching most
# filesystems block sizes.
# db_fs_type = &quot;zfs&quot;
#
# The number of entries to store in the in-memory cache.
# Minimum value is 256. If unset
# an automatic heuristic is used to scale this.
# db_arc_size = 2048
#
# TLS chain and key in pem format. Both must be
# commented, or both must be present
# tls_chain = &quot;/data/chain.pem&quot;
# tls_key = &quot;/data/key.pem&quot;
#
# The log level of the server. May be default, verbose,
# perfbasic, perffull
# Defaults to &quot;default&quot;
# log_level = &quot;default&quot;
#
# The DNS domain name of the server. This is used in a
# number of security-critical contexts
# such as webauthn, so it *must* match your DNS
# hostname. It is used to create
# security principal names such as `william@idm.example.com`
# so that in a (future)
# trust configuration it is possible to have unique Service
# Principal Names (spns) throughout the topology.
# ⚠️ WARNING ⚠️
# Changing this value WILL break many types of registered
# credentials for accounts
# including but not limited to webauthn, oauth tokens, and more.
# If you change this value you *must* run
# `kanidmd domain_name_change` immediately after.
domain = &quot;idm.example.com&quot;
#
# The origin for webauthn. This is the url to the server,
# with the port included if
# it is non-standard (any port except 443). This must match
# or be a descendent of the
# domain name you configure above. If these two items are
# not consistent, the server WILL refuse to start!
# origin = &quot;https://idm.example.com&quot;
origin = &quot;https://idm.example.com:8443&quot;
#
# The role of this server. This affects available features
# and how replication may interact.
# Valid roles are:
# - WriteReplica
# This server provides all functionality of Kanidm. It
# allows authentication, writes, and
# the web user interface to be served.
# - WriteReplicaNoUI
# This server is the same as a WriteReplica, but does NOT
# offer the web user interface.
# - ReadOnlyReplica
# This server will not accept writes initiated by clients. It
# supports authentication and reads,
# and must have a replication agreement as a source of
# its data.
# Defaults to &quot;WriteReplica&quot;.
# role = &quot;WriteReplica&quot;
#
# [online_backup]
# The path to the output folder for online backups
# path = &quot;/var/lib/kanidm/backups/&quot;
# The schedule to run online backups (see https://crontab.guru/)
# every day at 22:00 UTC (default)
# schedule = &quot;00 22 * * *&quot;
# four times a day at 3 minutes past the hour, every 6th hours
# schedule = &quot;03 */6 * * *&quot;
# Number of backups to keep (default 7)
# versions = 7
#
</code></pre>
<p>An example is located in <a href="../../examples/server.toml">examples/server.toml</a>.</p>
<blockquote>
<p><strong>WARNING</strong> You MUST set the <code>domain</code> name correctly, aligned with your <code>origin</code>, else the server
may refuse to start, or some features (e.g. webauthn, oauth) may not work correctly!</p>
</blockquote>
<h3 id="check-the-configuration-is-valid"><a class="header" href="#check-the-configuration-is-valid">Check the configuration is valid.</a></h3>
<p>You should test your configuration is valid before you proceed.</p>
<pre><code>docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
</code></pre>
<h3 id="default-admin-account"><a class="header" href="#default-admin-account">Default Admin Account</a></h3>
<p>Then you can set up the initial admin account and initialise the database into your volume.</p>
<pre><code>docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd recover_account -c /data/server.toml admin
</code></pre>
<h3 id="run-the-server"><a class="header" href="#run-the-server">Run the Server</a></h3>
<p>Now we can run the server so that it can accept connections. This defaults to using <code>-c /data/server.toml</code></p>
<pre><code>docker run -p 8443:8443 -v kanidmd:/data kanidm/server:latest
</code></pre>
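<p>For an instance you intend to leave running, you will likely want to run the container detached, named, and with a restart policy. A minimal sketch using only standard docker options:</p>
<pre><code>docker run -d --name kanidmd --restart unless-stopped \
    -p 8443:8443 -v kanidmd:/data kanidm/server:latest
</code></pre>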
<div style="break-before: page; page-break-before: always;"></div><h1 id="security-hardening"><a class="header" href="#security-hardening">Security Hardening</a></h1>
<p>Kanidm ships with a secure-by-default configuration, however that is only as strong
as the platform that Kanidm operates in. This could be your container environment
or your Unix-like system.</p>
<p>This chapter will detail a number of warnings and security practices you should
follow to ensure that Kanidm operates in a secure environment.</p>
<p>The main server is a high-value target for a potential attack, as Kanidm serves as
the authority on identity and authorisation in a network. Compromise of the Kanidm
server is equivalent to a full-network takeover, also known as &quot;game over&quot;.</p>
<p>The unixd resolver is also a high-value target, as compromising it can allow unauthorised
access to a server, interception of communications to the server, and more. It must also be protected
carefully.</p>
<p>For this reason, Kanidm's components must be protected carefully. Kanidm avoids many classic
attacks by being developed in a memory safe language, but risks still exist.</p>
<h2 id="startup-warnings"><a class="header" href="#startup-warnings">Startup Warnings</a></h2>
<p>At startup Kanidm will warn you if the environment it is running in is suspicious or
has risks. For example:</p>
<pre><code>kanidmd server -c /tmp/server.toml
WARNING: permissions on /tmp/server.toml may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: /tmp/server.toml has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: /tmp/server.toml owned by the current uid, which may allow file permission changes. This could be a security risk ...
WARNING: permissions on ../insecure/ca.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/cert.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/key.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: ../insecure/key.pem has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: DB folder /tmp has 'everyone' permission bits in the mode. This could be a security risk ...
</code></pre>
<p>Each warning highlights an issue that may exist in your environment. It is not possible for us to
prescribe an exact configuration that may secure your system. This is why we only present
possible risks.</p>
<h3 id="should-be-read-only-to-running-uid"><a class="header" href="#should-be-read-only-to-running-uid">Should be Read-only to Running UID</a></h3>
<p>Files, such as configuration files, should be read-only to the UID of the Kanidm daemon. This way, if an attacker is
able to gain code execution, they are unable to modify the configuration, over-write
files in other locations, or tamper with the system's configuration.</p>
<p>This can be prevented by changing the file's ownership to another user, or by removing &quot;write&quot; bits
from the group.</p>
<h3 id="everyone-permission-bits-in-the-mode"><a class="header" href="#everyone-permission-bits-in-the-mode">'everyone' Permission Bits in the Mode</a></h3>
<p>This means that given a permission mask, &quot;everyone&quot; or all users of the system can read, write or
execute the content of this file. This may mean that if an account on the system is compromised the
attacker can read Kanidm content and may be able to further attack the system as a result.</p>
<p>This can be prevented by removing &quot;everyone&quot; execute bits from parent directories containing the
configuration, and removing &quot;everyone&quot; bits from the files in question.</p>
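<p>As a minimal sketch, assuming the configuration lives under <code>/etc/kanidm</code>, these bits can be removed with:</p>
<pre><code># remove the 'everyone' execute bit from the parent directory
chmod o-x /etc/kanidm
# remove all 'everyone' bits from the configuration file
chmod o-rwx /etc/kanidm/server.toml
</code></pre>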
<h3 id="owned-by-the-current-uid-which-may-allow-file-permission-changes"><a class="header" href="#owned-by-the-current-uid-which-may-allow-file-permission-changes">Owned by the Current UID, Which May Allow File Permission Changes</a></h3>
<p>File permissions in UNIX systems are a discretionary access control system, which means the
named UID owner is able to further modify the access of a file regardless of the current
settings. For example:</p>
<pre><code>[william@amethyst 12:25] /tmp &gt; touch test
[william@amethyst 12:25] /tmp &gt; ls -al test
-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp &gt; chmod 400 test
[william@amethyst 12:25] /tmp &gt; ls -al test
-r-------- 1 william wheel 0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp &gt; chmod 644 test
[william@amethyst 12:26] /tmp &gt; ls -al test
-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test
</code></pre>
<p>Notice that even though the file was set to &quot;read only&quot; for william, with no permissions for any
other users, user &quot;william&quot; can still change the bits to add write permissions back or permissions
for other users.</p>
<p>This can be prevented by making the file owner a different UID than the running process for Kanidm.</p>
<h3 id="a-secure-example"><a class="header" href="#a-secure-example">A Secure Example</a></h3>
<p>Between these three issues it can be hard to see a possible strategy to secure files, however
one way exists - group read permissions. The most effective method to secure resources for Kanidm
is to set configurations to:</p>
<pre><code>[william@amethyst 12:26] /etc/kanidm &gt; ls -al server.toml
-r--r----- 1 root kanidm 212 28 Jul 16:53 server.toml
</code></pre>
<p>The Kanidm server should be run as &quot;kanidm:kanidm&quot; with the appropriate user and user private
group created on your system. This applies to unixd configuration as well.</p>
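<p>A minimal sketch of creating such a user and group and applying this ownership follows; the exact commands and the nologin shell path vary by distribution, so treat these as illustrative:</p>
<pre><code># create a dedicated system user and group named kanidm
groupadd --system kanidm
useradd --system --gid kanidm --shell /usr/sbin/nologin kanidm
# configuration owned by root, readable by the kanidm group only
chown root:kanidm /etc/kanidm/server.toml
chmod 440 /etc/kanidm/server.toml
</code></pre>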
<p>For the database your data folder should be:</p>
<pre><code>[root@amethyst 12:38] /data/kanidm &gt; ls -al .
total 1064
drwxrwx--- 3 root kanidm 96 29 Jul 12:38 .
-rw-r----- 1 kanidm kanidm 544768 29 Jul 12:38 kanidm.db
</code></pre>
<p>This means 770 root:kanidm. This allows Kanidm to create new files in the folder, but prevents
Kanidm from being able to change the permissions of the folder. Because the folder does not have
&quot;everyone&quot; mode bits, the content of the database is secure because other users can not cd into or read
from the directory.</p>
<p>Configurations for clients, such as /etc/kanidm/config, should be secured with read-only permissions
and owned by root:</p>
<pre><code>[william@amethyst 12:26] /etc/kanidm &gt; ls -al config
-r--r--r-- 1 root root 38 10 Jul 10:10 config
</code></pre>
<p>This file should be &quot;everyone&quot;-readable, which is why the bits are defined as such.</p>
<blockquote>
<p>NOTE: Why do you use 440 or 444 modes?</p>
<p>A bug exists in the implementation of readonly() in Rust that performs this check as &quot;does a write
bit exist for any user&quot; rather than &quot;can the current UID write the file?&quot;. This distinction is subtle
but it affects the check. We don't believe this is a significant issue though, because
setting these to 440 and 444 helps to prevent accidental changes by an administrator anyway.</p>
</blockquote>
<h2 id="running-as-non-root-in-docker"><a class="header" href="#running-as-non-root-in-docker">Running as Non-root in docker</a></h2>
<p>The commands provided in this book will run kanidmd as &quot;root&quot; in the container to make the onboarding
smoother. However, this is not recommended in production for security reasons.</p>
<p>You should allocate unique UID and GID numbers for the service to run as on your host
system. In this example we use <code>1000:1000</code>.</p>
<p>You will need to adjust the permissions on the <code>/data</code> volume to ensure that the process
can manage the files. Kanidm requires the ability to write to the <code>/data</code> directory to create
the sqlite files. This UID/GID number should match the above. You could consider the following
changes to help isolate these changes:</p>
<pre><code>docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
mkdir /data/db/
chown 1000:1000 /data/db/
chmod 750 /data/db/
sed -i -e &quot;s/db_path.*/db_path = \&quot;\/data\/db\/kanidm.db\&quot;/g&quot; /data/server.toml
chown root:root /data/server.toml
chmod 644 /data/server.toml
</code></pre>
<p>Note that the example commands all run inside the docker container.</p>
<p>You can then use this to run the Kanidm server in docker with a user:</p>
<pre><code>docker run --rm -i -t -u 1000:1000 -v kanidmd:/data kanidm/server:latest /sbin/kanidmd ...
</code></pre>
<blockquote>
<p><strong>HINT</strong>
You need to use the UID or GID number with the <code>-u</code> argument, as the container can't resolve
usernames from the host system.</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="client-tools"><a class="header" href="#client-tools">Client tools</a></h1>
<p>To interact with Kanidm as an administrator, you'll need to use our command
line tools. If you haven't installed them yet,
<a href="installing_client_tools.html">install them now</a>.</p>
<h2 id="kanidm-configuration"><a class="header" href="#kanidm-configuration">Kanidm configuration</a></h2>
<p>You can configure <code>kanidm</code> to help make commands simpler by modifying <code>~/.config/kanidm</code>
or <code>/etc/kanidm/config</code>.</p>
<pre><code>uri = &quot;https://idm.example.com&quot;
verify_ca = true|false
verify_hostnames = true|false
ca_path = &quot;/path/to/ca.pem&quot;
</code></pre>
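<p>For example, a filled-in configuration that verifies both the CA and hostnames against a locally stored CA certificate might look like the following (the values are illustrative and should match your deployment):</p>
<pre><code>uri = &quot;https://idm.example.com&quot;
verify_ca = true
verify_hostnames = true
ca_path = &quot;/etc/kanidm/ca.pem&quot;
</code></pre>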
<p>Once configured, you can test this with:</p>
<pre><code>kanidm self whoami --name anonymous
</code></pre>
<h2 id="session-management"><a class="header" href="#session-management">Session Management</a></h2>
<p>To authenticate as a user (for use with the command line), you need to use the <code>login</code> command
to establish a session token.</p>
<pre><code>kanidm login --name USERNAME
kanidm login --name admin
</code></pre>
<p>Once complete, you can use <code>kanidm</code> without re-authenticating for a period of time for administration.</p>
<p>You can list active sessions with:</p>
<pre><code>kanidm session list
</code></pre>
<p>Sessions will expire after a period of time (by default 1 hour). To remove these expired sessions
locally you can use:</p>
<pre><code>kanidm session cleanup
</code></pre>
<p>To log out of a session:</p>
<pre><code>kanidm logout --name USERNAME
kanidm logout --name admin</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="installing-client-tools"><a class="header" href="#installing-client-tools">Installing Client Tools</a></h1>
<blockquote>
<p><strong>NOTE</strong> As this project is in a rapid development phase, running different
release versions will likely present incompatibilities. Ensure you're running
matching release versions of client and server binaries. If you have any issues,
check that you are running the latest software.</p>
</blockquote>
<h2 id="from-packages"><a class="header" href="#from-packages">From packages</a></h2>
<p>Kanidm currently supports the following Linux distributions:</p>
<ul>
<li>OpenSUSE Tumbleweed</li>
<li>OpenSUSE Leap 15.3/15.4</li>
<li>Fedora 34/35</li>
<li>CentOS Stream 9</li>
</ul>
<p>The <code>kanidm</code> client has been built and tested on Windows, but is not (yet) packaged routinely.</p>
<h3 id="opensuse-tumbleweed"><a class="header" href="#opensuse-tumbleweed">OpenSUSE Tumbleweed</a></h3>
<p>Kanidm has been part of OpenSUSE Tumbleweed since October 2020. You can install
the clients with:</p>
<pre><code>zypper ref
zypper in kanidm-clients
</code></pre>
<h3 id="opensuse-leap-153154"><a class="header" href="#opensuse-leap-153154">OpenSUSE Leap 15.3/15.4</a></h3>
<p>Leap 15.3/15.4 does not have full Kanidm support. For an experimental client, you can
try the development repository. Using zypper you can add the repository with:</p>
<pre><code>zypper ar -f obs://network:idm network_idm
</code></pre>
<p>Then you need to refresh your metadata and install the clients.</p>
<pre><code>zypper ref
zypper in kanidm-clients
</code></pre>
<h3 id="fedora--centos-stream"><a class="header" href="#fedora--centos-stream">Fedora / Centos Stream</a></h3>
<p>Fedora has limited support through the development repository. You need to add the repository
metadata into the correct directory:</p>
<pre><code>cd /etc/yum.repos.d
# Fedora 34
wget https://download.opensuse.org/repositories/network:/idm/Fedora_34/network:idm.repo
# Fedora 35
wget https://download.opensuse.org/repositories/network:/idm/Fedora_35/network:idm.repo
# Centos Stream 9
wget https://download.opensuse.org/repositories/network:/idm/CentOS_9_Stream/network:idm.repo
</code></pre>
<p>You can then install with:</p>
<pre><code>dnf install kanidm-clients
</code></pre>
<h2 id="from-source-cli-only-not-recommended"><a class="header" href="#from-source-cli-only-not-recommended">From source (CLI only, not recommended)</a></h2>
<p>After you check out the source (see <a href="https://github.com/kanidm/kanidm">GitHub</a>), navigate to:</p>
<pre><code>cd kanidm_tools
cargo install --path .
</code></pre>
<h2 id="checking-that-the-tools-work"><a class="header" href="#checking-that-the-tools-work">Checking that the tools work</a></h2>
<p>Now you can check your instance is working. You may need to provide a CA certificate for verification
with the -C parameter:</p>
<pre><code>kanidm login --name anonymous
kanidm self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous
kanidm self whoami -H https://localhost:8443 --name anonymous
</code></pre>
<p>Now you can take some time to look at what commands are available - please
<a href="https://github.com/kanidm/kanidm#getting-in-contact--questions">ask for help at any time</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="accounts-and-groups"><a class="header" href="#accounts-and-groups">Accounts and groups</a></h1>
<p>Accounts and Groups are the primary reasons for Kanidm to exist. Kanidm is optimised as a repository
for these data. As a result, there are many concepts and important details to understand.</p>
<h2 id="default-accounts-and-groups"><a class="header" href="#default-accounts-and-groups">Default Accounts and Groups</a></h2>
<p>Kanidm ships with a number of default accounts and groups. This is to give you the best
out-of-box experience possible, as well as supplying best practice examples related to modern
Identity Management (IDM) systems.</p>
<p>The system administrator account has limited privileges and manages only high-privilege accounts
and services. This is to help separate system administration from identity administration actions.
An idm_admin user is also provided that is only for management of accounts and groups (see
<a href="accounts_and_groups.html#recovering-the-initial-idm_admin-account">Recovering the Initial idm_admin Account</a>
to learn how to access it).</p>
<p>Both the admin and the idm_admin user should <em>NOT</em> be used for daily activities - they exist for initial
system configuration, and for disaster recovery scenarios. You should delegate permissions
as required to named user accounts instead.</p>
<p>The majority of the provided content consists of privilege groups that grant rights over Kanidm
administrative actions. These include groups for account management, person management (personal
and sensitive data), group management, and more.</p>
<h2 id="recovering-the-initial-idm_admin-account"><a class="header" href="#recovering-the-initial-idm_admin-account">Recovering the Initial idm_admin Account</a></h2>
<p>By default the idm_admin user has no password, and can not be accessed. You should recover it with the
admin (system admin) account.</p>
<table>
<tr>
<td rowspan=2><img src="/images/kani-warning.png" alt="Kani Warning" /></td>
<td><strong>WARNING</strong></td>
</tr>
<tr>
<td>The server must not be running at this point, as the recovery requires exclusive access to the database.</td>
</tr>
</table>
<p>We recommend the use of the &quot;recover_account&quot; functionality as it provides a high strength, random password.</p>
<pre><code class="language-shell">kanidmd recover_account -c /etc/kanidm/server.toml -n idm_admin
Successfully recovered account 'idm_admin' - password reset to -&gt; j9YUv...
</code></pre>
<p>To do this in Docker, you'll need to stop the existing container and run it with <code>bash</code> as the &quot;command&quot; argument, to get a shell, then run the <code>kanidmd</code> command above.</p>
<p>For example, if I'm using the developer image in my test environment:</p>
<pre><code class="language-shell">docker run --rm -it \
-v/tmp/kanidm:/data\
--name kanidmd \
--hostname kanidmd \
ghcr.io/kanidm/kanidmd:devel \
bash
kanidmd:/# kanidmd recover_account -c /data/server.toml -n idm_admin
Successfully recovered account 'idm_admin' - password reset to -&gt; j9YUv...
</code></pre>
<p>Once that's done, exit the shell and start your server container again.</p>
<h2 id="creating-accounts"><a class="header" href="#creating-accounts">Creating Accounts</a></h2>
<p>You can now use the idm_admin user to create initial groups and accounts.</p>
<pre><code class="language-shell">kanidm login --name idm_admin
kanidm group create demo_group --name idm_admin
kanidm account create demo_user &quot;Demonstration User&quot; --name idm_admin
kanidm group add_members demo_group demo_user --name idm_admin
kanidm group list_members demo_group --name idm_admin
kanidm account get demo_user --name idm_admin
</code></pre>
<p>You can also use anonymous to view users and groups - note that you won't see as many fields due
to the limits of the anonymous access profile.</p>
<pre><code>kanidm login --name anonymous
kanidm account get demo_user --name anonymous
</code></pre>
<h2 id="viewing-default-groups"><a class="header" href="#viewing-default-groups">Viewing Default Groups</a></h2>
<p>You should take some time to inspect the default groups which are related to
default permissions. These can be viewed with:</p>
<pre><code>kanidm group list
kanidm group get &lt;name&gt;
</code></pre>
<h2 id="resetting-account-credentials"><a class="header" href="#resetting-account-credentials">Resetting Account Credentials</a></h2>
<p>Members of the <code>idm_account_manage_priv</code> group have the rights to manage other users'
accounts security and login aspects. This includes resetting account credentials.</p>
<p>You can perform a password reset on the demo_user, for example as the idm_admin user, who is
a default member of this group. The lines below prefixed with <code>#</code> are the interactive credential
update interface.</p>
<pre><code class="language-shell">kanidm account credential update demo_user --name idm_admin
# spn: demo_user@idm.example.com
# Name: Demonstration User
# Primary Credential:
# uuid: 0e19cd08-f943-489e-8ff2-69f9eacb1f31
# generated password: set
# Can Commit: true
#
# cred update (? for help) # : pass
# New password:
# New password: [hidden]
# Confirm password:
# Confirm password: [hidden]
# success
#
# cred update (? for help) # : commit
# Do you want to commit your changes? yes
# success
kanidm login --name demo_user
kanidm self whoami --name demo_user
</code></pre>
<h2 id="nested-groups"><a class="header" href="#nested-groups">Nested Groups</a></h2>
<p>Kanidm supports groups being members of groups, allowing nested groups. These nesting relationships
are shown through the &quot;memberof&quot; attribute on groups and accounts.</p>
<p>Kanidm makes all group membership determinations by inspecting an entry's &quot;memberof&quot; attribute.</p>
<p>An example can be easily shown with:</p>
<pre><code class="language-shell">kanidm group create group_1 --name idm_admin
kanidm group create group_2 --name idm_admin
kanidm account create nest_example &quot;Nesting Account Example&quot; --name idm_admin
kanidm group add_members group_1 group_2 --name idm_admin
kanidm group add_members group_2 nest_example --name idm_admin
kanidm account get nest_example --name anonymous
</code></pre>
<h2 id="account-validity"><a class="header" href="#account-validity">Account Validity</a></h2>
<p>Kanidm supports accounts that are only able to be authenticated between specific date and time
windows. This takes the form of a &quot;valid from&quot; attribute that defines the earliest start
date where authentication can succeed, and an expiry date where the account will no longer
allow authentication.</p>
<p>This can be displayed with:</p>
<pre><code>kanidm account validity show demo_user --name idm_admin
valid after: 2020-09-25T21:22:04+10:00
expire: 2020-09-25T01:22:04+10:00
</code></pre>
<p>These datetimes are stored in the server as UTC, but presented according to your local system time
to aid correct understanding of when the events will occur.</p>
<p>To set the values, an account with account management permission is required (for example, idm_admin).
Again, these values will be correctly translated from the entered local timezone to UTC.</p>
<p>Set the earliest time the account can start authenticating:</p>
<pre><code class="language-shell">kanidm account validity begin_from demo_user '2020-09-25T11:22:04+00:00' --name idm_admin
</code></pre>
<p>Set the expiry or end date of the account:</p>
<pre><code class="language-shell">kanidm account validity expire_at demo_user '2020-09-25T11:22:04+00:00' --name idm_admin
</code></pre>
<p>To unset or remove these values the following can be used, where <code>any|clear</code> and <code>never|clear</code> mean you may use either of the two listed words.</p>
<pre><code class="language-shell">kanidm account validity begin_from demo_user any|clear --name idm_admin
kanidm account validity expire_at demo_user never|clear --name idm_admin
</code></pre>
<p>To &quot;lock&quot; an account, you can set the expire_at value to the past, or unix epoch. Even in the situation
where the &quot;valid from&quot; is <em>after</em> the expire_at, the expire_at will be respected.</p>
<pre><code>kanidm account validity expire_at demo_user 1970-01-01T00:00:00+00:00 --name idm_admin
</code></pre>
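<p>To &quot;unlock&quot; the account again, clear the expiry using the same interface described above:</p>
<pre><code>kanidm account validity expire_at demo_user never --name idm_admin
</code></pre>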
<p>These validity settings impact all authentication functions of the account (kanidm, ldap, radius).</p>
<h2 id="people-accounts"><a class="header" href="#people-accounts">People Accounts</a></h2>
<p>Kanidm allows extending accounts to include additional &quot;people&quot; attributes,
such as their legal name and email address.</p>
<p>Initially, an account does not have these attributes. If desired, an account
may be modified to have these &quot;person&quot; attributes like so:</p>
<pre><code># Note, both the --legalname and --mail flags may be omitted
kanidm account person extend demo_user --legalname &quot;initial name&quot; --mail &quot;initial@email.address&quot;
</code></pre>
<p>Once an account has been extended, the &quot;person&quot; attributes may be set by the
user of the account, or anyone with enough privileges.</p>
<p>Whether an account is currently a &quot;person&quot; or not can be identified from the &quot;account get&quot; output:</p>
<pre><code>kanidm account get demo_user
# ---
# class: person
# ... (other output omitted)
</code></pre>
<p>The presence of a &quot;class: person&quot; stanza indicates that this account may have
&quot;people&quot; attributes.</p>
<h3 id="allowing-people-accounts-to-change-their-mail-attribute"><a class="header" href="#allowing-people-accounts-to-change-their-mail-attribute">Allowing people accounts to change their mail attribute</a></h3>
<p>By default, Kanidm allows an account to change some attributes, but not their
mail address.</p>
<p>Adding the user to the <code>idm_people_self_write_mail_priv</code> group, as shown
below, allows the user to edit their own mail.</p>
<pre><code>kanidm group add_members idm_people_self_write_mail_priv demo_user --name idm_admin
</code></pre>
<h2 id="why-cant-i-change-admin-with-idm_admin"><a class="header" href="#why-cant-i-change-admin-with-idm_admin">Why Can't I Change admin With idm_admin?</a></h2>
<p>As a security mechanism there is a distinction between &quot;accounts&quot; and &quot;high permission
accounts&quot;. This is to help prevent elevation attacks, where, say, a member of a
service desk could attempt to reset the password of idm_admin or admin, or a member of
HR or System Admin teams could attempt to move laterally.</p>
<p>Generally, membership of a &quot;privilege&quot; group that ships with Kanidm, such as:</p>
<ul>
<li>idm_account_manage_priv</li>
<li>idm_people_read_priv</li>
<li>idm_schema_manage_priv</li>
<li>many more ...</li>
</ul>
<p>...indirectly grants you membership to &quot;idm_high_privilege&quot;. If you are a member of
this group, the standard &quot;account&quot; and &quot;people&quot; rights groups are NOT able to
alter, read or manage these accounts. To manage these accounts, higher rights
are required, such as those held by the admin account.</p>
<p>Further, groups that are considered &quot;idm_high_privilege&quot; can NOT be managed
by the standard &quot;idm_group_manage_priv&quot; group.</p>
<p>Management of high privilege accounts and groups is granted through
the &quot;hp&quot; variants of all privileges. A non-exhaustive list:</p>
<ul>
<li>idm_hp_account_read_priv</li>
<li>idm_hp_account_manage_priv</li>
<li>idm_hp_account_write_priv</li>
<li>idm_hp_group_manage_priv</li>
<li>idm_hp_group_write_priv</li>
</ul>
<p>Membership of any of these groups should be considered to be equivalent to
system administration rights in the directory, and by extension, over all network
resources that trust Kanidm.</p>
<p>All groups that are flagged as &quot;idm_high_privilege&quot; should be audited and
monitored to ensure that they are not altered.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="administration-tasks"><a class="header" href="#administration-tasks">Administration Tasks</a></h1>
<p>This chapter describes some of the routine administration tasks for running
a Kanidm server, such as making backups and restoring from backups, testing
server configuration, reindexing, verifying data consistency, and renaming
your domain.</p>
<h1 id="backup-and-restore"><a class="header" href="#backup-and-restore">Backup and Restore</a></h1>
<p>With any Identity Management (IDM) software, it's important you have the capability to restore in
case of a disaster - be that physical damage or a mistake. Kanidm supports backup
and restore of the database with three methods.</p>
<h2 id="method-1-preferred"><a class="header" href="#method-1-preferred">Method 1 (Preferred)</a></h2>
<p>Method 1 involves taking a backup of the database entry content, which is then re-indexed on restore.
This is the preferred method.</p>
<p>To take the backup (assuming our docker environment) you first need to stop the instance:</p>
<pre><code>docker stop &lt;container name&gt;
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd backup -c /data/server.toml \
/backup/kanidm.backup.json
docker start &lt;container name&gt;
</code></pre>
<p>You can then restart your instance. DO NOT modify the backup.json as it may introduce
data errors into your instance.</p>
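<p>If you want to confirm the backup file was written, you can list the contents of the backup volume with a throwaway container, for example:</p>
<pre><code>docker run --rm -i -t -v kanidmd_backups:/backup opensuse/leap:latest ls -al /backup
</code></pre>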
<p>To restore from the backup:</p>
<pre><code>docker stop &lt;container name&gt;
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd restore -c /data/server.toml \
/backup/kanidm.backup.json
docker start &lt;container name&gt;
</code></pre>
<p>That's it!</p>
<h2 id="method-2"><a class="header" href="#method-2">Method 2</a></h2>
<p>This is a simple backup of the data volume.</p>
<pre><code>docker stop &lt;container name&gt;
# Backup your docker's volume folder
docker start &lt;container name&gt;
</code></pre>
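<p>One way to sketch this, assuming you want a compressed archive of the volume contents in your current working directory, is to use a temporary container and tar:</p>
<pre><code>docker stop &lt;container name&gt;
docker run --rm -v kanidmd:/data -v &quot;$(pwd)&quot;:/backup opensuse/leap:latest \
    tar -czf /backup/kanidmd.tar.gz -C /data .
docker start &lt;container name&gt;
</code></pre>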
<h2 id="method-3"><a class="header" href="#method-3">Method 3</a></h2>
<p>Automatic backups can be generated online by a <code>kanidmd server</code> instance
by including the <code>[online_backup]</code> section in the <code>server.toml</code>.
This allows you to run regular backups, defined by a cron schedule, and maintain
the number of backup versions to keep. An example is located in
<a href="https://github.com/kanidm/kanidm/blob/master/examples/server.toml">examples/server.toml</a>.</p>
<h1 id="configuration-test"><a class="header" href="#configuration-test">Configuration Test</a></h1>
<p>You can test that the server will correctly start with your configuration.</p>
<blockquote>
<p><strong>WARNING:</strong> While this is a configuration test, it still needs to open the database so that
it can check a number of internal values are consistent with the configuration. As a result,
this requires the instance under config test to be stopped!</p>
</blockquote>
<pre><code>docker stop &lt;container name&gt;
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
docker start &lt;container name&gt;
</code></pre>
<h1 id="reindexing-after-schema-extension"><a class="header" href="#reindexing-after-schema-extension">Reindexing after schema extension</a></h1>
<p>In some (rare) cases you may need to reindex.
Please note the server will sometimes reindex on startup as a result of the project
changing its internal schema definitions. This is normal and expected - you may never need
to start a reindex yourself as a result!</p>
<p>You'll likely notice a need to reindex if you add indexes to schema and you see a message in
your logs such as:</p>
<pre><code>Index EQUALITY name not found
Index {type} {attribute} not found
</code></pre>
<p>This indicates that an index of type equality has been added for name, but the indexing process
has not been run. The server will continue to operate and the query execution code will correctly
process the query - however it will not be the optimal method of delivering the results as we need to
disregard this part of the query and act as though it's un-indexed.</p>
<p>Reindexing will resolve this by forcing all indexes to be recreated based on their schema
definitions (this works even though the schema is in the same database!)</p>
<pre><code>docker stop &lt;container name&gt;
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd reindex -c /data/server.toml
docker start &lt;container name&gt;
</code></pre>
<p>Generally, reindexing is a rare action and should not normally be required.</p>
<h1 id="vacuum"><a class="header" href="#vacuum">Vacuum</a></h1>
<p><a href="https://www.sqlite.org/lang_vacuum.html">Vacuuming</a> is the process of reclaiming un-used pages
from the sqlite freelists, as well as performing some data reordering tasks that may make some
queries more efficient. It is recommended that you vacuum after a reindex is performed or
when you wish to reclaim space in the database file.</p>
<p>Vacuum is also able to change the pagesize of the database. After changing <code>db_fs_type</code> (which affects
pagesize) in server.toml, you must run a vacuum for this to take effect:</p>
<pre><code>docker stop &lt;container name&gt;
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd vacuum -c /data/server.toml
docker start &lt;container name&gt;
</code></pre>
<h1 id="verification"><a class="header" href="#verification">Verification</a></h1>
<p>The server ships with a number of verification utilities to ensure that data is consistent such
as referential integrity or memberof.</p>
<p>Note that verification really is a last resort - the server does <em>a lot</em> to prevent and self-heal
from errors at run time, so you should rarely if ever require this utility. This utility was
developed to guarantee consistency during development!</p>
<p>You can run a verification with:</p>
<pre><code>docker stop &lt;container name&gt;
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd verify -c /data/server.toml
docker start &lt;container name&gt;
</code></pre>
<p>If you have errors, please contact the project so we can help you resolve them.</p>
<h1 id="rename-the-domain"><a class="header" href="#rename-the-domain">Rename the domain</a></h1>
<p>There are some cases where you may need to rename the domain. You should have configured
this initially in the setup, however situations such as a business changing its name, a merger,
or other organisational needs may require the domain name to change.</p>
<blockquote>
<p><strong>WARNING:</strong> This WILL break ALL u2f/webauthn tokens that have been enrolled, which MAY cause
accounts to be locked out and unrecoverable until further action is taken. DO NOT CHANGE
the domain name unless it is REQUIRED and you have a plan for managing these issues.</p>
</blockquote>
<blockquote>
<p><strong>WARNING:</strong> This operation can take an extensive amount of time as ALL accounts and groups
in the domain MUST have their Security Principal Names (SPNs) regenerated. This WILL also cause
a large delay in replication once the system is restarted.</p>
</blockquote>
<p>You should make a backup before proceeding with this operation.</p>
<p>When you have created a migration plan and a strategy for handling the invalidation of webauthn,
you can then rename the domain.</p>
<p>First, stop the instance.</p>
<pre><code>docker stop &lt;container name&gt;
</code></pre>
<p>Second, change <code>domain</code> and <code>origin</code> in <code>server.toml</code>.</p>
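<p>For illustration only - the hostnames below are placeholders, and you should substitute the values for
your own deployment:</p>
<pre><code># server.toml
domain = &quot;idm.new-example.com&quot;
origin = &quot;https://idm.new-example.com&quot;
</code></pre>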
<p>Third, trigger the database domain rename process.</p>
<pre><code>docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd domain rename -c /data/server.toml
</code></pre>
<p>Finally, you can now start your instance again.</p>
<pre><code>docker start &lt;container name&gt;
</code></pre>
<h1 id="raw-actions"><a class="header" href="#raw-actions">Raw actions</a></h1>
<p>The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers
of entries at once. Some examples are below, but generally we advise you to use the APIs as listed
above.</p>
<pre><code># Create from json (group or account)
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json
# Apply a json stateful modification to all entries matching a filter
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{&quot;or&quot;: [ {&quot;eq&quot;: [&quot;name&quot;, &quot;idm_person_account_create_priv&quot;]}, {&quot;eq&quot;: [&quot;name&quot;, &quot;idm_service_account_create_priv&quot;]}, {&quot;eq&quot;: [&quot;name&quot;, &quot;idm_account_write_priv&quot;]}, {&quot;eq&quot;: [&quot;name&quot;, &quot;idm_group_write_priv&quot;]}, {&quot;eq&quot;: [&quot;name&quot;, &quot;idm_people_write_priv&quot;]}, {&quot;eq&quot;: [&quot;name&quot;, &quot;idm_group_create_priv&quot;]} ]}' example.modify.idm_admin.json
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{&quot;eq&quot;: [&quot;name&quot;, &quot;idm_admins&quot;]}' example.modify.idm_admin.json
# Search and show the database representations
kanidm raw search -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{&quot;eq&quot;: [&quot;name&quot;, &quot;idm_admin&quot;]}'
# Delete all entries matching a filter
kanidm raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{&quot;eq&quot;: [&quot;name&quot;, &quot;test_account_delete_me&quot;]}'
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="monitoring-the-platform"><a class="header" href="#monitoring-the-platform">Monitoring the platform</a></h1>
<p>The monitoring design of Kanidm is still very much in its infancy -
<a href="https://github.com/kanidm/kanidm/issues/216">take part in the dicussion at github.com/kanidm/kanidm/issues/216</a>.</p>
<h2 id="kanidmd"><a class="header" href="#kanidmd">kanidmd</a></h2>
<p>kanidmd currently responds to HTTP GET requests at the <code>/status</code> endpoint with a JSON body of
either <code>true</code> or <code>false</code>. <code>true</code> indicates that the platform is responding to requests.</p>
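<p>A quick way to check this from the command line is with curl. This is a sketch only - substitute your own
hostname for the placeholder <code>idm.example.com</code>:</p>
<pre><code>curl https://idm.example.com/status
true
</code></pre>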
<div class="table-wrapper"><table><thead><tr><th>URL</th><th><code>&lt;hostname&gt;/status</code></th></tr></thead><tbody>
<tr><td>Example URL</td><td><code>https://example.com/status</code></td></tr>
<tr><td>Expected response</td><td>One of either <code>true</code> or <code>false</code> (without quotes)</td></tr>
<tr><td>Additional Headers</td><td>x-kanidm-opid</td></tr>
<tr><td>Content Type</td><td>application/json</td></tr>
<tr><td>Cookies</td><td>kanidm-session</td></tr>
</tbody></table>
</div><div style="break-before: page; page-break-before: always;"></div><h1 id="password-quality-and-badlisting"><a class="header" href="#password-quality-and-badlisting">Password Quality and Badlisting</a></h1>
<p>Kanidm embeds a set of tools to help your users use and create strong passwords.
This is important as not all user types will require multi-factor authentication (MFA)
for their roles, but compromised accounts still pose a risk. There may also be deployment
or other barriers to a site rolling out sitewide MFA.</p>
<h2 id="quality-checking"><a class="header" href="#quality-checking">Quality Checking</a></h2>
<p>Kanidm enforces that all passwords are checked by the library &quot;<a href="https://github.com/dropbox/zxcvbn">zxcvbn</a>&quot;.
This has a large number of checks for password quality. It also provides constructive feedback to users on how
to improve their passwords if they are rejected.</p>
<p>Some of the things that zxcvbn looks for are use of the account name or email in the password, common passwords,
low-entropy passwords, dates, reversed words, and more.</p>
<p>This library cannot be disabled - all passwords in Kanidm must pass this check.</p>
<h2 id="password-badlisting"><a class="header" href="#password-badlisting">Password Badlisting</a></h2>
<p>Badlisting is the process of configuring a list of passwords that users are prevented from using.
This is especially useful if a specific business has been notified of a compromised account, allowing
you to maintain a list of customised excluded passwords.</p>
<p>The other value of this feature is being able to badlist common passwords that zxcvbn does not detect, or
passwords exposed in other large-scale password compromises.</p>
<p>By default we ship with a preconfigured badlist that is updated over time as new password breach lists are
made available.</p>
<h2 id="updating-your-own-badlist"><a class="header" href="#updating-your-own-badlist">Updating your own Badlist</a></h2>
<p>You can update your own badlist by using the provided <code>kanidm_badlist_preprocess</code> tool which helps to automate this process.</p>
<p>Given a list of passwords in a text file, it will generate a modification set which can be applied.
The tool also provides the command you need to run to apply this:</p>
<pre><code>kanidm_badlist_preprocess -m -o /tmp/modlist.json &lt;password file&gt; [&lt;password file&gt; &lt;password file&gt; ...]
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="posix-accounts-and-groups"><a class="header" href="#posix-accounts-and-groups">POSIX Accounts and Groups</a></h1>
<p>Kanidm has features that enable its accounts and groups to be consumed on
POSIX-like machines, such as Linux, FreeBSD, or others.</p>
<h2 id="notes-on-posix-features"><a class="header" href="#notes-on-posix-features">Notes on POSIX Features</a></h2>
<p>Many design decisions have been made in the POSIX features
of Kanidm that are intended to make distributed systems easier to manage and
client systems more secure.</p>
<h3 id="uid-and-gid-numbers"><a class="header" href="#uid-and-gid-numbers">UID and GID Numbers</a></h3>
<p>In Kanidm there is no difference between a UID and a GID number. On most UNIX systems
a user will create all files with a primary user and group. The primary group is
effectively equivalent to the permissions of the user. It is very easy to see scenarios
where someone may change the account to have a shared primary group (ie <code>allusers</code>),
but without changing the umask on all client systems. This can cause users' data to be
compromised by any member of the same shared group.</p>
<p>To prevent this, many systems create a &quot;user private group&quot;, or UPG. This group has the
GID number matching the UID of the user, and the user sets their primary
group ID to the GID number of the UPG.</p>
<p>As there is now an equivalence between the UID and GID number of the user and the UPG,
there is no benefit in separating these values. As a result Kanidm accounts <em>only</em>
have a GID number, which is also considered to be their UID number. This has the benefit
of preventing the accidental creation of a separate group that has an overlapping GID number
(the <code>uniqueness</code> attribute of the schema will block the creation).</p>
<h3 id="upg-generation"><a class="header" href="#upg-generation">UPG Generation</a></h3>
<p>Due to the requirement that a user have a UPG for security, many systems create these as
two independent items. For example in /etc/passwd and /etc/group:</p>
<pre><code># passwd
william:x:654401105:654401105::/home/william:/bin/zsh
# group
william:x:654401105:
</code></pre>
<p>Other systems like FreeIPA use a plugin that generates a UPG as a database record on
creation of the account.</p>
<p>Kanidm does neither of these. As the GID number of the user must be unique, and a user
implies the UPG must exist, we can generate UPGs on-demand from the account.
This has a single side effect - you are unable to add any members to a
UPG - but given the nature of a user private group, this is the point.</p>
<h3 id="gid-number-generation"><a class="header" href="#gid-number-generation">GID Number Generation</a></h3>
<p>In the future, Kanidm plans to have asynchronous replication as a feature between writable
database servers. In this case, we need to be able to allocate stable and reliable
GID numbers to accounts on replicas that may not be in continual communication.</p>
<p>To do this, we use the last 32 bits of the account or group's UUID to
generate the GID number.</p>
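<p>As a rough illustration of the idea (this is not how the server computes it internally, and the UUID below is
made up), the GID number corresponds to the final 8 hexadecimal digits of the UUID:</p>
<pre><code># Hypothetical UUID used only for illustration
UUID=&quot;ac0dcf13-a3a5-4e74-8b6a-7e14a0271c73&quot;
# Strip the dashes and keep the last 8 hex digits (the low 32 bits)
GID_HEX=$(echo -n &quot;${UUID}&quot; | tr -d '-' | cut -c 25-32)
# Convert to decimal - this is the derived GID number
echo $(( 16#${GID_HEX} ))
</code></pre>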
<p>A valid concern is the possibility of duplication in the lower 32 bits. Given the
birthday problem, if you have 77,000 groups and accounts, you have a 50% chance
of duplication. With 50,000 you have a 20% chance, 9,300 you have a 1% chance and
with 2900 you have a 0.1% chance.</p>
<p>We advise that if you have a site with &gt;10,000 users you should use an external system
to allocate GID numbers serially or consistently to avoid potential duplication events.</p>
<p>This design decision is made as most small sites will benefit greatly from the
auto-allocation policy and the simplicity of its design, while larger enterprises
will already have IDM or business process applications for HR/People that are
capable of supplying this kind of data in batch jobs.</p>
<h2 id="enabling-posix-attributes"><a class="header" href="#enabling-posix-attributes">Enabling POSIX Attributes</a></h2>
<h3 id="enabling-posix-attributes-on-accounts"><a class="header" href="#enabling-posix-attributes-on-accounts">Enabling POSIX Attributes on Accounts</a></h3>
<p>To enable POSIX account features and IDs on an account, you require the permission
<code>idm_account_unix_extend_priv</code>. This is provided to <code>idm_admins</code> in the default database.</p>
<p>You can then use the following command to enable POSIX extensions.</p>
<pre><code>kanidm account posix set --name idm_admin &lt;account_id&gt; [--shell SHELL --gidnumber GID]
kanidm account posix set --name idm_admin demo_user
kanidm account posix set --name idm_admin demo_user --shell /bin/zsh
kanidm account posix set --name idm_admin demo_user --gidnumber 2001
</code></pre>
<p>You can view the account's POSIX token details with:</p>
<pre><code>kanidm account posix show --name anonymous demo_user
</code></pre>
<h3 id="enabling-posix-attributes-on-groups"><a class="header" href="#enabling-posix-attributes-on-groups">Enabling POSIX Attributes on Groups</a></h3>
<p>To enable POSIX group features and IDs on a group, you require the permission <code>idm_group_unix_extend_priv</code>.
This is provided to <code>idm_admins</code> in the default database.</p>
<p>You can then use the following command to enable POSIX extensions:</p>
<pre><code>kanidm group posix set --name idm_admin &lt;group_id&gt; [--gidnumber GID]
kanidm group posix set --name idm_admin demo_group
kanidm group posix set --name idm_admin demo_group --gidnumber 2001
</code></pre>
<p>You can view the group's POSIX token details with:</p>
<pre><code>kanidm group posix show --name anonymous demo_group
</code></pre>
<p>POSIX-enabled groups will supply their members as POSIX members to clients. There is no
special or separate type of membership for POSIX members required.</p>
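<p>For example, on a client with the Kanidm PAM/nsswitch integration configured, a POSIX-enabled group and its
members can be resolved with standard tools such as <code>getent</code>. The output below is illustrative only,
reusing the example group and account from above:</p>
<pre><code>getent group demo_group
demo_group:x:2001:demo_user
</code></pre>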
<h2 id="troubleshooting-common-issues"><a class="header" href="#troubleshooting-common-issues">Troubleshooting Common Issues</a></h2>
<h3 id="subuid-conflicts-with-podman"><a class="header" href="#subuid-conflicts-with-podman">subuid conflicts with Podman</a></h3>
<p>Due to the way that Podman operates, in some cases using the Kanidm client inside non-root containers
with Kanidm accounts may fail with an error such as:</p>
<pre><code>ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user &quot;NAME&quot; in /etc/subuid
</code></pre>
<p>This is a fault in Podman and how it attempts to provide non-root containers when UID/GIDs
are greater than 65535. In this case you may manually allocate your users' GID numbers to be
between 1000 and 65535, which may avoid triggering the fault.</p>
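<p>For example, reusing the POSIX account command shown earlier (the account name and GID number here are
placeholders), you could pin an affected account to a lower GID number:</p>
<pre><code>kanidm account posix set --name idm_admin demo_user --gidnumber 5000
</code></pre>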
<div style="break-before: page; page-break-before: always;"></div><h1 id="ssh-key-distribution"><a class="header" href="#ssh-key-distribution">SSH Key Distribution</a></h1>
<p>To support SSH authentication securely to a large set of hosts running SSH, we support
distribution of SSH public keys via the Kanidm server.</p>
<h2 id="configuring-accounts"><a class="header" href="#configuring-accounts">Configuring Accounts</a></h2>
<p>To view the current SSH public keys on accounts, you can use:</p>
<pre><code>kanidm account ssh list_publickeys --name &lt;login user&gt; &lt;account to view&gt;
kanidm account ssh list_publickeys --name idm_admin william
</code></pre>
<p>By default, all users can self-manage their SSH public keys. To upload a key, use a command like:</p>
<pre><code>kanidm account ssh add_publickey --name william william 'test-key' &quot;`cat ~/.ssh/id_rsa.pub`&quot;
</code></pre>
<p>To remove (revoke) an SSH public key, delete it by its tag name:</p>
<pre><code>kanidm account ssh delete_publickey --name william william 'test-key'
</code></pre>
<h2 id="security-notes"><a class="header" href="#security-notes">Security Notes</a></h2>
<p>As a security feature, Kanidm validates <em>all</em> public keys to ensure they are valid SSH public keys.
Uploading a private key or other data will be rejected. For example:</p>
<pre><code>kanidm account ssh add_publickey --name william william 'test-key' &quot;invalid&quot;
Enter password:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value:
Http(400, Some(SchemaViolation(InvalidAttributeSyntax)))', src/libcore/result.rs:1084:5
</code></pre>
<h2 id="server-configuration"><a class="header" href="#server-configuration">Server Configuration</a></h2>
<h3 id="public-key-caching-configuration"><a class="header" href="#public-key-caching-configuration">Public Key Caching Configuration</a></h3>
<p>If you have kanidm_unixd running, you can use it to locally cache SSH public keys. This means you
can still SSH into your machines, even if your network is down, you move away from Kanidm, or
some other interruption occurs.</p>
<p>The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so it should be installed
on the servers. It communicates with kanidm_unixd, so you should have a configured PAM/nsswitch
setup as well.</p>
<p>You can test this is configured correctly by running:</p>
<pre><code>kanidm_ssh_authorizedkeys &lt;account name&gt;
</code></pre>
<p>If the account has SSH public keys you should see them listed, one per line.</p>
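<p>The output is standard SSH public key material, for example (the key below is hypothetical and truncated,
shown only to illustrate the format):</p>
<pre><code>kanidm_ssh_authorizedkeys william
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI...
</code></pre>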
<p>To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to
contain the lines:</p>
<pre><code>PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys %u
AuthorizedKeysCommandUser nobody
</code></pre>
<p>Restart sshd, and then attempt to authenticate with the keys.</p>
<p>It's highly recommended that you keep your client configuration and sshd configuration in a configuration
management tool such as salt or ansible.</p>
<blockquote>
<p><strong>NOTICE:</strong>
With a working SSH key setup, you should also consider adding the following
sshd_config options as hardening.</p>
</blockquote>
<pre><code>PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no
KerberosAuthentication no
</code></pre>
<h3 id="direct-communication-configuration"><a class="header" href="#direct-communication-configuration">Direct Communication Configuration</a></h3>
<p>In this mode, the authorised keys commands will contact Kanidm directly.</p>
<blockquote>
<p><strong>NOTICE:</strong>
As Kanidm is contacted directly there is no SSH public key cache. Any network
outage or communication loss may prevent you accessing your systems. You should
only use this version if you have a requirement for it.</p>
</blockquote>
<p>The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so should be installed
on the servers.</p>
<p>To configure the tool, you should edit /etc/kanidm/config, as documented in <a href="./client_tools.html">clients</a>.</p>
<p>You can test this is configured correctly by running:</p>
<pre><code>kanidm_ssh_authorizedkeys_direct -D anonymous &lt;account name&gt;
</code></pre>
<p>If the account has SSH public keys you should see them listed, one per line.</p>
<p>To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to
contain the lines:</p>
<pre><code>PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u
AuthorizedKeysCommandUser nobody
</code></pre>
<p>Restart sshd, and then attempt to authenticate with the keys.</p>
<p>It's highly recommended that you keep your client configuration and sshd configuration in a configuration
management tool such as salt or ansible.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="recycle-bin"><a class="header" href="#recycle-bin">Recycle Bin</a></h1>
<p>The recycle bin is a store of entries that have been deleted from the server. This allows
recovery from mistakes for a period of time.</p>
<blockquote>
<p><strong>WARNING:</strong> The recycle bin is a best effort - when recovering, in some cases
not everything can be &quot;put back&quot; the way it was. Be sure to check your entries
are valid once they have been revived.</p>
</blockquote>
<h2 id="where-is-the-recycle-bin"><a class="header" href="#where-is-the-recycle-bin">Where is the Recycle Bin?</a></h2>
<p>The recycle bin is stored as part of your main database - it is included in all
backups and restores, just like any other data. It is also replicated between
all servers.</p>
<h2 id="how-do-things-get-into-the-recycle-bin"><a class="header" href="#how-do-things-get-into-the-recycle-bin">How do Things Get Into the Recycle Bin?</a></h2>
<p>Any delete operation of an entry will cause it to be sent to the recycle bin. No
configuration or specification is required.</p>
<h2 id="how-long-do-items-stay-in-the-recycle-bin"><a class="header" href="#how-long-do-items-stay-in-the-recycle-bin">How Long Do Items Stay in the Recycle Bin?</a></h2>
<p>Currently, deleted items stay for up to 1 week before they are removed.</p>
<h2 id="managing-the-recycle-bin"><a class="header" href="#managing-the-recycle-bin">Managing the Recycle Bin</a></h2>
<p>You can display all items in the Recycle Bin with:</p>
<pre><code>kanidm recycle_bin list --name admin
</code></pre>
<p>You can show a single item with:</p>
<pre><code>kanidm recycle_bin get --name admin &lt;id&gt;
</code></pre>
<p>An entry can be revived with:</p>
<pre><code>kanidm recycle_bin revive --name admin &lt;id&gt;
</code></pre>
<h2 id="edge-cases"><a class="header" href="#edge-cases">Edge Cases</a></h2>
<p>The recycle bin is a best effort to restore your data - there are some cases where
the revived entries may not be the same as they were when they were deleted. This
generally revolves around reference types such as group membership, or when the reference
type includes supplemental map data such as the oauth2 scope map type.</p>
<p>An example of this data loss is the following steps:</p>
<pre><code>add user1
add group1
add user1 as member of group1
delete user1
delete group1
revive user1
revive group1
</code></pre>
<p>In this series of steps, due to the way that referential integrity is implemented, the
membership of user1 in group1 would be lost in this process. To explain why:</p>
<pre><code>add user1
add group1
add user1 as member of group1 // refint between the two established, and memberof added
delete user1 // group1 removes member user1 from refint
delete group1 // user1 now removes memberof group1 from refint
revive user1 // re-add groups based on directmemberof (empty set)
revive group1 // no members
</code></pre>
<p>These issues could be looked at again in the future, but for now we think that deletes of
groups are rare - we expect the recycle bin to save you in &quot;oops&quot; moments, and in the majority
of cases you may delete a group or a user and then restore them. Handling this series
of steps requires extra code complexity in how we flag operations. For more,
see <a href="https://github.com/kanidm/kanidm/issues/177">this issue on GitHub</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="why-tls"><a class="header" href="#why-tls">Why TLS?</a></h1>
<p>You may have noticed that Kanidm requires you to configure TLS in
your container - or that you provide something <em>with</em> TLS in front, like haproxy.</p>
<p>This is due to a single setting on the server - <code>secure_cookies</code>.</p>
<h2 id="what-are-secure-cookies"><a class="header" href="#what-are-secure-cookies">What are Secure Cookies?</a></h2>
<p><code>secure_cookies</code> causes cookies to be marked with the &quot;secure&quot; flag, which asks a client to transmit them
back to the origin site if and only if HTTPS is present in the URL.</p>
<p>Certificate authority (CA) verification is <em>not</em> checked - you can use invalid,
out of date certificates, or even certificates where the <code>subjectAltName</code> does
not match, but the client must see https:// as the destination else it <em>will not</em>
send the cookies.</p>
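<p>As a generic HTTP illustration of the flag itself (not captured server output - the exact attributes Kanidm
sets may differ), a cookie carrying the secure flag looks like:</p>
<pre><code>Set-Cookie: kanidm-session=&lt;session id&gt;; Secure; HttpOnly
</code></pre>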
<h2 id="how-does-that-affect-kanidm"><a class="header" href="#how-does-that-affect-kanidm">How Does That Affect Kanidm?</a></h2>
<p>Kanidm's authentication system is a stepped challenge response design, where you
initially request an &quot;intent&quot; to authenticate. Once you establish this intent,
the server sets up a session-id into a cookie, and informs the client of
what authentication methods can proceed.</p>
<p>If you do NOT have an HTTPS URL, the cookie with the session-id is not transmitted.
The server detects this as an invalid-state request in the authentication design,
and immediately breaks the connection, because it appears insecure.</p>
<p>Simply put, we are trying to use settings like secure_cookies to add constraints
to the server so that you <em>must</em> perform and adhere to best practices - such
as having TLS present on your communication channels.</p>
<div style="break-before: page; page-break-before: always;"></div><h2 id="getting-started-for-developers"><a class="header" href="#getting-started-for-developers">Getting Started (for Developers)</a></h2>
<h3 id="designs"><a class="header" href="#designs">Designs</a></h3>
<p>See the <a href="https://github.com/kanidm/kanidm/tree/master/designs">designs</a> folder, and compile the private documentation locally:</p>
<pre><code>cargo doc --document-private-items --open --no-deps
</code></pre>
<h3 id="rust-documentation"><a class="header" href="#rust-documentation">Rust Documentation</a></h3>
<p>A list of links to the library documentation is at
<a href="https://kanidm.com/documentation/">kanidm.com/documentation</a>.</p>
<h3 id="minimum-supported-rust-version"><a class="header" href="#minimum-supported-rust-version">Minimum Supported Rust Version</a></h3>
<p>The MSRV is specified in the package <code>Cargo.toml</code> files.</p>
<h3 id="build-profiles"><a class="header" href="#build-profiles">Build Profiles</a></h3>
<p>Setting different developer profiles while building is done by setting the
environment variable <code>KANIDM_BUILD_PROFILE</code> to the bare filename of one of the TOML files in
<code>/profiles</code>.</p>
<p>For example, this will set the CPU flags to &quot;none&quot; and the location for the Web UI files to <code>/usr/share/kanidm/ui/pkg</code>:</p>
<pre><code class="language-shell">KANIDM_BUILD_PROFILE=release_suse_generic cargo build --release --bin kanidmd
</code></pre>
<h3 id="dependencies"><a class="header" href="#dependencies">Dependencies</a></h3>
<h4 id="macos"><a class="header" href="#macos">MacOS</a></h4>
<p>You will need <a href="https://rustup.rs/">rustup</a> to install a Rust toolchain.</p>
<h4 id="suse"><a class="header" href="#suse">SUSE</a></h4>
<p>You will need <a href="https://rustup.rs/">rustup</a> to install a Rust toolchain. If
you're
using the Tumbleweed release, it's packaged in <code>zypper</code>.</p>
<p>You will also need some system libraries to build this:</p>
<pre><code>libudev-devel sqlite3-devel libopenssl-devel
</code></pre>
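<p>Assuming the standard SUSE package manager, these can be installed with:</p>
<pre><code class="language-shell">sudo zypper install libudev-devel sqlite3-devel libopenssl-devel
</code></pre>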
<h4 id="fedora"><a class="header" href="#fedora">Fedora</a></h4>
<p>You need to install the Rust toolchain packages:</p>
<pre><code>rust cargo
</code></pre>
<p>You will also need some system libraries to build this:</p>
<pre><code>systemd-devel sqlite-devel openssl-devel pam-devel
</code></pre>
<p>Building the Web UI requires additional packages:</p>
<pre><code>perl-FindBin perl-File-Compare rust-std-static-wasm32-unknown-unknown
</code></pre>
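<p>Assuming <code>dnf</code>, all of the above can be installed in one step:</p>
<pre><code class="language-shell">sudo dnf install rust cargo systemd-devel sqlite-devel openssl-devel pam-devel \
    perl-FindBin perl-File-Compare rust-std-static-wasm32-unknown-unknown
</code></pre>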
<h4 id="ubuntu"><a class="header" href="#ubuntu">Ubuntu</a></h4>
<p>You need <a href="https://rustup.rs/">rustup</a> to install a Rust toolchain.</p>
<p>You will also need some system libraries to build this, which can be installed by running:</p>
<pre><code class="language-shell">sudo apt-get install libsqlite3-dev libudev-dev libssl-dev pkg-config libpam0g-dev
</code></pre>
<p>Tested with Ubuntu 20.04 and 22.04.</p>
<h4 id="windows"><a class="header" href="#windows">Windows</a></h4>
<p>You need <a href="https://rustup.rs/">rustup</a> to install a Rust toolchain.</p>
<p>An easy way to grab the dependencies is to install <a href="https://vcpkg.io/en/getting-started.html">vcpkg</a>.</p>
<p>This is how it works in the automated build:</p>
<ol>
<li>Enable use of installed packages for the user system-wide:</li>
</ol>
<pre><code class="language-shell">vcpkg integrate install
</code></pre>
<ol start="2">
<li>Install the openssl dependency, which compiles it from source. This downloads all sorts of dependencies, including perl for the build.</li>
</ol>
<pre><code class="language-shell">vcpkg install openssl:x64-windows-static-md
</code></pre>
<p>There's a PowerShell script in the root directory of the repository which, in concert with <code>openssl</code>, will generate a config file and certs for testing.</p>
<h3 id="get-involved"><a class="header" href="#get-involved">Get Involved</a></h3>
<p>To get started, you'll need to fork or branch, and we'll merge based on pull
requests.</p>
<p>If you are a contributor to the project, simply clone:</p>
<pre><code class="language-shell">git clone git@github.com:kanidm/kanidm.git
</code></pre>
<p>If you are forking, then fork in GitHub and clone with:</p>
<pre><code class="language-shell">git clone https://github.com/kanidm/kanidm.git
cd kanidm
git remote add myfork git@github.com:&lt;YOUR USERNAME&gt;/kanidm.git
</code></pre>
<p>Select an issue (always feel free to reach out to us for advice!), and create a
branch to start working:</p>
<pre><code class="language-shell">git branch &lt;feature-branch-name&gt;
git checkout &lt;feature-branch-name&gt;
cargo test
</code></pre>
<p>When you are ready for review (even if the feature isn't complete and you just
want some advice):</p>
<ol>
<li>Run the test suite: <code>cargo test --workspace</code></li>
<li>Ensure rust formatting standards are followed: <code>cargo fmt --check</code></li>
<li>Try following the suggestions from clippy, after running <code>cargo clippy</code>.
This is not a blocker on us accepting your code!</li>
<li>Then commit your changes:</li>
</ol>
<pre><code class="language-shell">git commit -m 'Commit message' change_file.rs ...
git push &lt;myfork/origin&gt; &lt;feature-branch-name&gt;
</code></pre>
<p>If you receive advice or make further changes, just keep committing to the branch
and pushing to your branch. When we are happy with the code, we'll merge it in GitHub,
meaning you can then clean up your branch.</p>
<pre><code>git checkout master
git pull
git branch -D &lt;feature-branch-name&gt;
</code></pre>
<h4 id="rebasing"><a class="header" href="#rebasing">Rebasing</a></h4>
<p>If you are asked to rebase your change, follow these steps:</p>
<pre><code>git checkout master
git pull
git checkout &lt;feature-branch-name&gt;
git rebase master
</code></pre>
<p>Then be sure to fix any merge issues or other comments as they arise. If you
have issues, you can always stop and reset with:</p>
<pre><code>git rebase --abort
</code></pre>
<h3 id="development-server-quickstart-for-interactive-testing"><a class="header" href="#development-server-quickstart-for-interactive-testing">Development Server Quickstart for Interactive Testing</a></h3>
<p>After getting the code, you will need a rust environment. Please investigate
<a href="https://rustup.rs">rustup</a> for your platform to establish this.</p>
<p>Once you have the source code, you need encryption certificates to use with the server,
because without certificates, authentication will fail. </p>
<p>We recommend using <a href="https://letsencrypt.org">Let's Encrypt</a>, but if this is not
possible, please use our insecure certificate tool (<code>insecure_generate_tls.sh</code>). </p>
<p><strong>NOTE:</strong> Windows developers can use <code>insecure_generate_tls.ps1</code>, which puts everything (including a templated config file) in <code>$TEMP\kanidm</code>. Please adjust paths below to suit.</p>
<p>The insecure certificate tool creates <code>/tmp/kanidm</code> and puts some self-signed certificates there.</p>
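<p>A minimal sketch of invoking it from the repository root (check the script itself for any options):</p>
<pre><code>./insecure_generate_tls.sh
ls /tmp/kanidm/
</code></pre>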
<p>You can now build and run the server with the commands below. It will use a database
in <code>/tmp/kanidm.db</code>.</p>
<p>Create the initial database and recover the <code>admin</code> account's password:</p>
<pre><code>cargo run --bin kanidmd recover_account -c ./examples/insecure_server.toml admin
&lt;snip&gt;
Success - password reset to -&gt; Et8QRJgQkMJu3v1AQxcbxRWW44qRUZPpr6BJ9fCGapAB9cT4
</code></pre>
<p>Record the password above, then run the server start command:</p>
<pre><code>cd kanidmd/daemon
cargo run --bin kanidmd server -c ../../examples/insecure_server.toml
</code></pre>
<p>(The server start command is also a script in <code>kanidmd/daemon/run_insecure_dev_server.sh</code>)</p>
<p>In a new terminal, you can now build and run the client tools with:</p>
<pre><code>cargo run --bin kanidm -- --help
cargo run --bin kanidm -- login -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- login -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
</code></pre>
<h3 id="building-the-web-ui"><a class="header" href="#building-the-web-ui">Building the Web UI</a></h3>
<p><strong>NOTE:</strong> There is a pre-packaged version of the Web UI at <code>/kanidmd_web_ui/pkg/</code>,
which can be used directly. This means you don't need to build the Web UI yourself.</p>
<p>The Web UI uses Rust WebAssembly rather than Javascript. To build this you need
to set up the environment:</p>
<pre><code>cargo install wasm-pack
</code></pre>
<p>Then you are able to build the UI:</p>
<pre><code>cd kanidmd_web_ui/
./build_wasm_dev.sh
</code></pre>
<p>To build for release, run <code>build_wasm_release.sh</code>.</p>
<p>The &quot;developer&quot; profile for kanidmd will automatically use the pkg output in this folder.</p>
<h3 id="build-a-kanidm-container"><a class="header" href="#build-a-kanidm-container">Build a Kanidm Container</a></h3>
<p>Build a container with the current branch using:</p>
<pre><code>make &lt;TARGET&gt;
</code></pre>
<p>Check <code>make help</code> for a list of valid targets.</p>
<p>The following environment variables control the build:</p>
<div class="table-wrapper"><table><thead><tr><th>ENV variable</th><th>Definition</th><th>Default</th></tr></thead><tbody>
<tr><td><code>IMAGE_BASE</code></td><td>Base location of the container image.</td><td><code>kanidm</code></td></tr>
<tr><td><code>IMAGE_VERSION</code></td><td>Determines the container's tag.</td><td>None</td></tr>
<tr><td><code>CONTAINER_TOOL_ARGS</code></td><td>Specify extra options for the container build tool.</td><td>None</td></tr>
<tr><td><code>IMAGE_ARCH</code></td><td>Passed to <code>--platforms</code> when the container is built.</td><td><code>linux/amd64,linux/arm64</code></td></tr>
<tr><td><code>CONTAINER_BUILD_ARGS</code></td><td>Override default ARG settings during the container build.</td><td>None</td></tr>
<tr><td><code>CONTAINER_TOOL</code></td><td>Use an alternative container build tool.</td><td><code>docker</code></td></tr>
<tr><td><code>BOOK_VERSION</code></td><td>Sets version used when building the documentation book.</td><td><code>master</code></td></tr>
</tbody></table>
</div>
<h4 id="container-build-examples"><a class="header" href="#container-build-examples">Container Build Examples</a></h4>
<p>Build a <code>kanidm</code> container using <code>podman</code>:</p>
<pre><code>CONTAINER_TOOL=podman make build/kanidmd
</code></pre>
<p>Build a <code>kanidm</code> container and use a redis build cache:</p>
<pre><code>CONTAINER_BUILD_ARGS='--build-arg &quot;SCCACHE_REDIS=redis://redis.dev.blackhats.net.au:6379&quot;' make build/kanidmd
</code></pre>
<h4 id="automatically-built-containers"><a class="header" href="#automatically-built-containers">Automatically Built Containers</a></h4>
<p>To speed up testing across platforms, we're leveraging GitHub actions to build
containers for test use.</p>
<p>Whenever code is merged with the <code>master</code> branch of Kanidm, containers are automatically
built for <code>kanidmd</code> and <code>radius</code>. Sometimes they fail to build, but we'll try to
keep them available.</p>
<p>To find information on the packages,
<a href="https://github.com/orgs/kanidm/packages?repo_name=kanidm">visit the Kanidm packages page</a>.</p>
<p>An example command for pulling and running the radius container is below. You'll
need to
<a href="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry">authenticate with the GitHub container registry first</a>.</p>
<pre><code class="language-shell">docker pull ghcr.io/kanidm/radius:devel
docker run --rm -it \
-v $(pwd)/kanidm:/data/kanidm \
ghcr.io/kanidm/radius:devel
</code></pre>
<p>This assumes you have a <code>kanidm</code> client configuration file in the current working directory.</p>
<h2 id="building-the-book"><a class="header" href="#building-the-book">Building the Book</a></h2>
<p>You'll need <code>mdbook</code> to build the book:</p>
<pre><code class="language-shell">cargo install mdbook
</code></pre>
<p>To build it:</p>
<pre><code class="language-shell">cd kanidm_book
mdbook build
</code></pre>
<p>Or to run a local webserver:</p>
<pre><code class="language-shell">cd kanidm_book
mdbook serve
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="access-profiles"><a class="header" href="#access-profiles">Access Profiles</a></h1>
<p>Access Profiles (ACPs) are a way of expressing the set of actions which accounts are
permitted to perform on database records (<code>object</code>) in the system.</p>
<p>As a result, there are specific requirements to what these can control and how they are
expressed.</p>
<p>Access profiles define an action of <code>allow</code> or <code>deny</code>: <code>deny</code> has priority over <code>allow</code>
and will override even if applicable. They should only be created by system access profiles
because certain changes must be denied.</p>
<p>Access profiles are stored as entries and are dynamically loaded into a structure that is
more efficient for use at runtime. <code>Schema</code> and its transactions are a similar implementation.</p>
<h2 id="search-requirements"><a class="header" href="#search-requirements">Search Requirements</a></h2>
<p>A search access profile must be able to limit:</p>
<ol>
<li>the content of a search request and its scope.</li>
<li>the set of data returned from the objects visible.</li>
</ol>
<p>An example:</p>
<blockquote>
<p>Alice should only be able to search for objects where the class is <code>person</code>
and the object is a memberOf the group called &quot;visible&quot;. </p>
<p>Alice should only be able to see the attribute <code>displayName</code> for those
users (not their <code>legalName</code>), and their public <code>email</code>.</p>
</blockquote>
<p>Worded a bit differently: you need permission over the scope of entries, you need to be able
to read the attribute to filter on it, and you need to be able to read the attribute to receive
it in the result entry.</p>
<p>If Alice searches for <code>(&amp;(name=william)(secretdata=x))</code>, we should not allow this to
proceed because Alice doesn't have the rights to read secret data, so they should not be allowed
to filter on it. How does this work with two overlapping ACPs? For example: one that allows read
of name and description to class = group, and one that allows name to user. We don't want to
say <code>(&amp;(name=x)(description=foo))</code> and it to be allowed, because we don't know the target class
of the filter. Do we &quot;unmatch&quot; all users because they have no access to the filter components? (Could
be done by inverting and putting in an AndNot of the non-matchable overlaps). Or do we just
filter out description from the users returned (but that implies they DID match, which is a disclosure).</p>
<p>More concrete:</p>
<pre><code class="language-yaml">search {
action: allow
targetscope: Eq(&quot;class&quot;, &quot;group&quot;)
targetattr: name
targetattr: description
}
search {
action: allow
targetscope: Eq(&quot;class&quot;, &quot;user&quot;)
targetattr: name
}
SearchRequest {
...
filter: And: {
Pres(&quot;name&quot;),
Pres(&quot;description&quot;),
}
}
</code></pre>
<p>A potential defense is:</p>
<pre><code class="language-yaml">acp class group: Pres(name) and Pres(desc) both in target attr, allow
acp class user: Pres(name) allow, Pres(desc) deny. Invert and Append
</code></pre>
<p>So the filter now is:</p>
<pre><code class="language-yaml">And: {
AndNot: {
Eq(&quot;class&quot;, &quot;user&quot;)
},
And: {
Pres(&quot;name&quot;),
Pres(&quot;description&quot;),
},
}
</code></pre>
<p>This would now only allow access to the <code>name</code> and <code>description</code> of the class <code>group</code>.</p>
<p>If we extend this to a third, this would work. A more complex example:</p>
<pre><code class="language-yaml">search {
action: allow
targetscope: Eq(&quot;class&quot;, &quot;group&quot;)
targetattr: name
targetattr: description
}
search {
action: allow
targetscope: Eq(&quot;class&quot;, &quot;user&quot;)
targetattr: name
}
search {
action: allow
targetscope: And(Eq(&quot;class&quot;, &quot;user&quot;), Eq(&quot;name&quot;, &quot;william&quot;))
targetattr: description
}
</code></pre>
<p>Now we have a single user where we can read <code>description</code>. So the compiled filter above would be:</p>
<pre><code class="language-yaml">And: {
AndNot: {
Eq(&quot;class&quot;, &quot;user&quot;)
},
And: {
Pres(&quot;name&quot;),
Pres(&quot;description&quot;),
},
}
</code></pre>
<p>This would now be invalid, first, because we would see that <code>class=user</code> and <code>william</code> has no name
so that would be excluded also. We also may not even have &quot;class=user&quot; in the second ACP, so we can't
use subset filter matching to merge the two.</p>
<p>As a result, I think the only possible valid solution is to perform the initial filter, then determine
on the candidates if we <em>could</em> have valid access to filter on all required attributes. This
means that even with an index look up, we are still required to perform some filter application
on the candidates.</p>
<p>I think this will mean that on a possible candidate, we have to apply all ACPs, then create a union of
the resulting targetattrs, and then compare that set against the set of attributes in the filter.</p>
<p>This will be slow on large candidate sets (potentially), but could be sped up with parallelism, caching
or other methods. However, in the same step, we can also apply the step of extracting only the allowed
read target attrs, so this is a valuable exercise.</p>
<h2 id="delete-requirements"><a class="header" href="#delete-requirements">Delete Requirements</a></h2>
<p>A <code>delete</code> profile must contain the <code>content</code> and <code>scope</code> of a delete.</p>
<p>An example:</p>
<blockquote>
<p>Alice should only be able to delete objects where the <code>memberOf</code> is
<code>purgeable</code>, and where they are not marked as <code>protected</code>.</p>
</blockquote>
<h2 id="create-requirements"><a class="header" href="#create-requirements">Create Requirements</a></h2>
<p>A <code>create</code> profile defines the following limits to what objects can be created, through the combination of filters and attributes.</p>
<p>An example: </p>
<blockquote>
<p>Alice should only be able to create objects where the <code>class</code> is <code>group</code>, and can
only name the group, but they cannot add members to the group.</p>
</blockquote>
<p>An example of a content requirement could be something like &quot;the value of an attribute must pass a regular expression filter&quot;.
This could limit a user to creating a group of any name, except where the group's name contains &quot;admin&quot;.
This is a contrived example which is also possible with filtering, but more complex requirements are possible.</p>
<p>For example, we want to be able to limit the classes that someone <em>could</em> create on an object
because classes often are used in security rules.</p>
<h2 id="modify-requirements"><a class="header" href="#modify-requirements">Modify Requirements</a></h2>
<p>A <code>modify</code> profile defines the following limits:</p>
<ul>
<li>a filter for which objects can be modified,</li>
<li>a set of attributes which can be modified.</li>
</ul>
<p>A <code>modify</code> profile defines a limit on the <code>modlist</code> actions. </p>
<p>For example: you may only be allowed to ensure <code>presence</code> of a value. (Modify allowing purge, not-present, and presence).</p>
<p>Content requirements (see <a href="developers/designs/access_profiles_and_security.html#create-requirements">Create Requirements</a>) are out of scope at the moment.</p>
<p>An example:</p>
<blockquote>
<p>Alice should only be able to modify a user's password if that user is a member of the
students group.</p>
</blockquote>
<p><strong>Note:</strong> <code>modify</code> does not imply <code>read</code> of the attribute. Care should be taken that we don't disclose
the current value in any error messages if the operation fails.</p>
<h2 id="targeting-requirements"><a class="header" href="#targeting-requirements">Targeting Requirements</a></h2>
<p>The <code>target</code> of an access profile should be a filter defining the objects that this applies to.</p>
<p>The filter limit for the profiles of what they are acting on requires a single special operation
which is the concept of &quot;targeting self&quot;. </p>
<p>For example: we could define a rule that says &quot;members of group X are allowed self-write to the <code>mobilePhoneNumber</code> attribute&quot;.</p>
<p>An extension to the filter code could allow an extra filter enum of <code>self</code>, that would allow this
to operate correctly, and would consume the entry in the event as the target of &quot;Self&quot;. This would
be best implemented as a compilation of <code>self -&gt; eq(uuid, self.uuid)</code>.</p>
<h2 id="implementation-details"><a class="header" href="#implementation-details">Implementation Details</a></h2>
<p>CHANGE: Receiver should be a group, and should be single value/multivalue? Can <em>only</em> be a group.</p>
<p>Example profiles:</p>
<pre><code class="language-yaml">search {
action: allow
receiver: Eq(&quot;memberof&quot;, &quot;admins&quot;)
targetscope: Pres(&quot;class&quot;)
targetattr: legalName
targetattr: displayName
description: Allow admins to read all users names
}
search {
action: allow
receiver: Self
targetscope: Self
targetattr: homeAddress
description: Allow everyone to read only their own homeAddress
}
delete {
action: allow
receiver: Or(Eq(&quot;memberof&quot;, &quot;admins), Eq(&quot;memberof&quot;, &quot;servicedesk&quot;))
targetscope: Eq(&quot;memberof&quot;, &quot;tempaccount&quot;)
description: Allow admins or servicedesk to delete any member of &quot;temp accounts&quot;.
}
// This difference in targetscope behaviour could be justification to change the keyword here
// to prevent confusion.
create {
action: allow
receiver: Eq(&quot;name&quot;, &quot;alice&quot;)
targetscope: And(Eq(&quot;class&quot;, &quot;person&quot;), Eq(&quot;location&quot;, &quot;AU&quot;))
createattr: location
createattr: legalName
createattr: mail
createclass: person
createclass: object
description: Allow alice to make new persons, only with class person+object, and only set
the attributes mail, location and legalName. The created object must conform to targetscope
}
modify {
action: allow
receiver: Eq(&quot;name&quot;, &quot;claire&quot;)
targetscope: And(Eq(&quot;class&quot;, &quot;group&quot;), Eq(&quot;name&quot;, &quot;admins&quot;))
presentattr: member
description: Allow claire to promote people as members of the admins group.
}
modify {
action: allow
receiver: Eq(&quot;name&quot;, &quot;claire&quot;)
targetscope: And(Eq(&quot;class&quot;, &quot;person&quot;), Eq(&quot;memberof&quot;, &quot;students&quot;))
presentattr: sshkeys
presentattr: class
targetclass: unixuser
description: Allow claire to modify persons in the students group, and to grant them the
class of unixuser (only this class can be granted!). Subsequently, she may then give
the sshkeys values as a modification.
}
modify {
action: allow
receiver: Eq(&quot;name&quot;, &quot;alice&quot;)
targetscope: Eq(&quot;memberof&quot;, &quot;students&quot;)
removedattr: sshkeys
description: Allow allice to purge or remove sshkeys from members of the students group,
but not add new ones
}
modify {
action: allow
receiver: Eq(&quot;name&quot;, &quot;alice&quot;)
targetscope: Eq(&quot;memberof&quot;, &quot;students&quot;)
removedattr: sshkeys
presentattr: sshkeys
description: Allow alice full control over the ssh keys attribute on members of students.
}
// This may not be valid: Perhaps if &lt;*&gt;attr: is on modify/create, then targetclass, must
// must be set, else class is considered empty.
//
// This profile could in fact be an invalid example, because presentattr: class, but not
// targetclass, so nothing could be granted.
modify {
action: allow
receiver: Eq(&quot;name&quot;, &quot;alice&quot;)
targetscope: Eq(&quot;memberof&quot;, &quot;students&quot;)
presentattr: class
description: Allow alice to grant any class to members of students.
}
</code></pre>
<h2 id="formalised-schema"><a class="header" href="#formalised-schema">Formalised Schema</a></h2>
<p>A complete schema would be:</p>
<h3 id="attributes"><a class="header" href="#attributes">Attributes</a></h3>
<div class="table-wrapper"><table><thead><tr><th>Name</th><th>Single/Multi</th><th>Type</th><th>Description</th></tr></thead><tbody>
<tr><td>acp_allow</td><td>single value</td><td>bool</td><td></td></tr>
<tr><td>acp_enable</td><td>single value</td><td>bool</td><td>This ACP is enabled</td></tr>
<tr><td>acp_receiver</td><td>single value</td><td>filter</td><td>???</td></tr>
<tr><td>acp_targetscope</td><td>single value</td><td>filter</td><td>???</td></tr>
<tr><td>acp_search_attr</td><td>multi value</td><td>utf8 case insense</td><td>A list of attributes that can be searched.</td></tr>
<tr><td>acp_create_class</td><td>multi value</td><td>utf8 case insense</td><td>Object classes in which an object can be created.</td></tr>
<tr><td>acp_create_attr</td><td>multi value</td><td>utf8 case insense</td><td>Attribute Entries that can be created.</td></tr>
<tr><td>acp_modify_removedattr</td><td>multi value</td><td>utf8 case insense</td><td>Modify if removed?</td></tr>
<tr><td>acp_modify_presentattr</td><td>multi value</td><td>utf8 case insense</td><td>???</td></tr>
<tr><td>acp_modify_class</td><td>multi value</td><td>utf8 case insense</td><td>???</td></tr>
</tbody></table>
</div>
<h3 id="classes"><a class="header" href="#classes">Classes</a></h3>
<div class="table-wrapper"><table><thead><tr><th>Name</th><th>Must Have</th><th>May Have</th></tr></thead><tbody>
<tr><td>access_control_profile</td><td><code>[acp_receiver, acp_targetscope]</code></td><td><code>[description, acp_allow]</code></td></tr>
<tr><td>access_control_search</td><td><code>[acp_search_attr]</code></td><td></td></tr>
<tr><td>access_control_delete</td><td></td><td></td></tr>
<tr><td>access_control_modify</td><td></td><td><code>[acp_modify_removedattr, acp_modify_presentattr, acp_modify_class]</code></td></tr>
<tr><td>access_control_create</td><td></td><td><code>[acp_create_class, acp_create_attr]</code></td></tr>
</tbody></table>
</div>
<p><strong>Important</strong>: empty sets really mean empty sets! </p>
<p>The ACP code will assert that both <code>access_control_profile</code> <em>and</em> one of the <code>search/delete/modify/create</code>
classes exists on an ACP. An important factor of this design is now the ability to <em>compose</em>
multiple ACP's into a single entry allowing a <code>create/delete/modify</code> to exist! However, each one must
still list their respective actions to allow proper granularity.</p>
<h2 id="search-application"><a class="header" href="#search-application">&quot;Search&quot; Application</a></h2>
<p>The set of access controls is checked, and the set where the receiver matches the currently identified
user is collected. These are then added to the user's requested search as:</p>
<pre><code>And(&lt;User Search Request&gt;, Or(&lt;Set of Search Profile Filters&gt;))
</code></pre>
<p>In this manner, the search security is easily applied: if the targets do not conform to one of the
required search profile filters, the outer <code>And</code> condition is nullified and no results are returned.</p>
<p>Once complete, in the translation of the entry -&gt; proto_entry, each access control and its allowed
set of attrs has to be checked to determine what of that entry can be displayed. Consider there are
three entries, A, B, C. An ACI that allows read of &quot;name&quot; on A, B exists, and a read of &quot;mail&quot; on
B, C. The correct behaviour is then:</p>
<pre><code>A: name
B: name, mail
C: mail
</code></pre>
<p>So this means that the <code>entry -&gt; proto entry</code> part is likely the most expensive part of the access
control operation, but also one of the most important. It may be possible to compile to some kind
of faster method, but initially a simple version is needed.</p>
<h2 id="delete-application"><a class="header" href="#delete-application">&quot;Delete&quot; Application</a></h2>
<p>Delete is similar to search, however there is the risk that the user may say something like:</p>
<pre><code>Pres(&quot;class&quot;).
</code></pre>
<p>Were we to approach this like search, this would then mean &quot;everything the identified user
is allowed to delete is deleted&quot;. A consideration here is that <code>Pres(&quot;class&quot;)</code> would delete &quot;all&quot;
objects in the directory, but with the access control present, it would limit the deletion to the
set of allowed deletes.</p>
<p>In a sense this is a correct behaviour - they were allowed to delete everything they asked to
delete. However, in another it's not valid: the request was broad and they were not allowed access
to delete everything they requested.</p>
<p>The possible abuse vector here is that an attacker could then use delete requests to enumerate the
existence of entries in the database that they do not have access to. This requires someone to have
the delete privilege which in itself is very high level of access, so this risk may be minimal.</p>
<p>So the choices are:</p>
<ol>
<li>Treat it like search and allow the user to delete what they are allowed to delete,
but ignore other objects</li>
<li>Deny the request because their delete was too broad, and they must specify a valid deletion request.</li>
</ol>
<p>Option #2 seems more correct because the <code>delete</code> request is an explicit request, not a request where
you want partial results. Imagine someone wants to delete users A and B at the same time, but only
has access to A. They want this request to fail so they KNOW B was not deleted, rather than it
succeed and have B still exist with a partial delete status.</p>
<p>However, a possible issue is that Option #2 means that a delete request of
<code>And(Eq(attr, allowed_attribute), Eq(attr, denied))</code>, which is rejected may indicate presence of the
<code>denied</code> attribute. So option #1 may help in preventing a security risk of information disclosure.</p>
<!-- TODO
@yaleman: not always, it could indicate that the attribute doesn't exist so it's an invalid filter, but
that would depend if the response was "invalid" in both cases, or "invalid" / "refused"
-->
<p>This is also a concern for modification, where the modification attempt may or may not
fail depending on the entries and if you can/can't see them.</p>
<p><strong>IDEA:</strong> You can only <code>delete</code>/<code>modify</code> within the read scope you have. If you can't
read it (based on the read rules of <code>search</code>), you can't <code>delete</code> it. This is in addition to the filter
rules of the <code>delete</code> applying as well. So performing a <code>delete</code> of <code>Pres(class)</code>, will only delete
in your <code>read</code> scope and will never disclose if you are denied access.</p>
<!-- TODO
@yaleman: This goes back to the commentary on Option #2 and feels icky like SQL's `DELETE FROM <table>` just deleting everything. It's more complex from the client - you have to search for a set of things to delete - then delete them.
Explicitly listing the objects you want to delete feels.... way less bad. This applies to modifies too.  😁
-->
<h2 id="create-application"><a class="header" href="#create-application">&quot;Create&quot; Application</a></h2>
<p>Create seems like the easiest to apply. Ensure that only the attributes in <code>createattr</code> are in the
<code>createevent</code>, ensure the classes only contain the set in <code>createclass</code>, then finally apply
<code>filter_no_index</code> to the entry. If all of this passes, the create is allowed.</p>
<p>A key point is that there is no union of <code>create</code> ACI's - the WHOLE ACI must pass, not parts of
multiple. This means if a control say &quot;allows creating group with member&quot; and &quot;allows creating
user with name&quot;, creating a group with <code>name</code> is not allowed - despite your ability to create
an entry with <code>name</code>, its classes don't match. This way, the administrator of the service can define
create controls with specific intent for how they will be used without the risk of two
controls causing unintended effects (<code>users</code> that are also <code>groups</code>, or allowing invalid values).</p>
<p>An important consideration is how to handle overlapping ACI. If two ACI <em>could</em> match the create
should we enforce that both conditions are upheld? Or is a single upheld ACI sufficient to allow the create?</p>
<p>In some cases it may not be possible to satisfy both, and that would block creates. The intent
of the access profile is that &quot;something like this CAN&quot; be created, so I believe that provided
only a single control passes, the create should be allowed.</p>
<h2 id="modify-application"><a class="header" href="#modify-application">&quot;Modify&quot; Application</a></h2>
<p>Modify is similar to Create, however we specifically filter on the <code>modlist</code> action of <code>present</code>,
<code>removed</code> or <code>purged</code>. The rules of create still apply; provided all requirements
of the modify are permitted, then it is allowed once at least one profile allows the change.</p>
<p>A key difference is that if the modify ACP lists multiple <code>presentattr</code> types, the modify request
is valid even if it only modifies one of those attributes. For example, if the ACP lists <code>presentattr: name, email</code>,
a request that only modifies <code>email</code> is still allowed.</p>
<h2 id="considerations"><a class="header" href="#considerations">Considerations</a></h2>
<ul>
<li>When should access controls be applied? During an operation, we only validate schema after
pre* Plugin application, so likely it has to be &quot;at that point&quot;, to ensure schema-based
validity of the entries that are allowed to be changed.</li>
<li>Self filter keyword should compile to <code>eq(&quot;uuid&quot;, &quot;....&quot;)</code>. When do we do this and how?</li>
<li><code>memberof</code> could take <code>name</code> or <code>uuid</code>, we need to be able to resolve this correctly, but this is
likely an issue in <code>memberof</code> which needs to be addressed, ie <code>memberof uuid</code> vs <code>memberof attr</code>.</li>
<li>Content controls in <code>create</code> and <code>modify</code> will be important to get right to avoid the security issues
of LDAP access controls. Given that <code>class</code> has special importance, it's only right to give it extra
consideration in these controls.</li>
<li>In the future when <code>recyclebin</code> is added, a <code>re-animation</code> access profile should be created allowing
revival of entries given certain conditions of the entry we are attempting to revive. A service-desk user
should not be able to revive a deleted high-privilege user.</li>
</ul>
<div style="break-before: page; page-break-before: always;"></div><h1 id="rest-interface"><a class="header" href="#rest-interface">REST Interface</a></h1>
<table>
<tr>
<td rowspan=2><img src="developers/designs//images/kani-warning.png" alt="Kani Warning" /></td>
<td><strong></strong></td>
</tr>
<tr>
<td>Here begins some early notes on the REST interface - much better ones are in the repository's designs directory.</td>
</tr>
</table>
<p>There's an endpoint at <code>/&lt;api_version&gt;/routemap</code> (for example, https://localhost/v1/routemap) which is based on the API routes as they get instantiated.</p>
<p>It's <em>very, very, very</em> early work, and should not be considered stable at all.</p>
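<p>A quick way to inspect it is with any HTTPS client. For example (the hostname, port and the <code>-k</code> flag for a self-signed development certificate are assumptions, adjust them for your instance):</p>
<pre><code class="language-shell">curl -k https://localhost/v1/routemap
</code></pre>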
<p>An example of some elements of the output is below:</p>
<pre><code class="language-json">{
&quot;routelist&quot;: [
{
&quot;path&quot;: &quot;/&quot;,
&quot;method&quot;: &quot;GET&quot;
},
{
&quot;path&quot;: &quot;/robots.txt&quot;,
&quot;method&quot;: &quot;GET&quot;
},
{
&quot;path&quot;: &quot;/ui/&quot;,
&quot;method&quot;: &quot;GET&quot;
},
{
&quot;path&quot;: &quot;/v1/account/:id/_unix/_token&quot;,
&quot;method&quot;: &quot;GET&quot;
},
{
&quot;path&quot;: &quot;/v1/schema/attributetype/:id&quot;,
&quot;method&quot;: &quot;GET&quot;
},
{
&quot;path&quot;: &quot;/v1/schema/attributetype/:id&quot;,
&quot;method&quot;: &quot;PUT&quot;
},
{
&quot;path&quot;: &quot;/v1/schema/attributetype/:id&quot;,
&quot;method&quot;: &quot;PATCH&quot;
}
]
}
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="kanidm-python-module"><a class="header" href="#kanidm-python-module">Kanidm Python Module</a></h1>
<p>So far it includes:</p>
<ul>
<li>asyncio methods for all calls, leveraging <a href="https://pypi.org/project/aiohttp/">aiohttp</a></li>
<li>every class and function is fully python typed (test by running <code>make test/pykanidm/mypy</code>)</li>
<li>test coverage for 95% of code, and most of the missing bit is just when you break things</li>
<li>loading configuration files into nice models using <a href="https://pypi.org/project/pydantic/">pydantic</a></li>
<li>basic password authentication</li>
<li>pulling RADIUS tokens</li>
</ul>
<p>TODO: a lot of things.</p>
<h2 id="setting-up-your-dev-environment"><a class="header" href="#setting-up-your-dev-environment">Setting up your dev environment.</a></h2>
<p>Setting up a dev environment can be a little complex because of the mono-repo.</p>
<ol>
<li>Install poetry: <code>python -m pip install poetry</code>. This is what we use to manage the packages, and allows you to set up virtual python environments easier.</li>
<li>Build the base environment. From within the <code>pykanidm</code> directory, run: <code>poetry install</code> This'll set up a virtual environment and install all the required packages (and development-related ones)</li>
<li>Start editing!</li>
</ol>
<p>Most IDEs will be happier if you open the <code>kanidm_rlm_python</code> or <code>pykanidm</code> directories as the base you are working from, rather than the <code>kanidm</code> repository root, so they can auto-load integrations etc.</p>
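<p>Putting the steps together, a minimal setup session might look like the following sketch (the <code>make</code> target is the one mentioned above and is assumed to be run from the repository root):</p>
<pre><code class="language-shell"># install poetry and build the virtual environment
python -m pip install poetry
cd pykanidm
poetry install
# optionally, verify the type annotations from the repository root
cd ..
make test/pykanidm/mypy
</code></pre>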
<h2 id="building-the-documentation"><a class="header" href="#building-the-documentation">Building the documentation</a></h2>
<p>To build a static copy of the docs, run:</p>
<pre><code class="language-shell">make docs/pykanidm/build
</code></pre>
<p>You can also run a local live server by running:</p>
<pre><code class="language-shell">make docs/pykanidm/serve
</code></pre>
<p>This'll expose a web server at <a href="http://localhost:8000">http://localhost:8000</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="radius-module-development"><a class="header" href="#radius-module-development">RADIUS Module Development</a></h1>
<p>Setting up a dev environment has some extra complexity due to the mono-repo design.</p>
<ol>
<li>Install poetry: <code>python -m pip install poetry</code>. This is what we use to manage the packages, and allows you to set up virtual python environments easier.</li>
<li>Build the base environment. From within the kanidm_rlm_python directory, run: <code>poetry install</code></li>
<li>Install the <code>kanidm</code> python library: <code>poetry run python -m pip install ../pykanidm</code></li>
<li>Start editing!</li>
</ol>
<p>Most IDEs will be happier if you open the <code>kanidm_rlm_python</code> or <code>pykanidm</code> directories as the base you are working from, rather than the <code>kanidm</code> repository root, so they can auto-load integrations etc.</p>
<h2 id="running-a-test-radius-container"><a class="header" href="#running-a-test-radius-container">Running a test RADIUS container</a></h2>
<p>From the root directory of the Kanidm repository:</p>
<ol>
<li>Build the container - this'll give you a container image called <code>kanidm/radius</code> with the tag <code>devel</code>:</li>
</ol>
<pre><code class="language-shell">make build/radiusd
</code></pre>
<ol start="2">
<li>Once the process has completed, check the container exists in your docker environment:</li>
</ol>
<pre><code class="language-shell">➜ docker image ls kanidm/radius
REPOSITORY TAG IMAGE ID CREATED SIZE
kanidm/radius devel 5dabe894134c About a minute ago 622MB
</code></pre>
<p><em>Note:</em> If you're just looking to play with a pre-built container, images are also automatically built based on the development branch and available at <code>ghcr.io/kanidm/radius:devel</code></p>
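<p>For example, to pull and check the pre-built image instead of building it locally:</p>
<pre><code class="language-shell">docker pull ghcr.io/kanidm/radius:devel
docker image ls ghcr.io/kanidm/radius
</code></pre>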
<ol start="3">
<li>Generate some self-signed certificates by running the script - just hit enter on all the prompts if you don't want to customise them. This'll put the files in <code>/tmp/kanidm</code>:</li>
</ol>
<pre><code class="language-shell">./insecure_generate_tls.sh
</code></pre>
<ol start="4">
<li>Run the container: </li>
</ol>
<pre><code class="language-shell">cd kanidm_rlm_python &amp;&amp; ./run_radius_container.sh
</code></pre>
<p>You can pass the following environment variables to <code>run_radius_container.sh</code> to set other options:</p>
<ul>
<li>IMAGE: an alternative image such as <code>ghcr.io/kanidm/radius:devel</code></li>
<li>CONFIG_FILE: mount your own config file</li>
</ul>
<p>For example:</p>
<pre><code class="language-shell">IMAGE=ghcr.io/kanidm/radius:devel \
CONFIG_FILE=~/.config/kanidm \
./run_radius_container.sh
</code></pre>
<h2 id="testing-authentication"><a class="header" href="#testing-authentication">Testing authentication</a></h2>
<p>Authentication can be tested through the client.localhost Network Access Server (NAS) configuration with:</p>
<pre><code class="language-shell">docker exec -i -t radiusd radtest \
&lt;username&gt; badpassword \
127.0.0.1 10 testing123
docker exec -i -t radiusd radtest \
&lt;username&gt; &lt;radius show_secret value here&gt; \
127.0.0.1 10 testing123
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="oauth2"><a class="header" href="#oauth2">OAuth2</a></h1>
<p>OAuth is a web authorisation protocol that allows &quot;single sign on&quot;. It's key to note
OAuth is authorisation, not authentication, as the protocol in its default forms
does not provide identity or authentication information, only information that
an entity is authorised for the requested resources.</p>
<p>OAuth can tie into extensions allowing an identity provider to reveal information
about authorised sessions. This extends OAuth from an authorisation only system
to a system capable of identity and authorisation. Two primary methods of this
exist today: RFC7662 token introspection, and OpenID connect.</p>
<h2 id="how-does-oauth2-work"><a class="header" href="#how-does-oauth2-work">How Does OAuth2 Work?</a></h2>
<p>A user wishes to access a service (resource, resource server). The resource
server does not have an active session for the client, so it redirects to the
authorisation server (Kanidm) to determine if the client should be allowed to proceed and
whether it has the appropriate permissions (scopes) for the requested resources.</p>
<p>The authorisation server checks the current session of the user and may present
a login flow if required. Given the identity of the user known to the authorisation
server, and the requested scopes, the authorisation server decides whether to
allow the authorisation to proceed. The user is then prompted to consent to the
authorisation from the authorisation server to the resource server as some identity
information may be revealed by granting this consent.</p>
<p>If successful and consent given, the user is redirected back to the resource server with an
authorisation code. The resource server then contacts the authorisation server directly with this
code and exchanges it for a valid token that may be provided to the user's browser.</p>
<p>The resource server may then optionally contact the token introspection endpoint of the
authorisation server about the provided OAuth token, which yields extra metadata about the identity
that holds the token from the authorisation. This metadata may include identity information,
but may also include extended metadata, sometimes referred to as &quot;claims&quot;. Claims are
information bound to a token based on properties of the session that may allow
the resource server to make extended authorisation decisions without the need
to contact the authorisation server to arbitrate.</p>
<p>It's important to note that OAuth2 at its core is an authorisation system which has layered
identity-providing elements on top.</p>
<h3 id="resource-server"><a class="header" href="#resource-server">Resource Server</a></h3>
<p>This is the server that a user wants to access. Common examples could be Nextcloud, a wiki,
or something else. This is the system that &quot;needs protecting&quot; and wants to delegate authorisation
decisions to Kanidm.</p>
<p>It's important for you to know <em>how</em> your resource server supports OAuth2. For example, does it
support RFC 7662 token introspection or does it rely on OpenID connect for identity information?
Does the resource server support PKCE S256?</p>
<p>In general Kanidm requires that your resource server supports:</p>
<ul>
<li>HTTP basic authentication to the authorisation server</li>
<li>PKCE S256 code verification to prevent certain token attack classes</li>
<li>OIDC only - JWT ES256 for token signatures</li>
</ul>
<p>Kanidm will expose its OAuth2 APIs at the following URLs:</p>
<ul>
<li>user auth url: https://idm.example.com/ui/oauth2</li>
<li>api auth url: https://idm.example.com/oauth2/authorise</li>
<li>token url: https://idm.example.com/oauth2/token</li>
<li>token inspect url: https://idm.example.com/oauth2/inspect</li>
</ul>
<p>OpenID Connect discovery - you need to substitute your OAuth2 client id in the following
urls:</p>
<ul>
<li>OpenID connect issuer uri: https://idm.example.com/oauth2/openid/:client_id:/</li>
<li>OpenID connect discovery: https://idm.example.com/oauth2/openid/:client_id:/.well-known/openid-configuration</li>
</ul>
<p>For manual OpenID configuration:</p>
<ul>
<li>OpenID connect userinfo: https://idm.example.com/oauth2/openid/:client_id:/userinfo</li>
<li>token signing public key: https://idm.example.com/oauth2/openid/:client_id:/public_key.jwk</li>
</ul>
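<p>As a sanity check, once a resource server exists you can fetch its discovery document with any HTTP client. The client id <code>nextcloud</code> below is only an example, matching the resource server created later in this chapter:</p>
<pre><code class="language-shell">curl https://idm.example.com/oauth2/openid/nextcloud/.well-known/openid-configuration
</code></pre>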
<h3 id="scope-relationships"><a class="header" href="#scope-relationships">Scope Relationships</a></h3>
<p>For an authorisation to proceed, the resource server will request a list of scopes, which are
unique to that resource server. For example, when a user wishes to login to the admin panel
of the resource server, it may request the &quot;admin&quot; scope from Kanidm for authorisation. But when
a user wants to login, it may only request &quot;access&quot; as a scope from Kanidm.</p>
<p>As each resource server may have its own scopes and understanding of these, Kanidm isolates
scopes to each resource server connected to Kanidm. Kanidm has two methods of granting scopes to accounts (users).</p>
<p>The first are implicit scopes. These are scopes granted to all accounts that Kanidm holds.</p>
<p>The second is scope mappings. These provide a set of scopes if a user is a member of a specific
group within Kanidm. This allows you to create a relationship between the scopes of a resource
server, and the groups/roles in Kanidm which can be specific to that resource server.</p>
<p>For an authorisation to proceed, all scopes requested must be available in the final scope set
that is granted to the account. This final scope set can be built from implicit and mapped
scopes.</p>
<p>This use of scopes is the primary means to control who can access what resources. For example, if
you have a resource server that will always request a scope of &quot;read&quot;, then you can limit the
&quot;read&quot; scope to a single group of users by a scope map so that only they may access that resource.</p>
<h2 id="configuration"><a class="header" href="#configuration">Configuration</a></h2>
<h3 id="create-the-kanidm-configuration"><a class="header" href="#create-the-kanidm-configuration">Create the Kanidm Configuration</a></h3>
<p>After you have understood your resource server requirements you first need to configure Kanidm.
By default members of &quot;system_admins&quot; or &quot;idm_hp_oauth2_manage_priv&quot; are able to create or
manage OAuth2 resource server integrations.</p>
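<p>If your administrative account is not yet a member of one of these groups, a system administrator can add it. This is only a sketch; <code>demo_user</code> is a hypothetical account name:</p>
<pre><code class="language-shell">kanidm group add_members --name admin idm_hp_oauth2_manage_priv demo_user
</code></pre>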
<p>You can create a new resource server with:</p>
<pre><code>kanidm system oauth2 create &lt;name&gt; &lt;displayname&gt; &lt;origin&gt;
kanidm system oauth2 create nextcloud &quot;Nextcloud Production&quot; https://nextcloud.example.com
</code></pre>
<p>If you wish to create implicit scopes you can set these with:</p>
<pre><code>kanidm system oauth2 set_implicit_scopes &lt;name&gt; [scopes]...
kanidm system oauth2 set_implicit_scopes nextcloud login read_user
</code></pre>
<p>You can create a scope map with:</p>
<pre><code>kanidm system oauth2 create_scope_map &lt;name&gt; &lt;kanidm_group_name&gt; [scopes]...
kanidm system oauth2 create_scope_map nextcloud nextcloud_admins admin
</code></pre>
<blockquote>
<p><strong>WARNING</strong>
If you are creating an OpenID Connect (OIDC) resource server you <em>MUST</em> provide a
scope map OR implicit scope named 'openid'. Without this, OpenID clients <em>WILL NOT WORK</em></p>
</blockquote>
<blockquote>
<p><strong>HINT</strong>
OpenID connect allows a number of scopes that affect the content of the resulting
authorisation token. If one of the following scopes is requested by the OpenID client,
then the associated claims may be added to the authorisation token. It is not guaranteed
that all of the associated claims will be added.</p>
<ul>
<li>profile - (name, family_name, given_name, middle_name, nickname, preferred_username, profile, picture, website, gender, birthdate, zoneinfo, locale, and updated_at)</li>
<li>email - (email, email_verified)</li>
<li>address - (address)</li>
<li>phone - (phone_number, phone_number_verified)</li>
</ul>
</blockquote>
<p>Once created you can view the details of the resource server.</p>
<pre><code>kanidm system oauth2 get nextcloud
---
class: oauth2_resource_server
class: oauth2_resource_server_basic
class: object
displayname: Nextcloud Production
oauth2_rs_basic_secret: &lt;secret&gt;
oauth2_rs_name: nextcloud
oauth2_rs_origin: https://nextcloud.example.com
oauth2_rs_token_key: hidden
</code></pre>
<h3 id="configure-the-resource-server"><a class="header" href="#configure-the-resource-server">Configure the Resource Server</a></h3>
<p>On your resource server, you should configure the client ID as the &quot;oauth2_rs_name&quot; from
Kanidm, and the password to be the value shown in &quot;oauth2_rs_basic_secret&quot;. Ensure that
the code challenge/verification method is set to S256.</p>
<p>You should now be able to test authorisation.</p>
<h2 id="resetting-resource-server-security-material"><a class="header" href="#resetting-resource-server-security-material">Resetting Resource Server Security Material</a></h2>
<p>In the case of disclosure of the basic secret, or some other security event where you may wish
to invalidate a resource server's active sessions/tokens, you can reset the secret material of
the server with:</p>
<pre><code>kanidm system oauth2 reset_secrets
</code></pre>
<p>Each resource server has unique signing keys and access secrets, so this is limited to each
resource server.</p>
<h2 id="extended-options-for-legacy-clients"><a class="header" href="#extended-options-for-legacy-clients">Extended Options for Legacy Clients</a></h2>
<p>Not all resource servers support modern standards like PKCE or ECDSA. In these situations
it may be necessary to disable these on a per-resource server basis. Disabling these on
one resource server will not affect others.</p>
<p>To disable PKCE for a resource server:</p>
<pre><code>kanidm system oauth2 warning_insecure_client_disable_pkce &lt;resource server name&gt;
</code></pre>
<p>To enable legacy cryptography (RSA PKCS1-5 SHA256):</p>
<pre><code>kanidm system oauth2 warning_enable_legacy_crypto &lt;resource server name&gt;
</code></pre>
<h2 id="example-integrations"><a class="header" href="#example-integrations">Example Integrations</a></h2>
<h3 id="apache-mod_auth_openidc"><a class="header" href="#apache-mod_auth_openidc">Apache mod_auth_openidc</a></h3>
<p>Add the following to a <code>mod_auth_openidc.conf</code>. It should be included in a <code>mods_enabled</code> folder
or with an appropriate include.</p>
<pre><code>OIDCRedirectURI /protected/redirect_uri
OIDCCryptoPassphrase &lt;random password here&gt;
OIDCProviderMetadataURL https://kanidm.example.com/oauth2/openid/&lt;resource server name&gt;/.well-known/openid-configuration
OIDCScope &quot;openid&quot;
OIDCUserInfoTokenMethod authz_header
OIDCClientID &lt;resource server name&gt;
OIDCClientSecret &lt;resource server password&gt;
OIDCPKCEMethod S256
OIDCCookieSameSite On
# Set the `REMOTE_USER` field to the `preferred_username` instead of the UUID.
# Remember that the username can change, but this can help with systems like Nagios which use this as a display name.
# OIDCRemoteUserClaim preferred_username
</code></pre>
<p>Other scopes can be added as required to the <code>OIDCScope</code> line, eg: <code>OIDCScope &quot;openid scope2 scope3&quot;</code></p>
<p>In the virtual host, to protect a location:</p>
<pre><code>&lt;Location /&gt;
AuthType openid-connect
Require valid-user
&lt;/Location&gt;
</code></pre>
<h3 id="nextcloud"><a class="header" href="#nextcloud">Nextcloud</a></h3>
<p>Install the module <a href="https://apps.nextcloud.com/apps/user_oidc">from the nextcloud market place</a> -
it can also be found in the Apps section of your deployment as &quot;OpenID Connect user backend&quot;.</p>
<p>In Nextcloud's config.php you need to allow connection to remote servers:</p>
<pre><code>'allow_local_remote_servers' =&gt; true,
</code></pre>
<p>You may optionally choose to add:</p>
<pre><code>'allow_user_to_change_display_name' =&gt; false,
'lost_password_link' =&gt; 'disabled',
</code></pre>
<p>If you forget this, you may see the following error in logs:</p>
<pre><code>Host 172.24.11.129 was not connected to because it violates local access rules
</code></pre>
<p>This module does not support PKCE or ES256. You will need to run:</p>
<pre><code>kanidm system oauth2 warning_insecure_client_disable_pkce &lt;resource server name&gt;
kanidm system oauth2 warning_enable_legacy_crypto &lt;resource server name&gt;
</code></pre>
<p>In the settings menu, configure the discovery URL and client ID and secret.</p>
<p>You can choose to disable other login methods with:</p>
<pre><code>php occ config:app:set --value=0 user_oidc allow_multiple_user_backends
</code></pre>
<p>You can log in directly by appending <code>?direct=1</code> to your login page URL. You can re-enable
other backends by setting the value to <code>1</code>.</p>
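<p>For example, to re-enable the other login backends:</p>
<pre><code class="language-shell">php occ config:app:set --value=1 user_oidc allow_multiple_user_backends
</code></pre>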
<h3 id="velociraptor"><a class="header" href="#velociraptor">Velociraptor</a></h3>
<p>Velociraptor supports OIDC. To configure it select &quot;Authenticate with SSO&quot; then &quot;OIDC&quot; during
the interactive configuration generator. Alternatively, you can set the following keys in server.config.yaml:
<pre><code>GUI:
authenticator:
type: OIDC
    oidc_issuer: https://idm.example.com/oauth2/openid/:client_id:/
    oauth_client_id: &lt;resource server name&gt;
oauth_client_secret: &lt;resource server secret&gt;
</code></pre>
<p>Velociraptor does not support PKCE. You will need to run the following:</p>
<pre><code>kanidm system oauth2 warning_insecure_client_disable_pkce &lt;resource server name&gt;
</code></pre>
<p>Initial users are mapped via their email in the Velociraptor server.config.yaml config:</p>
<pre><code>GUI:
initial_users:
- name: &lt;email address&gt;
</code></pre>
<p>Accounts require the <code>openid</code> and <code>email</code> scopes to be authenticated. It is recommended you limit
these to a group with a scope map due to Velociraptor's high impact.</p>
<pre><code># kanidm group create velociraptor_users
# kanidm group add_members velociraptor_users ...
kanidm system oauth2 create_scope_map &lt;resource server name&gt; velociraptor_users openid email
</code></pre>
<h3 id="vouch-proxy"><a class="header" href="#vouch-proxy">Vouch Proxy</a></h3>
<p><em>You need to run at least version 0.37.0</em>.</p>
<p>Vouch Proxy supports multiple OAuth and OIDC login providers.
To configure it you need to pass:</p>
<pre><code class="language-yaml">oauth:
auth_url: https://idm.wherekanidmruns.com/ui/oauth2
callback_url: https://login.wherevouchproxyruns.com/auth
client_id: &lt;oauth2_rs_name&gt; # Found in kanidm system oauth2 get XXXX (should be the same as XXXX)
client_secret: &lt;oauth2_rs_basic_secret&gt; # Found in kanidm system oauth2 get XXXX
code_challenge_method: S256
provider: oidc
scopes:
    - email # Important, vouch proxy requires a username (but does not use the proper scope, sub) or an email, see https://github.com/vouch/vouch-proxy/issues/309, 310
token_url: https://idm.wherekanidmruns.com/oauth2/token
user_info_url: https://idm.wherekanidmruns.com/oauth2/openid/&lt;oauth2_rs_name&gt;/userinfo
</code></pre>
<p>The <code>email</code> scope needs to be passed and thus the attribute needs to exist in
the account:</p>
<pre><code>kanidm login --name idm_admin
kanidm account person extend YYYY --mail &quot;YYYY@somedomain.com&quot; --name idm_admin
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="pam-and-nsswitch"><a class="header" href="#pam-and-nsswitch">PAM and nsswitch</a></h1>
<p><a href="http://linux-pam.org">PAM</a> and <a href="https://en.wikipedia.org/wiki/Name_Service_Switch">nsswitch</a>
are the core mechanisms used by Linux and BSD clients to resolve identities from an IDM service
like Kanidm into accounts that can be used on the machine for various interactive tasks.</p>
<h2 id="the-unix-daemon"><a class="header" href="#the-unix-daemon">The UNIX Daemon</a></h2>
<p>Kanidm provides a UNIX daemon that runs on any client that wants to use PAM and nsswitch integration.
The daemon can cache the accounts for users who have unreliable networks, or who leave
the site where Kanidm is hosted. The daemon is also able to cache missing-entry responses to reduce network
traffic and main server load.</p>
<p>Additionally, running the daemon means that the PAM and nsswitch integration libraries can be small,
helping to reduce the attack surface of the machine. Similarly, a tasks daemon is available that can
create home directories on first login and supports several features related to aliases and links to
these home directories.</p>
<p>We recommend you install the client daemon from your system package manager:</p>
<pre><code># OpenSUSE
zypper in kanidm-unixd-clients
# Fedora
dnf install kanidm-unixd-clients
</code></pre>
<p>You can check the daemon is running on your Linux system with:</p>
<pre><code>systemctl status kanidm-unixd
</code></pre>
<p>You can check the privileged tasks daemon is running with:</p>
<pre><code>systemctl status kanidm-unixd-tasks
</code></pre>
<blockquote>
<p><strong>NOTE</strong> The <code>kanidm_unixd_tasks</code> daemon is not required for PAM and nsswitch functionality.
If disabled, your system will function as usual. It is, however, recommended due to the features
it provides supporting Kanidm's capabilities.</p>
</blockquote>
<p>Both unixd daemons use the connection configuration from /etc/kanidm/config. This is covered in
<a href="integrations/./client_tools.html#kanidm-configuration">client_tools</a>.</p>
<p>You can also configure some unixd-specific options with the file /etc/kanidm/unixd:</p>
<pre><code>pam_allowed_login_groups = [&quot;posix_group&quot;]
default_shell = &quot;/bin/sh&quot;
home_prefix = &quot;/home/&quot;
home_attr = &quot;uuid&quot;
home_alias = &quot;spn&quot;
uid_attr_map = &quot;spn&quot;
gid_attr_map = &quot;spn&quot;
</code></pre>
<p><code>pam_allowed_login_groups</code> defines a set of POSIX groups; members of any of these
groups will be allowed to log in via PAM. All POSIX users and groups can be resolved by nss
regardless of PAM login status. Each entry may be a group name, spn, or uuid.</p>
<p><code>default_shell</code> is the default shell for users. Defaults to <code>/bin/sh</code>.</p>
<p><code>home_prefix</code> is the prepended path to where home directories are stored. Must end with
a trailing <code>/</code>. Defaults to <code>/home/</code>.</p>
<p><code>home_attr</code> is the default token attribute used for the home directory path. Valid
choices are <code>uuid</code>, <code>name</code>, <code>spn</code>. Defaults to <code>uuid</code>.</p>
<p><code>home_alias</code> is the default token attribute used for generating symlinks
pointing to the user's home directory. If set, this will become the home path value
returned to nss calls. It is recommended you choose a &quot;human friendly&quot; attribute here.
Valid choices are <code>none</code>, <code>uuid</code>, <code>name</code>, <code>spn</code>. Defaults to <code>spn</code>.</p>
<blockquote>
<p><strong>NOTICE:</strong>
All users in Kanidm can change their name (and their spn) at any time. If you change
<code>home_attr</code> from <code>uuid</code> you <em>must</em> have a plan on how to manage these directory renames
in your system. We recommend that you have a stable ID (like the UUID), and symlinks
from the name to the UUID folder. Automatic support is provided for this via the unixd
tasks daemon, as documented here.</p>
</blockquote>
<p><code>uid_attr_map</code> chooses which attribute is used for domain local users in presentation. Defaults
to <code>spn</code>. Users from a trust will always use spn.</p>
<p><code>gid_attr_map</code> chooses which attribute is used for domain local groups in presentation. Defaults
to <code>spn</code>. Groups from a trust will always use spn.</p>
<p>You can then check the communication status of the daemon:</p>
<pre><code>kanidm_unixd_status
</code></pre>
<p>If the daemon is working, you should see:</p>
<pre><code>[2020-02-14T05:58:37Z INFO kanidm_unixd_status] working!
</code></pre>
<p>If it is not working, you will see an error message:</p>
<pre><code>[2020-02-14T05:58:10Z ERROR kanidm_unixd_status] Error -&gt;
Os { code: 111, kind: ConnectionRefused, message: &quot;Connection refused&quot; }
</code></pre>
<p>For more information, see the
<a href="integrations/./pam_and_nsswitch.html#troubleshooting">Troubleshooting</a> section.</p>
<h2 id="nsswitch"><a class="header" href="#nsswitch">nsswitch</a></h2>
<p>When the daemon is running you can add the nsswitch libraries to /etc/nsswitch.conf</p>
<pre><code>passwd: compat kanidm
group: compat kanidm
</code></pre>
<p>You can <a href="integrations/./accounts_and_groups.html#creating-accounts">create a user</a> then
<a href="integrations/./posix_accounts.html#enabling-posix-attributes-on-accounts">enable POSIX feature on the user</a>.</p>
<p>You can then test that the POSIX extended user is able to be resolved with:</p>
<pre><code>getent passwd &lt;account name&gt;
getent passwd testunix
testunix:x:3524161420:3524161420:testunix:/home/testunix:/bin/sh
</code></pre>
<p>You can also do the same for groups.</p>
<pre><code>getent group &lt;group name&gt;
getent group testgroup
testgroup:x:2439676479:testunix
</code></pre>
<blockquote>
<p><strong>HINT</strong> Remember to also create a UNIX password with something like
<code>kanidm account posix set_password --name idm_admin demo_user</code>.
Otherwise there will be no credential for the account to authenticate. </p>
</blockquote>
<h2 id="pam"><a class="header" href="#pam">PAM</a></h2>
<blockquote>
<p><strong>WARNING:</strong> Modifications to PAM configuration <em>may</em> leave your system in a state
where you are unable to login or authenticate. You should always have a recovery
shell open while making changes (for example, root), or have access to single-user mode
at the machine's console.</p>
</blockquote>
<p>Pluggable Authentication Modules (PAM) is the mechanism a UNIX-like system uses
to authenticate users and to control access to some resources. It is
configured through a stack of modules
that are executed in order to evaluate the request; each module may
request or reuse authentication token information.</p>
<h3 id="before-you-start"><a class="header" href="#before-you-start">Before You Start</a></h3>
<p>You <em>should</em> backup your /etc/pam.d directory from its original state as you
<em>may</em> change the PAM configuration in a way that will not allow you
to authenticate to your machine.</p>
<pre><code>cp -a /etc/pam.d /root/pam.d.backup
</code></pre>
<h3 id="suse--opensuse"><a class="header" href="#suse--opensuse">SUSE / OpenSUSE</a></h3>
<p>To configure PAM on SUSE you must modify four files, which control the
various stages of authentication:</p>
<pre><code>/etc/pam.d/common-account
/etc/pam.d/common-auth
/etc/pam.d/common-password
/etc/pam.d/common-session
</code></pre>
<blockquote>
<p><strong>IMPORTANT</strong> By default these files are symlinks to their corresponding <code>-pc</code> file, for example
<code>common-account -&gt; common-account-pc</code>. If you directly edit these you are updating the inner
content of the <code>-pc</code> file and it WILL be reset on a future upgrade. To prevent this you must
first copy the <code>-pc</code> files. You can then edit the files safely.</p>
</blockquote>
<pre><code>cp /etc/pam.d/common-account-pc /etc/pam.d/common-account
cp /etc/pam.d/common-auth-pc /etc/pam.d/common-auth
cp /etc/pam.d/common-password-pc /etc/pam.d/common-password
cp /etc/pam.d/common-session-pc /etc/pam.d/common-session
</code></pre>
<p>The content should look like:</p>
<pre><code># /etc/pam.d/common-auth-pc
# Controls authentication to this system (verification of credentials)
auth required pam_env.so
auth [default=1 ignore=ignore success=ok] pam_localuser.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid &gt;= 1000 quiet_success
auth sufficient pam_kanidm.so ignore_unknown_user
auth required pam_deny.so
# /etc/pam.d/common-account-pc
# Controls authorisation to this system (who may login)
account [default=1 ignore=ignore success=ok] pam_localuser.so
account sufficient pam_unix.so
account [default=1 ignore=ignore success=ok] pam_succeed_if.so uid &gt;= 1000 quiet_success quiet_fail
account sufficient pam_kanidm.so ignore_unknown_user
account required pam_deny.so
# /etc/pam.d/common-password-pc
# Controls flow of what happens when a user invokes the passwd command. Currently does NOT
# interact with kanidm.
password [default=1 ignore=ignore success=ok] pam_localuser.so
password required pam_unix.so use_authtok nullok shadow try_first_pass
password [default=1 ignore=ignore success=ok] pam_succeed_if.so uid &gt;= 1000 quiet_success quiet_fail
password required pam_kanidm.so
# /etc/pam.d/common-session-pc
# Controls setup of the user session once a successful authentication and authorisation has
# occurred.
session optional pam_systemd.so
session required pam_limits.so
session optional pam_unix.so try_first_pass
session optional pam_umask.so
session [default=1 ignore=ignore success=ok] pam_succeed_if.so uid &gt;= 1000 quiet_success quiet_fail
session optional pam_kanidm.so
session optional pam_env.so
</code></pre>
<blockquote>
<p><strong>WARNING:</strong> Ensure that <code>pam_mkhomedir</code> or <code>pam_oddjobd</code> are <em>not</em> present in any stage of your
PAM configuration, as they interfere with the correct operation of the
Kanidm tasks daemon.</p>
</blockquote>
<h3 id="fedora--centos"><a class="header" href="#fedora--centos">Fedora / CentOS</a></h3>
<blockquote>
<p><strong>WARNING:</strong> Kanidm currently has no support for SELinux policy - this may mean you need to
run the daemon with permissive mode for the unconfined_service_t daemon type. To do this run:
<code>semanage permissive -a unconfined_service_t</code>. To undo this run <code>semanage permissive -d unconfined_service_t</code>.</p>
<p>You may also need to run <code>audit2allow</code> for sshd and other types to be able to access the UNIX daemon sockets.</p>
</blockquote>
<p>These files are managed by authselect as symlinks. You can either work with
authselect, or remove the symlinks first.</p>
<h4 id="without-authselect"><a class="header" href="#without-authselect">Without authselect</a></h4>
<p>If you just remove the symlinks, edit the files so their content matches the following:</p>
<pre><code># /etc/pam.d/password-auth
auth required pam_env.so
auth required pam_faildelay.so delay=2000000
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth [default=1 ignore=ignore success=ok] pam_localuser.so
auth sufficient pam_unix.so nullok try_first_pass
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth sufficient pam_kanidm.so ignore_unknown_user
auth required pam_deny.so
account sufficient pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_usertype.so issystem
account sufficient pam_kanidm.so ignore_unknown_user
account required pam_permit.so
password requisite pam_pwquality.so try_first_pass local_users_only
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password sufficient pam_kanidm.so
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_kanidm.so
</code></pre>
<pre><code># /etc/pam.d/system-auth
auth required pam_env.so
auth required pam_faildelay.so delay=2000000
auth sufficient pam_fprintd.so
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth [default=1 ignore=ignore success=ok] pam_localuser.so
auth sufficient pam_unix.so nullok try_first_pass
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth sufficient pam_kanidm.so ignore_unknown_user
auth required pam_deny.so
account sufficient pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_usertype.so issystem
account sufficient pam_kanidm.so ignore_unknown_user
account required pam_permit.so
password requisite pam_pwquality.so try_first_pass local_users_only
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password sufficient pam_kanidm.so
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_kanidm.so
</code></pre>
<h4 id="with-authselect"><a class="header" href="#with-authselect">With authselect</a></h4>
<p>To work with authselect:</p>
<p>You will need to
<a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel#creating-and-deploying-your-own-authselect-profile_configuring-user-authentication-using-authselect">create a new profile</a>.</p>
<!--TODO this URL is too short -->
<p>First run the following command:</p>
<pre><code>authselect create-profile kanidm -b sssd
</code></pre>
<p>A new folder, /etc/authselect/custom/kanidm, should be created. Inside that folder, create or
overwrite the following three files: nsswitch.conf, password-auth, system-auth.
password-auth and system-auth should be the same as above. nsswitch should be
modified for your use case. A working example looks like this:</p>
<pre><code>passwd: compat kanidm sss files systemd
group: compat kanidm sss files systemd
shadow: files
hosts: files dns myhostname
services: sss files
netgroup: sss files
automount: sss files
aliases: files
ethers: files
gshadow: files
networks: files dns
protocols: files
publickey: files
rpc: files
</code></pre>
<p>Then run:</p>
<pre><code>authselect select custom/kanidm
</code></pre>
<p>to update your profile.</p>
<h2 id="troubleshooting"><a class="header" href="#troubleshooting">Troubleshooting</a></h2>
<h3 id="check-posix-status-of-group-and-configuration"><a class="header" href="#check-posix-status-of-group-and-configuration">Check POSIX-status of Group and Configuration</a></h3>
<p>If authentication is failing via PAM, make sure that a list of groups is configured in <code>/etc/kanidm/unixd</code>:</p>
<pre><code>pam_allowed_login_groups = [&quot;example_group&quot;]
</code></pre>
<p>Check the status of the group with <code>kanidm group posix show example_group</code>.
If you get something similar to the following example:</p>
<pre><code class="language-shell">&gt; kanidm group posix show example_group
Using cached token for name idm_admin
Error -&gt; Http(500, Some(InvalidAccountState(&quot;Missing class: account &amp;&amp; posixaccount OR group &amp;&amp; posixgroup&quot;)),
&quot;b71f137e-39f3-4368-9e58-21d26671ae24&quot;)
</code></pre>
<p>POSIX-enable the group with <code>kanidm group posix set example_group</code>. You should get a result similar
to this when you search for your group name:</p>
<pre><code class="language-shell">&gt; kanidm group posix show example_group
[ spn: example_group@kanidm.example.com, gidnumber: 3443347205 name: example_group, uuid: b71f137e-39f3-4368-9e58-21d26671ae24 ]
</code></pre>
<p>Also, ensure the target user is in the group by running:</p>
<pre><code>&gt; kanidm group list_members example_group
</code></pre>
<h3 id="increase-logging"><a class="header" href="#increase-logging">Increase Logging</a></h3>
<p>For the unixd daemon, you can increase the logging with:</p>
<pre><code>systemctl edit kanidm-unixd.service
</code></pre>
<p>And add the lines:</p>
<pre><code>[Service]
Environment=&quot;RUST_LOG=kanidm=debug&quot;
</code></pre>
<p>Then restart the kanidm-unixd.service.</p>
<p>The same pattern is true for the kanidm-unixd-tasks.service daemon.</p>
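<p>For example:</p>
<pre><code class="language-shell">systemctl restart kanidm-unixd.service
systemctl restart kanidm-unixd-tasks.service
</code></pre>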
<p>To debug the pam module interactions add <code>debug</code> to the module arguments such as:</p>
<pre><code>auth sufficient pam_kanidm.so debug
</code></pre>
<h3 id="check-the-socket-permissions"><a class="header" href="#check-the-socket-permissions">Check the Socket Permissions</a></h3>
<p>Check that the <code>/var/run/kanidm-unixd/sock</code> has permissions mode 777, and that non-root readers can see it with
ls or other tools.</p>
<p>Ensure that <code>/var/run/kanidm-unixd/task_sock</code> has permissions mode 700, and
that it is owned by the kanidm unixd process user.</p>
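<p>A quick way to check both sockets (using the default paths described above) is:</p>
<pre><code class="language-shell">ls -l /var/run/kanidm-unixd/sock /var/run/kanidm-unixd/task_sock
</code></pre>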
<h3 id="verify-that-you-can-access-the-kanidm-server"><a class="header" href="#verify-that-you-can-access-the-kanidm-server">Verify that You Can Access the Kanidm Server</a></h3>
<p>You can check this with the client tools:</p>
<pre><code>kanidm self whoami --name anonymous
</code></pre>
<h3 id="ensure-the-libraries-are-correct"><a class="header" href="#ensure-the-libraries-are-correct">Ensure the Libraries are Correct</a></h3>
<p>You should have:</p>
<pre><code>/usr/lib64/libnss_kanidm.so.2
/usr/lib64/security/pam_kanidm.so
</code></pre>
<p>The exact path <em>may</em> change depending on your distribution; <code>pam_unix.so</code> should be co-located
with <code>pam_kanidm.so</code>. Look for it with the find command:</p>
<pre><code>find /usr/ -name 'pam_unix.so'
</code></pre>
<p>For example, on a Debian machine, it's located in <code>/usr/lib/x86_64-linux-gnu/security/</code>.</p>
<h3 id="increase-connection-timeout"><a class="header" href="#increase-connection-timeout">Increase Connection Timeout</a></h3>
<p>In some high-latency environments, you may need to increase the connection timeout. We set
this low to improve responsiveness on LANs, but over the internet it may need to be increased.
By increasing <code>conn_timeout</code>, you will be able to operate over higher latency
links, but some operations may take longer to complete.</p>
<p>By increasing <code>cache_timeout</code>, you will need to refresh less often, but changes such as an
account lockout or a group membership change may not take effect until the cache expires. Note that this has security
implications:</p>
<pre><code># /etc/kanidm/unixd
# Seconds
conn_timeout = 8
# Cache timeout
cache_timeout = 60
</code></pre>
<h3 id="invalidate-or-clear-the-cache"><a class="header" href="#invalidate-or-clear-the-cache">Invalidate or Clear the Cache</a></h3>
<p>You can invalidate the kanidm_unixd cache with:</p>
<pre><code>kanidm_cache_invalidate
</code></pre>
<p>You can clear (wipe) the cache with:</p>
<pre><code>kanidm_cache_clear
</code></pre>
<p>There is an important distinction between these two - invalidated cache items may still
be yielded to a client request if the communication to the main Kanidm server is not
possible. For example, you may have your laptop in a park without wifi.</p>
<p>Clearing the cache, however, completely wipes all local data about all accounts and groups.
If you are relying on this cached (but invalid) data, you may lose access to your accounts until
other communication issues have been resolved.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="radius"><a class="header" href="#radius">RADIUS</a></h1>
<p>Remote Authentication Dial In User Service (RADIUS) is a network protocol
that is commonly used to authenticate Wi-Fi devices or Virtual Private
Networks (VPNs). While it should not be a sole point of trust/authentication
to an identity, it's still an important control for protecting network resources.</p>
<p>Kanidm has a philosophy that each account can have multiple credentials which
are related to their devices, and limited to specific resources. RADIUS is
no exception and has a separate credential for each account to use for
RADIUS access.</p>
<h2 id="disclaimer"><a class="header" href="#disclaimer">Disclaimer</a></h2>
<p>It's worth noting some disclaimers about Kanidm's RADIUS integration.</p>
<h3 id="one-credential---one-account"><a class="header" href="#one-credential---one-account">One Credential - One Account</a></h3>
<p>Kanidm normally attempts to have credentials for each <em>device</em> and
<em>application</em> rather than the legacy model of one to one.</p>
<p>The RADIUS protocol is only able to attest a <em>single</em> credential in an
authentication attempt, which limits us to storing a single RADIUS credential
per account. However, despite this limitation, it still greatly improves the
situation by isolating the RADIUS credential from the primary or application
credentials of the account. This solves many common security concerns around
credential loss or disclosure, and prevents rogue devices from locking out
accounts as they attempt to authenticate to Wi-Fi with expired credentials.</p>
<h3 id="cleartext-credential-storage"><a class="header" href="#cleartext-credential-storage">Cleartext Credential Storage</a></h3>
<p>RADIUS offers many different types of tunnels and authentication mechanisms.
However, most client devices &quot;out of the box&quot; only attempt a single type when
a WPA2-Enterprise network is selected: MSCHAPv2 with PEAP. This is a
challenge-response protocol that requires clear text or Windows NT LAN
Manager (NTLM) credentials.</p>
<p>As MSCHAPv2 with PEAP is the only practical, universal RADIUS-type supported
on all devices with minimal configuration, we consider it imperative
that it MUST be supported as the default. Esoteric RADIUS types can be used
as well, but this is up to administrators to test and configure.</p>
<p>Due to this requirement, we must store the RADIUS material as clear text or
NTLM hashes. It would be silly to think that NTLM is secure as it relies on
the obsolete and deprecated MD4 cryptographic hash, providing only an
illusion of security.</p>
<p>This means Kanidm stores RADIUS credentials in the database as clear text.</p>
<p>We believe this is a reasonable decision and is a low risk to security because:</p>
<ul>
<li>The access controls around RADIUS secrets by default are strong, limited
to only self-account read and RADIUS-server read.</li>
<li>As RADIUS credentials are separate from the primary account credentials and
have no other rights, their disclosure is not going to lead to a full
account compromise.</li>
<li>Having the credentials in clear text allows a better user experience as
clients can view the credentials at any time to enroll further devices.</li>
</ul>
<h2 id="account-credential-configuration"><a class="header" href="#account-credential-configuration">Account Credential Configuration</a></h2>
<p>For an account to use RADIUS they must first generate a RADIUS secret unique
to that account. By default, all accounts can self-create this secret.</p>
<pre><code>kanidm account radius generate_secret --name william william
kanidm account radius show_secret --name william william
</code></pre>
<h2 id="account-group-configuration"><a class="header" href="#account-group-configuration">Account Group Configuration</a></h2>
<p>In Kanidm, accounts which can authenticate to RADIUS must be a member
of an allowed group. This allows you to define which users or groups may use
a Wi-Fi or VPN infrastructure, and provides a path for revoking access to the
resources through group management. The key point of this is that service
accounts should not be part of this group:</p>
<pre><code>kanidm group create --name idm_admin radius_access_allowed
kanidm group add_members --name idm_admin radius_access_allowed william
</code></pre>
<h2 id="radius-server-service-account"><a class="header" href="#radius-server-service-account">RADIUS Server Service Account</a></h2>
<p>To read these secrets, the RADIUS server requires an account with the
correct privileges. This can be created and assigned through the group
&quot;idm_radius_servers&quot;, which is provided by default.</p>
<p>First, create the account and add it to the group:</p>
<pre><code class="language-shell">kanidm account create --name admin radius_service_account &quot;Radius Service Account&quot;
kanidm group add_members --name admin idm_radius_servers radius_service_account
</code></pre>
<p>Now reset the account password, using the <code>admin</code> account:</p>
<pre><code class="language-shell">kanidm account credential update --name admin radius_service_account
</code></pre>
<h2 id="deploying-a-radius-container"><a class="header" href="#deploying-a-radius-container">Deploying a RADIUS Container</a></h2>
<p>We provide a RADIUS container that has all the needed integrations.
This container requires some cryptographic material, with the following files placed in <code>/etc/raddb/certs</code> (the path is modifiable in the configuration):</p>
<div class="table-wrapper"><table><thead><tr><th>filename</th><th>description</th></tr></thead><tbody>
<tr><td>ca.pem</td><td>The signing CA of the RADIUS certificate</td></tr>
<tr><td>dh.pem</td><td>The output of <code>openssl dhparam -in ca.pem -out ./dh.pem 2048</code></td></tr>
<tr><td>cert.pem</td><td>The certificate for the RADIUS server</td></tr>
<tr><td>key.pem</td><td>The signing key for the RADIUS certificate</td></tr>
</tbody></table>
</div>
<p>The configuration file (<code>/data/kanidm</code>) has the following template:</p>
<pre><code class="language-toml">uri = &quot;https://example.com&quot; # URL to the Kanidm server
verify_hostnames = true # verify the hostname of the Kanidm server
verify_ca = false # Strict CA verification
ca = /data/ca.pem # Path to the kanidm ca
username = # Username of the RADIUS service account
password = # Generated secret for the service account
# Default vlans for groups that don't specify one.
radius_default_vlan = 1
# A list of Kanidm groups which must be a member
# before they can authenticate via RADIUS.
radius_required_groups = [
&quot;radius_access_allowed&quot;,
]
# A mapping between Kanidm groups and VLANS
radius_groups = [
{ name = &quot;radius_access_allowed&quot;, vlan = 10 },
]
# A mapping of clients and their authentication tokens
radius_clients = [
{ name = &quot;test&quot;, ipaddr = &quot;127.0.0.1&quot;, secret = &quot;testing123&quot; },
# TODO: see if this works - it gets written out to the file
{ name = &quot;docker&quot; , ipaddr = &quot;172.17.0.0/16&quot;, secret = &quot;testing123&quot; },
]
# radius_cert_path = &quot;/etc/raddb/certs/cert.pem&quot;
# the signing key for radius TLS
# radius_key_path = &quot;/etc/raddb/certs/key.pem&quot;
# the diffie-hellman output
# radius_dh_path = &quot;/etc/raddb/certs/dh.pem&quot;
# the CA certificate
# radius_ca_path = &quot;/etc/raddb/certs/ca.pem&quot;
</code></pre>
<h2 id="a-fully-configured-example"><a class="header" href="#a-fully-configured-example">A fully configured example</a></h2>
<pre><code class="language-toml">url = &quot;https://example.com&quot;
username = &quot;radius_service_account&quot;
# The generated password from above
password = &quot;cr4bzr0ol&quot;
# default vlan for groups that don't specify one.
radius_default_vlan = 99
# if the user is in one of these Kanidm groups,
# then they're allowed to authenticate
radius_required_groups = [
&quot;radius_access_allowed&quot;,
]
radius_groups = [
{ name = &quot;radius_access_allowed&quot;, vlan = 10 }
]
radius_clients = [
{ name = &quot;localhost&quot;, ipaddr = &quot;127.0.0.1&quot;, secret = &quot;testing123&quot; },
{ name = &quot;docker&quot; , ipaddr = &quot;172.17.0.0/16&quot;, secret = &quot;testing123&quot; },
]
</code></pre>
<h2 id="moving-to-production"><a class="header" href="#moving-to-production">Moving to Production</a></h2>
<p>To expose this to a Wi-Fi infrastructure, add your NAS in the configuration:</p>
<pre><code class="language-toml">radius_clients = [
{ name = &quot;access_point&quot;, ipaddr = &quot;10.2.3.4&quot;, secret = &quot;&lt;a_random_value&gt;&quot; }
]
</code></pre>
<p>Then re-create/run your docker instance and expose the ports by adding
<code>-p 1812:1812 -p 1812:1812/udp</code> to the command.</p>
<p>If you have any issues, check the logs from the RADIUS output, as they tend
to indicate the cause of the problem. To increase the logging level you can
re-run your environment with debug enabled:</p>
<pre><code class="language-shell">docker rm radiusd
docker run --name radiusd \
-e DEBUG=True \
-p 1812:1812 \
    -p 1812:1812/udp \
--interactive --tty \
--volume /tmp/kanidm:/etc/raddb/certs \
kanidm/radius:latest
</code></pre>
<p>Note: the RADIUS container <em>is</em> configured to provide
<a href="https://freeradius.org/rfc/rfc2868.html#Tunnel-Private-Group-ID">Tunnel-Private-Group-ID</a>,
so if you wish to use Wi-Fi-assigned VLANs on your infrastructure, you can
assign these by groups in the configuration file as shown in the above examples.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="ldap"><a class="header" href="#ldap">LDAP</a></h1>
<p>While many applications can support systems like Security Assertion Markup
Language (SAML), or Open Authorization (OAuth), many do not.
Lightweight Directory Access Protocol (LDAP) has been the &quot;lingua franca&quot; of
authentication for many years, with almost every application in the world being
able to search and bind to LDAP. As many organizations still rely on LDAP, Kanidm
can host a read-only LDAP interface.</p>
<blockquote>
<p><strong>WARNING</strong> The LDAP server in Kanidm is not RFC compliant. This
is intentional, as Kanidm wants to cover the common use case,
simple bind and search.</p>
</blockquote>
<h2 id="what-is-ldap"><a class="header" href="#what-is-ldap">What is LDAP</a></h2>
<p>LDAP is a protocol to read data from a directory of information. It is not
a server, but a way to communicate to a server. There are many famous LDAP
implementations such as Active Directory, 389 Directory Server, DSEE,
FreeIPA, and many others. Because it is a standard, applications can use
an LDAP client library to authenticate users to LDAP, giving &quot;one account&quot; for
many applications - an IDM just like Kanidm!</p>
<h2 id="data-mapping"><a class="header" href="#data-mapping">Data Mapping</a></h2>
<p>Kanidm cannot be mapped 100% to LDAP's objects. This is because LDAP
types are simple key-values on objects which are all UTF8 strings (or subsets
thereof) based on validation (matching) rules. Kanidm internally implements complex
data types such as tagging on SSH keys, or multi-value credentials. These can not
be represented in LDAP.</p>
<p>Many of the structures in Kanidm do not correlate closely to LDAP. For example
Kanidm only has a GID number, where LDAP's schemas define both a UID number and a
GID number.</p>
<p>Entries in the database also have a specific name in LDAP, related to their path
in the directory tree. Kanidm is a flat model, so we have to emulate some tree-like
elements, and ignore others.</p>
<p>For this reason, when you search the LDAP interface, Kanidm will make some mapping decisions.</p>
<ul>
<li>The domain_info object becomes the suffix root.</li>
<li>All other entries are direct subordinates of the domain_info for DN purposes.</li>
<li>Distinguished Names (DNs) are generated from the entry's naming attributes.</li>
<li>Bind DNs can be remapped and rewritten, and may not even be a DN during bind.</li>
<li>The Kanidm domain name is used to generate the base DN.</li>
<li>The '*' and '+' operators can not be used in conjunction with attribute lists in searches.</li>
</ul>
<p>These decisions were made to make the path as simple and effective as possible,
relying more on the Kanidm query and filter system than attempting to generate a tree-like
representation of data. As almost all clients can use filters for entry selection
we don't believe this is a limitation for the consuming applications.</p>
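<p>As a hypothetical illustration, assuming the Kanidm domain is &quot;example.com&quot; and a person named &quot;test1&quot; exists, these decisions yield DNs such as:</p>
<pre><code># The domain_info object becomes the suffix root
dc=example,dc=com
# All other entries are named directly under it by a naming attribute
spn=test1@example.com,dc=example,dc=com
</code></pre>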
<h2 id="security"><a class="header" href="#security">Security</a></h2>
<h3 id="tls-1"><a class="header" href="#tls-1">TLS</a></h3>
<p>StartTLS is not supported due to security risks. LDAPS is the only secure method
of communicating to any LDAP server. Kanidm, if configured with certificates, will
use them for LDAPS (and will not listen on a plaintext LDAP port). If no certificates exist
Kanidm will listen on a plaintext LDAP port, and you MUST TLS terminate in front
of the Kanidm system to secure data and authentication.</p>
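<p>To confirm which mode your deployment ended up in, one generic check (not a Kanidm-specific tool; the address below is an assumption) is to probe the port with OpenSSL:</p>
<pre><code># Completes a TLS handshake and prints the certificate chain if LDAPS is active;
# fails to negotiate TLS if the port is serving plaintext LDAP.
openssl s_client -connect 127.0.0.1:3636
</code></pre>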
<h3 id="access-controls"><a class="header" href="#access-controls">Access Controls</a></h3>
<p>LDAP only supports password authentication. As LDAP is used heavily in POSIX environments
the LDAP bind for any DN will use its configured posix password.</p>
<p>As the POSIX password is not equivalent in strength to the primary credentials of Kanidm
(which may be multi-factor authentication, MFA), the LDAP bind does not grant
rights to elevated read permissions. All binds have the permissions of &quot;Anonymous&quot;
even if the anonymous account is locked.</p>
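<p>This means an account must have a POSIX password set before it can bind over LDAP. A sketch with the Kanidm CLI, assuming an account named &quot;test1&quot; (see the POSIX Accounts and Groups chapter for the full steps):</p>
<pre><code>kanidm account posix set_password --name idm_admin test1
</code></pre>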
<h2 id="server-configuration-1"><a class="header" href="#server-configuration-1">Server Configuration</a></h2>
<p>To configure Kanidm to provide LDAP, add the argument to the <code>server.toml</code> configuration:</p>
<pre><code>ldapbindaddress = &quot;127.0.0.1:3636&quot;
</code></pre>
<p>You should configure TLS certificates and keys as usual - LDAP will re-use the Web
server TLS material.</p>
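<p>A minimal sketch of the relevant <code>server.toml</code> settings is shown below; the paths are placeholders and the TLS options are the same ones described in the Server Configuration chapter:</p>
<pre><code># TLS material is shared between the web server and LDAPS
tls_chain = &quot;/data/chain.pem&quot;
tls_key = &quot;/data/key.pem&quot;
# Enables the read-only LDAP interface
ldapbindaddress = &quot;127.0.0.1:3636&quot;
</code></pre>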
<h2 id="showing-ldap-entries-and-attribute-maps"><a class="header" href="#showing-ldap-entries-and-attribute-maps">Showing LDAP Entries and Attribute Maps</a></h2>
<p>By default Kanidm is limited in what attributes are generated or remapped into
LDAP entries. However, the server internally contains a map of extended attribute
mappings for application specific requests that must be satisfied.</p>
<p>An example is that some applications expect and require a 'CN' value, even though Kanidm does not
provide it. If the application is unable to be configured to accept &quot;name&quot; it may be necessary
to use Kanidm's mapping feature. Currently these are compiled into the server, so you may need to open
an issue with your requirements.</p>
<p>To show what attribute maps exist for an entry, you can use the attribute search term '+'.</p>
<pre><code># To show Kanidm attributes
ldapsearch ... -x '(name=admin)' '*'
# To show all attribute maps
ldapsearch ... -x '(name=admin)' '+'
</code></pre>
<p>Attributes that are in the map can be requested explicitly, and this can be combined with requesting
Kanidm native attributes.</p>
<pre><code>ldapsearch ... -x '(name=admin)' cn objectClass displayname memberof
</code></pre>
<h2 id="example"><a class="header" href="#example">Example</a></h2>
<p>Given a default install with domain &quot;example.com&quot; the configured LDAP DN will be &quot;dc=example,dc=com&quot;.
This can be queried with:</p>
<pre><code>cargo run -- server -D kanidm.db -C ca.pem -c cert.pem -k key.pem -b 127.0.0.1:8443 -l 127.0.0.1:3636
&gt; LDAPTLS_CACERT=ca.pem ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
# test1@example.com, example.com
dn: spn=test1@example.com,dc=example,dc=com
objectclass: account
objectclass: memberof
objectclass: object
objectclass: person
displayname: Test User
memberof: spn=group240@example.com,dc=example,dc=com
name: test1
spn: test1@example.com
entryuuid: 22a65b6c-80c8-4e1a-9b76-3f3afdff8400
</code></pre>
<p>It is recommended that client applications filter accounts that can log in with '(class=account)'
and groups with '(class=group)'. If possible, group membership is defined in RFC2307bis or
Active Directory style. This means groups are determined from the &quot;memberof&quot; attribute which contains
a DN to a group.</p>
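<p>For example, clients could select these entries with filters like the following (connection options elided, as above):</p>
<pre><code># Login-capable accounts, with their group memberships
ldapsearch ... -x '(class=account)' name spn memberof
# Groups
ldapsearch ... -x '(class=group)' name spn
</code></pre>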
<p>LDAP binds can use any unique identifier of the account. The following are all valid bind DNs for
the object listed above (if it was a POSIX account, that is).</p>
<pre><code>ldapwhoami ... -x -D 'name=test1'
ldapwhoami ... -x -D 'spn=test1@example.com'
ldapwhoami ... -x -D 'test1@example.com'
ldapwhoami ... -x -D 'test1'
ldapwhoami ... -x -D '22a65b6c-80c8-4e1a-9b76-3f3afdff8400'
ldapwhoami ... -x -D 'spn=test1@example.com,dc=example,dc=com'
ldapwhoami ... -x -D 'name=test1,dc=example,dc=com'
</code></pre>
<p>Most LDAP clients are very picky about TLS, and can be very hard to debug because they rarely
display useful errors. For example, these commands:</p>
<pre><code>ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3389 -b 'dc=example,dc=com' -x '(name=test1)'
</code></pre>
<p>All give the same error:</p>
<pre><code>ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
</code></pre>
<p>This is despite the fact that:</p>
<ul>
<li>The first command is a certificate validation error.</li>
<li>The second is a missing LDAPS on a TLS port.</li>
<li>The third is an incorrect port.</li>
</ul>
<p>To diagnose errors like this, you may need to add &quot;-d 1&quot; to your LDAP commands or client.</p>
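<p>For example, the first command above can be re-run with debugging enabled:</p>
<pre><code># &quot;-d 1&quot; prints libldap's connection and TLS diagnostics
ldapsearch -d 1 -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
</code></pre>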
<div style="break-before: page; page-break-before: always;"></div><h1 id="kubernetes-ingress"><a class="header" href="#kubernetes-ingress">Kubernetes Ingress</a></h1>
<p>Guard your Kubernetes ingress with Kanidm authentication and authorization.</p>
<h2 id="prerequisites"><a class="header" href="#prerequisites">Prerequisites</a></h2>
<p>We recommend you have the following before continuing:</p>
<ul>
<li><a href="examples/../installing_the_server.html">Kanidm</a> </li>
<li><a href="https://docs.k0sproject.io/v1.23.6+k0s.2/install/">Kubernetes v1.23 or above</a></li>
<li><a href="https://kubernetes.github.io/ingress-nginx/deploy/">Nginx Ingress</a></li>
<li>A fully qualified domain name with an A record pointing to your k8s ingress.</li>
<li><a href="https://cert-manager.io/docs/installation/">CertManager with a Cluster Issuer installed.</a></li>
</ul>
<h2 id="instructions"><a class="header" href="#instructions">Instructions</a></h2>
<ol>
<li>
<p>Create a Kanidm account and group:</p>
<ol>
<li>Create a Kanidm account. Please see the section <a href="examples/../accounts_and_groups.html">Creating Accounts</a>.</li>
<li>Give the account a password. Please see the section <a href="examples/../accounts_and_groups.html">Resetting Account Credentials</a>.</li>
<li>Make the account a person. Please see the section <a href="examples/../accounts_and_groups.html">People Accounts</a>.</li>
<li>Create a Kanidm group. Please see the section <a href="examples/../accounts_and_groups.html">Creating Accounts</a>.</li>
<li>Add the account you created to the group you created. Please see the section <a href="examples/../accounts_and_groups.html">Creating Accounts</a>.</li>
</ol>
</li>
<li>
<p>Create a Kanidm OAuth2 resource:</p>
<ol>
<li>Create the OAuth2 resource for your domain. Please see the section <a href="examples/../oauth2.html">Create the Kanidm Configuration</a>.</li>
<li>Add a scope mapping from the resource you created to the group you created with the openid, profile, and email scopes. Please see the section <a href="examples/../oauth2.html">Create the Kanidm Configuration</a>.</li>
</ol>
</li>
<li>
<p>Create a <code>Cookie Secret</code> for the placeholder <code>&lt;COOKIE_SECRET&gt;</code> in step 4:</p>
<pre><code class="language-shell">docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode(&quot;utf-8&quot;));'
</code></pre>
</li>
<li>
<p>Create a file called <code>k8s.kanidm-nginx-auth-example.yaml</code> with the block below. Replace every <code>&lt;string&gt;</code> (drop the <code>&lt;&gt;</code>) with appropriate values:</p>
<ol>
<li><code>&lt;FQDN&gt;</code>: The fully qualified domain name with an A record pointing to your k8s ingress.</li>
<li><code>&lt;KANIDM_FQDN&gt;</code>: The fully qualified domain name of your Kanidm deployment.</li>
<li><code>&lt;COOKIE_SECRET&gt;</code>: The output from step 3.</li>
<li><code>&lt;OAUTH2_RS_NAME&gt;</code>: Please see the output from step 2.1 or <a href="examples/../oauth2.html">get</a> the OAuth2 resource you created in that step.</li>
<li><code>&lt;OAUTH2_RS_BASIC_SECRET&gt;</code>: Please see the output from step 2.1 or <a href="examples/../oauth2.html">get</a> the OAuth2 resource you created in that step.</li>
</ol>
<p>This will deploy the following to your cluster:</p>
<ul>
<li><a href="https://github.com/modem7/docker-starwars">modem7/docker-starwars</a> - An example web site.</li>
<li><a href="https://oauth2-proxy.github.io/oauth2-proxy/">OAuth2 Proxy</a> - A OAuth2 proxy is used as an OAuth2 client with NGINX <a href="https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/">Authentication Based on Subrequest Result</a>.</li>
</ul>
<pre><code class="language-yaml">---
apiVersion: v1
kind: Namespace
metadata:
name: kanidm-example
labels:
pod-security.kubernetes.io/enforce: restricted
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: kanidm-example
name: website
labels:
app: website
spec:
revisionHistoryLimit: 1
replicas: 1
selector:
matchLabels:
app: website
template:
metadata:
labels:
app: website
spec:
containers:
- name: website
image: modem7/docker-starwars
imagePullPolicy: Always
ports:
- containerPort: 8080
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [&quot;ALL&quot;]
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
---
apiVersion: v1
kind: Service
metadata:
namespace: kanidm-example
name: website
spec:
selector:
app: website
ports:
- protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: lets-encrypt-cluster-issuer
nginx.ingress.kubernetes.io/auth-url: &quot;https://$host/oauth2/auth&quot;
nginx.ingress.kubernetes.io/auth-signin: &quot;https://$host/oauth2/start?rd=$escaped_request_uri&quot;
name: website
namespace: kanidm-example
spec:
ingressClassName: nginx
tls:
- hosts:
- &lt;FQDN&gt;
secretName: &lt;FQDN&gt;-ingress-tls # replace . with - in the hostname
rules:
- host: &lt;FQDN&gt;
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: website
port:
number: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: kanidm-example
spec:
replicas: 1
selector:
matchLabels:
k8s-app: oauth2-proxy
template:
metadata:
labels:
k8s-app: oauth2-proxy
spec:
containers:
- args:
- --provider=oidc
- --email-domain=*
- --upstream=file:///dev/null
- --http-address=0.0.0.0:4182
- --oidc-issuer-url=https://&lt;KANIDM_FQDN&gt;/oauth2/openid/&lt;OAUTH2_RS_NAME&gt;
- --code-challenge-method=S256
env:
- name: OAUTH2_PROXY_CLIENT_ID
value: &lt;OAUTH2_RS_NAME&gt;
- name: OAUTH2_PROXY_CLIENT_SECRET
value: &lt;OAUTH2_RS_BASIC_SECRET&gt;
- name: OAUTH2_PROXY_COOKIE_SECRET
value: &lt;COOKIE_SECRET&gt;
image: quay.io/oauth2-proxy/oauth2-proxy:latest
imagePullPolicy: Always
name: oauth2-proxy
ports:
- containerPort: 4182
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [&quot;ALL&quot;]
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: kanidm-example
spec:
ports:
- name: http
port: 4182
protocol: TCP
targetPort: 4182
selector:
k8s-app: oauth2-proxy
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: oauth2-proxy
namespace: kanidm-example
spec:
ingressClassName: nginx
rules:
- host: &lt;FQDN&gt;
http:
paths:
- path: /oauth2
pathType: Prefix
backend:
service:
name: oauth2-proxy
port:
number: 4182
tls:
- hosts:
- &lt;FQDN&gt;
secretName: &lt;FQDN&gt;-ingress-tls # replace . with - in the hostname
</code></pre>
</li>
<li>
<p>Apply the configuration by running the following command:</p>
<pre><code class="language-shell">kubectl apply -f k8s.kanidm-nginx-auth-example.yaml
</code></pre>
</li>
<li>
<p>Check your deployment succeeded by running the following commands:</p>
<pre><code class="language-shell">kubectl -n kanidm-example get all
kubectl -n kanidm-example get ingress
kubectl -n kanidm-example get Certificate
</code></pre>
<p>You may use kubectl's describe and logs commands for troubleshooting. If there are ingress errors see the Ingress NGINX documentation's <a href="https://kubernetes.github.io/ingress-nginx/troubleshooting/">troubleshooting page</a>. If there are certificate errors see the CertManager documentation's <a href="https://cert-manager.io/docs/faq/troubleshooting/">troubleshooting page</a>.</p>
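<p>For example (the resource names below come from the manifest above; adjust them if you changed it):</p>
<pre><code class="language-shell">kubectl -n kanidm-example describe ingress website
kubectl -n kanidm-example logs deployment/oauth2-proxy
</code></pre>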
<p>Once it has finished deploying, you will be able to access it at <code>https://&lt;FQDN&gt;</code> which will prompt you for authentication.</p>
</li>
</ol>
<h2 id="cleaning-up"><a class="header" href="#cleaning-up">Cleaning Up</a></h2>
<ol>
<li>
<p>Remove the resources created for this example from k8s:</p>
<pre><code class="language-shell">kubectl delete namespace kanidm-example
</code></pre>
</li>
<li>
<p>Remove the objects created for this example from Kanidm:</p>
<ol>
<li>Delete the account created in section Instructions step 1.</li>
<li>Delete the group created in section Instructions step 1.</li>
<li>Delete the OAuth2 resource created in section Instructions step 2.</li>
</ol>
</li>
</ol>
<h2 id="references"><a class="header" href="#references">References</a></h2>
<ol>
<li><a href="https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/">NGINX Ingress Controller: External OAUTH Authentication</a></li>
<li><a href="https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider#openid-connect-provider">OAuth2 Proxy: OpenID Connect Provider</a></li>
</ol>
<div style="break-before: page; page-break-before: always;"></div><h1 id="packaging"><a class="header" href="#packaging">Packaging</a></h1>
<p>Packages are known to exist for the following distributions:</p>
<ul>
<li><a href="https://aur.archlinux.org/packages?O=0&amp;K=kanidm">Arch Linux</a></li>
<li><a href="https://software.opensuse.org/search?baseproject=ALL&amp;q=kanidm">OpenSUSE</a></li>
<li><a href="https://search.nixos.org/packages?sort=relevance&amp;type=packages&amp;query=kanidm">NixOS</a></li>
</ul>
<p>To ease packaging for your distribution, the <code>Makefile</code> has targets for sets of binary outputs.</p>
<div class="table-wrapper"><table><thead><tr><th>Target</th><th>Description</th></tr></thead><tbody>
<tr><td><code>release/kanidm</code></td><td>Kanidm's CLI</td></tr>
<tr><td><code>release/kanidmd</code></td><td>The server daemon</td></tr>
<tr><td><code>release/kanidm-ssh</code></td><td>SSH-related utilities</td></tr>
<tr><td><code>release/kanidm-unixd</code></td><td>UNIX tools, PAM/NSS modules</td></tr>
</tbody></table>
</div><div style="break-before: page; page-break-before: always;"></div><h1 id="debian--ubuntu-packaging"><a class="header" href="#debian--ubuntu-packaging">Debian / Ubuntu Packaging</a></h1>
<h2 id="building-packages"><a class="header" href="#building-packages">Building packages</a></h2>
<p>This currently happens in Docker; here are some instructions for doing it for Ubuntu:</p>
<ol>
<li>Start in the root directory of the repository.</li>
<li>Run <code>./platform/debian/ubuntu_docker_builder.sh</code>. This will start a container with the repository mounted at <code>~/kanidm/</code>.</li>
<li>Install the required dependencies by running <code>./platform/debian/install_deps.sh</code>.</li>
<li>Building packages uses make; get a list of available targets by running <code>make -f ./platform/debian/Makefile help</code>.</li>
</ol>
<pre><code>➜ make -f platform/debian/Makefile help
debs/kanidm:
build a .deb for the Kanidm CLI
debs/kanidmd:
build a .deb for the Kanidm daemon
debs/kanidm-ssh:
build a .deb for the Kanidm SSH tools
debs/kanidm-unixd:
build a .deb for the Kanidm UNIX tools (PAM/NSS, unixd and related tools)
debs/all:
build all the debs
</code></pre>
<ol start="5">
<li>So if you wanted to build the package for the Kanidm CLI, run <code>make -f ./platform/debian/Makefile debs/kanidm</code>.</li>
<li>The package will be copied into the <code>target</code> directory of the repository on the docker host - not just in the container.</li>
</ol>
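<p>The package can then be installed on a matching host; for example (the exact filename depends on the version and architecture built, so the wildcard here is only illustrative):</p>
<pre><code>sudo apt install ./target/kanidm_*.deb
</code></pre>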
<h2 id="adding-a-package"><a class="header" href="#adding-a-package">Adding a package</a></h2>
<p>There's a set of default configuration files in <code>packaging/</code>. To add a package definition, add a folder with the package name; the files in that folder will be copied over the top of the ones from <code>packaging/</code> at build time.</p>
<p>You'll need two custom files at minimum:</p>
<ul>
<li><code>control</code> - a file containing information about the package.</li>
<li><code>rules</code> - a makefile doing all the build steps.</li>
</ul>
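<p>As a purely illustrative sketch (every value below is a placeholder, not Kanidm's actual packaging metadata), a minimal <code>control</code> file looks something like this:</p>
<pre><code>Source: kanidm
Section: admin
Priority: optional
Maintainer: Your Name &lt;you@example.com&gt;
Build-Depends: debhelper-compat (= 12)

Package: kanidm
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Kanidm CLI client
 Command line tools for the Kanidm identity management platform.
</code></pre>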
<p>There are a lot of other files that can go into a .deb; some handy ones are:</p>
<div class="table-wrapper"><table><thead><tr><th>Filename</th><th>What it does</th></tr></thead><tbody>
<tr><td>preinst</td><td>Runs before installation occurs</td></tr>
<tr><td>postrm</td><td>Runs after removal happens</td></tr>
<tr><td>prerm</td><td>Runs before removal happens - handy to shut down services.</td></tr>
<tr><td>postinst</td><td>Runs after installation occurs - we're using that to show notes to users</td></tr>
</tbody></table>
</div>
<h2 id="some-debian-packaging-links"><a class="header" href="#some-debian-packaging-links">Some Debian packaging links</a></h2>
<ul>
<li><a href="https://www.debian.org/doc/manuals/maint-guide/dreq.en.html">DH reference</a> - Explains what needs to be done for packaging (mostly).</li>
<li><a href="https://www.debian.org/doc/debian-policy/ch-controlfields">Reference for what goes in control files</a></li>
</ul>
</main>
<nav class="nav-wrapper" aria-label="Page navigation">
<!-- Mobile navigation buttons -->
<div style="clear: both"></div>
</nav>
</div>
</div>
<nav class="nav-wide-wrapper" aria-label="Page navigation">
</nav>
</div>
<script type="text/javascript">
window.playground_copyable = true;
</script>
<script src="elasticlunr.min.js" type="text/javascript" charset="utf-8"></script>
<script src="mark.min.js" type="text/javascript" charset="utf-8"></script>
<script src="searcher.js" type="text/javascript" charset="utf-8"></script>
<script src="clipboard.min.js" type="text/javascript" charset="utf-8"></script>
<script src="highlight.js" type="text/javascript" charset="utf-8"></script>
<script src="book.js" type="text/javascript" charset="utf-8"></script>
<!-- Custom JS scripts -->
<script type="text/javascript">
window.addEventListener('load', function() {
window.setTimeout(window.print, 100);
});
</script>
</body>
</html>