Improve our readme (#1150)

Firstyear 2022-10-26 08:18:25 +10:00 committed by GitHub
parent c4ecdf4447
commit 1ea3aa6dfc
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
15 changed files with 477 additions and 331 deletions

README.md

@ -1,30 +1,47 @@
<p align="center">
<img src="https://raw.githubusercontent.com/kanidm/kanidm/master/artwork/logo-small.png" width="20%" height="auto" />
</p>
# Kanidm
## About
Kanidm is a simple and secure identity management platform, which provides services to allow
other systems and applications to authenticate against it. The project aims for the highest levels
of reliability, security and ease of use.
The goal of this project is to be a complete identity management provider, covering the broadest
possible set of requirements and integrations. You should not need any other components (like Keycloak)
when you use Kanidm. We want to create a project that will be suitable for everything
from personal home deployments, to the largest enterprise needs.
To achieve this we rely heavily on strict defaults, simple configuration, and self-healing components.
The project is still growing and some areas are developing at a fast pace. The core of the server,
however, is reliable, and we make every effort to ensure upgrades will always work.
Kanidm supports:
* Oauth2/OIDC Authentication provider for web SSO
* Read only LDAPS gateway
* Linux/Unix integration (with offline authentication)
* SSH key distribution to Linux/Unix systems
* RADIUS for network authentication
* Passkeys / Webauthn for secure cryptographic authentication
* A self service web ui
* Complete CLI tooling for administration
If you want to host your own centralised authentication service, then Kanidm is for you!
## Documentation / Getting Started / Install
If you want to deploy Kanidm to see what it can do, you should read the Kanidm book.
- [Kanidm book (Latest stable)](https://kanidm.github.io/kanidm/stable/)
- [Kanidm book (Latest commit)](https://kanidm.github.io/kanidm/master/)
We also publish [support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md)
for what the project will support.
## Code of Conduct / Ethics
@ -42,6 +59,46 @@ answer questions via email, which can be found on their github profile.
[gitter community channel]: https://gitter.im/kanidm/community
## Comparison with other services
### LLDAP
[LLDAP](https://github.com/nitnelave/lldap) is a similar project, aiming to be a small and easy-to-administer
LDAP server with a web administration portal. Both projects use the [Kanidm LDAP bindings](https://github.com/kanidm/ldap3), and have
many similar ideas.
The primary benefit of Kanidm over LLDAP is that Kanidm offers a broader set of "built in" features
like Oauth2 and OIDC. To use these with LLDAP you need an external portal like Keycloak, where in Kanidm
they are "built in". However, this is also a strength of LLDAP: it offers "less", which may make
it easier to administer and deploy for you.
If Kanidm is too complex for your needs, you should check out LLDAP as a smaller alternative. If you
want a project which has a broader feature set out of the box, then Kanidm might be a better fit.
### 389-ds / OpenLDAP
Both 389-ds and OpenLDAP are generic LDAP servers. This means they only provide LDAP and you need
to bring your own IDM configuration on top.
If you need the highest levels of customisation possible from your LDAP deployment, then these are
probably better alternatives. If you want a service that is easier to set up and focused on IDM, then
Kanidm is a better choice.
Kanidm was originally inspired by many elements of both 389-ds and OpenLDAP. Kanidm is already as fast
as (or faster than) 389-ds in performance and scaling.
### FreeIPA
FreeIPA is another identity management service for Linux/Unix, and ships a huge number of features
including LDAP, Kerberos, DNS, a Certificate Authority, and more.
FreeIPA however is a complex system, with a huge number of parts and configuration options. This adds a lot
of resource overhead and administrative difficulty.
Kanidm aims for the feature richness of FreeIPA, but without the resource and administration
overheads. If you want a complete IDM package with a lighter footprint that is easier to manage, then
Kanidm is probably for you.
## Developer Getting Started
If you want to develop on the server, there is a getting started [guide for developers]. IDM
@ -50,50 +107,6 @@ all backgrounds.
[guide for developers]: https://kanidm.github.io/kanidm/master/DEVELOPER_README.html
## Features
### Implemented
* SSH key distribution for servers
* PAM/nsswitch clients (with limited offline auth)
* MFA - TOTP
* Highly concurrent design (MVCC, COW)
* RADIUS integration
* MFA - Webauthn
### Currently Working On
* CLI for administration
* WebUI for self-service with wifi enrollment, claim management and more.
* RBAC/Claims/Policy (limited by time and credential scope)
* OIDC/Oauth
### Upcoming Focus Areas
* Replication (async multiple active write servers, read-only servers)
### Future
* SSH CA management
* Sudo rule distribution via nsswitch
* WebUI for administration
* Account impersonation
* Synchronisation to other IDM services
## Some key project ideas
* All people should be respected and able to be represented securely.
* Devices represent users and their identities - they are part of the authentication.
* Human error occurs - we should be designed to minimise human mistakes and empower people.
* The system should be easy to understand and reason about for users and admins.
### Features We Want to Avoid
* Auditing: This is better solved by SIEM software, so we should generate data they can consume.
* Fully synchronous behaviour: This prevents scaling and our future ability to expand.
* Generic database: We don't want to be another NoSQL database, we want to be an IDM solution.
* Being like LDAP/GSSAPI/Kerberos: These are all legacy protocols that are hard to use and confine our thinking - we should avoid "being like them" or using them as models.
## What does Kanidm mean?
The original project name was rsidm while it was a thought experiment. Now that it's growing


@ -203,6 +203,26 @@ In a new terminal, you can now build and run the client tools with:
cargo run --bin kanidm -- login -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
### Raw actions
The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers
of entries at once. Some examples are below, but generally we advise you to use the APIs or CLI tools. These are,
however, very handy to "unbreak" something if you make a mistake!
# Create from json (group or account)
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json
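The referenced `.json` files are not shown here. As a purely hypothetical sketch of the shape such a create file can take (the attribute values below are illustrative, not taken from the repository's example files):

```json
{
  "attrs": {
    "class": ["object", "account", "person"],
    "name": ["demo_user"],
    "displayname": ["Demo User"]
  }
}
```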
# Apply a json stateful modification to all entries matching a filter
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"or": [ {"eq": ["name", "idm_person_account_create_priv"]}, {"eq": ["name", "idm_service_account_create_priv"]}, {"eq": ["name", "idm_account_write_priv"]}, {"eq": ["name", "idm_group_write_priv"]}, {"eq": ["name", "idm_people_write_priv"]}, {"eq": ["name", "idm_group_create_priv"]} ]}' example.modify.idm_admin.json
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "idm_admins"]}' example.modify.idm_admin.json
# Search and show the database representations
kanidm raw search -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"eq": ["name", "idm_admin"]}'
# Delete all entries matching a filter
kanidm raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "test_account_delete_me"]}'
### Building the Web UI
__NOTE:__ There is a pre-packaged version of the Web UI at `/kanidmd_web_ui/pkg/`,


@ -4,12 +4,17 @@
- [Glossary of Technical Terms](glossary.md)
- [Installing the Server](installing_the_server.md)
- [Choosing a Domain Name](choosing_a_domain_name.md)
- [Preparing for your Deployment](prepare_the_server.md)
- [Server Configuration and Install](server_configuration.md)
- [Server Updates](server_update.md)
- [Platform Security Hardening](security_hardening.md)
- [Client Tools](client_tools.md)
- [Installing client tools](installing_client_tools.md)
- [Accounts and Groups](accounts_and_groups.md)
- [Administration](administrivia.md)
- [Backup and Restore](backup_restore.md)
- [Database Maintenance](database_maint.md)
- [Domain Rename](domain_rename.md)
- [Monitoring the platform](monitoring.md)
- [Password Quality and Badlisting](password_quality.md)
- [POSIX Accounts and Groups](posix_accounts.md)


@ -5,68 +5,6 @@ a Kanidm server, such as making backups and restoring from backups, testing
server configuration, reindexing, verifying data consistency, and renaming
your domain.
# Configuration Test
{{#template
templates/kani-warning.md
imagepath=images
title=Take note!
text=While this is a configuration file test, it still needs to open the database so that it can check a number of internal values are consistent with the configuration. As a result, this requires the instance under config test to be stopped!
}}
You can test that your configuration is correct, and that the server will start correctly.
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
docker start <container name>
# Rename the domain
There are some cases where you may need to rename the domain. You should have configured
@ -101,67 +39,6 @@ Finally, you can now start your instance again.
docker start <container name>
# Raw actions
The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers


@ -0,0 +1,49 @@
# Backup and Restore
With any Identity Management (IDM) software, it's important you have the capability to restore in
case of a disaster - be that physical damage or a mistake. Kanidm supports backup
and restore of the database with three methods.
## Method 1 - Automatic Backup
Automatic backups can be generated online by a running `kanidmd` server instance
by including the `[online_backup]` section in `server.toml`.
This allows you to run regular backups, defined by a cron schedule, and maintain
the number of backup versions to keep. An example is located in
[examples/server.toml](https://github.com/kanidm/kanidm/blob/master/examples/server.toml).
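As a sketch of what that section can look like (the values here are placeholders you should adjust for your deployment):

```toml
[online_backup]
# Directory (inside the container volume) where backups are written.
path = "/data/kanidm.backup/"
# Cron-style schedule: here, every day at 22:00.
schedule = "00 22 * * *"
# Number of backup versions to retain.
versions = 7
```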
## Method 2 - Manual Backup
This method uses the same process as the automatic backup, but is manually invoked. This can
be useful for pre-upgrade backups.
To take the backup (assuming our docker environment) you first need to stop the instance:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd database backup -c /data/server.toml \
/backup/kanidm.backup.json
docker start <container name>
You can then restart your instance. DO NOT modify the backup.json as it may introduce
data errors into your instance.
To restore from the backup:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd database restore -c /data/server.toml \
/backup/kanidm.backup.json
docker start <container name>
## Method 3 - Manual Database Copy
This is a simple backup of the data volume.
docker stop <container name>
# Backup your docker's volume folder
docker start <container name>
Restoration is the reverse process.
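As a sketch of the copy step, using `tar` against a stand-in data directory (the real volume's backing path depends on your docker storage configuration, commonly under `/var/lib/docker/volumes/`):

```shell
# Stand-in for the kanidmd volume's backing directory.
DATA_DIR="$(mktemp -d)"
echo "stand-in database file" > "$DATA_DIR/kanidm.db"

# With the instance stopped, archive the data directory.
tar -C "$DATA_DIR" -czf /tmp/kanidmd-backup.tar.gz .

# Restoration is the reverse: extract the archive back into the volume.
tar -tzf /tmp/kanidmd-backup.tar.gz
```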


@ -22,8 +22,7 @@ have unique ownership of these domains, if you move your machine to a foreign ne
you may leak credentials or other cookies to these domains. TLS in a majority of cases can and will
protect you from such leaks however, but it should not always be relied upon as a sole line of defence.
Failure to use a unique domain you own may allow DNS hijacking or other credential leaks in some circumstances.
### Subdomains
@ -34,7 +33,7 @@ to cookies from `a.example.com` and `example.com`.
For this reason your kanidm host (or hosts) should be on a unique subdomain, with no other services
registered under that subdomain. For example, consider `idm.example.com` as a subdomain for exclusive
use of kanidm. This is *inverse* to Active Directory, which often has its domain name selected to be
the parent (toplevel) domain (`example.com`).
Failure to use a unique subdomain may allow cookies to leak to other entities within your domain, and
may allow webauthn to be used on entities you did not intend, which may or may not lead to some phishing
@ -42,7 +41,7 @@ scenarios.
## Examples
### Good Domain Names
Consider we own `kanidm.com`. If we were to run geographical instances, and have testing environments
the following domain and hostnames could be used.
@ -68,7 +67,7 @@ Note that due to the name being `idm.dev.kanidm.com` vs `idm.kanidm.com`, the te
a subdomain of production, meaning the cookies and webauthn tokens can NOT be transferred between
them. This provides proper isolation between the instances.
### Bad Domain Names
`idm.local` - This is a bad example as `.local` is an mDNS domain name suffix, which means that client
machines, if they visit another network, *may* try to contact `idm.local` believing they are on their


@ -0,0 +1,64 @@
# Database Maintenance
## Reindexing
In some (rare) cases you may need to reindex.
Please note the server will sometimes reindex on startup as a result of the project
changing its internal schema definitions. This is normal and expected - you may never need
to start a reindex yourself as a result!
You'll likely notice a need to reindex if you add indexes to schema and you see a message in
your logs such as:
Index EQUALITY name not found
Index {type} {attribute} not found
This indicates that an index of type equality has been added for name, but the indexing process
has not been run. The server will continue to operate and the query execution code will correctly
process the query - however it will not be the optimal method of delivering the results, as we need to
disregard this part of the query and act as though it's unindexed.
Reindexing will resolve this by forcing all indexes to be recreated based on their schema
definitions (this works even though the schema is in the same database!)
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd reindex -c /data/server.toml
docker start <container name>
Generally, reindexing is a rare action and should not normally be required.
## Vacuum
[Vacuuming](https://www.sqlite.org/lang_vacuum.html) is the process of reclaiming unused pages
from the sqlite freelists, as well as performing some data reordering tasks that may make some
queries more efficient. It is recommended that you vacuum after a reindex is performed or
when you wish to reclaim space in the database file.
Vacuum is also able to change the pagesize of the database. After changing `db_fs_type` (which affects
pagesize) in server.toml, you must run a vacuum for this to take effect:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd vacuum -c /data/server.toml
docker start <container name>
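For reference, `db_fs_type` is set in `server.toml`; a sketch (the value here is illustrative):

```toml
# Tune database page size for the underlying filesystem.
# "zfs" selects a larger page size; omit for the default.
db_fs_type = "zfs"
```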
## Verification
The server ships with a number of verification utilities to ensure that data is consistent,
such as referential integrity or memberof.
Note that verification really is a last resort - the server does _a lot_ to prevent and self-heal
from errors at run time, so you should rarely if ever require this utility. This utility was
developed to guarantee consistency during development!
You can run a verification with:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd verify -c /data/server.toml
docker start <container name>
If you have errors, please contact the project so we can help you resolve them.


@ -0,0 +1,33 @@
# Rename the domain
There are some cases where you may need to rename the domain. You should have configured
this initially in the setup, however you may have a situation where a business is changing
name, merging, or other needs that require the domain to be changed.
> **WARNING:** This WILL break ALL u2f/webauthn tokens that have been enrolled, which MAY cause
> accounts to be locked out and unrecoverable until further action is taken. DO NOT CHANGE
> the domain name unless REQUIRED and have a plan on how to manage these issues.
> **WARNING:** This operation can take an extensive amount of time as ALL accounts and groups
> in the domain MUST have their Security Principal Names (SPNs) regenerated. This WILL also cause
> a large delay in replication once the system is restarted.
You should make a backup before proceeding with this operation.
When you have created a migration plan and a strategy for handling the invalidation of webauthn,
you can then rename the domain.
First, stop the instance.
docker stop <container name>
Second, change `domain` and `origin` in `server.toml`.
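The relevant keys in `server.toml` look like the following sketch (values are examples only):

```toml
domain = "idm.example.com"
origin = "https://idm.example.com:8443"
```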
Third, trigger the database domain rename process.
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd domain rename -c /data/server.toml
Finally, you can now start your instance again.
docker start <container name>


@ -1,127 +1,6 @@
# Installing the Server
> **NOTE** Our preferred deployment method is in containers, the documentation assumes you're running in docker. Kanidm will run in traditional compute, and server builds are available for multiple platforms, or you can build the binaries yourself if you prefer this option.
Currently we have docker images for the server components. They can be found at:
- <https://hub.docker.com/r/kanidm/server>
- <https://hub.docker.com/r/kanidm/radius>
You can fetch these by running the commands:
docker pull kanidm/server:latest
docker pull kanidm/radius:latest
If you wish to use an x86\_64 cpu-optimised version (See System Requirements CPU), you should use:
docker pull kanidm/server:x86_64_latest
You may need to adjust your example commands throughout this document to suit.
## Development Version
If you are interested in running the latest code from development, you can do this by changing the
docker tag to `kanidm/server:devel` or `kanidm/server:x86_64_v3_devel` instead.
## System Requirements
#### CPU
If you are using the x86\_64 cpu-optimised version, you must have a CPU that is from 2013 or newer
(Haswell, Ryzen). The following instruction flags are used.
cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2,
bmi, bmi2, f16c, fma, lzcnt, movbe, xsave
Older or unsupported CPUs may raise a SIGILL (Illegal Instruction) on hardware that is not supported
by the project.
In this case, you should use the standard server:latest image.
In the future we may apply a baseline of flags as a requirement for x86\_64 for the server:latest
image. These flags will be:
cmov, cx8, fxsr, mmx, sse, sse2
#### Memory
Kanidm extensively uses memory caching, trading memory consumption to improve parallel throughput.
You should expect to see 64KB of ram per entry in your database, depending on cache tuning and settings.
#### Disk
You should expect to use up to 8KB of disk per entry you plan to store. At an estimate 10,000 entry
databases will consume 40MB, 100,000 entry will consume 400MB.
For best performance, you should use non-volatile memory express (NVME), or other Flash storage media.
## TLS
You'll need a volume where you can place configuration, certificates, and the database:
docker volume create kanidmd
You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring
Transport Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained in [why tls](./why_tls.md). In summary, TLS is our root of trust between the
server and clients, and a critical element of ensuring a secure system.
The key.pem should be a single PEM private key, with no encryption. The file content should be
similar to:
-----BEGIN RSA PRIVATE KEY-----
MII...<base64>
-----END RSA PRIVATE KEY-----
The chain.pem is a series of PEM formatted certificates. The leaf certificate, or the certificate
that matches the private key should be the first certificate in the file. This should be followed
by the series of intermediates, and the final certificate should be the CA root. For example:
-----BEGIN CERTIFICATE-----
<leaf certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<intermediate certificate>
-----END CERTIFICATE-----
[ more intermediates if needed ]
-----BEGIN CERTIFICATE-----
<ca/root certificate>
-----END CERTIFICATE-----
> **HINT**
> If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are already
> correctly formatted as required for Kanidm.
You can validate that the leaf certificate matches the key with the command:
# openssl rsa -noout -modulus -in key.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
# openssl x509 -noout -modulus -in chain.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
If your chain.pem contains the CA certificate, you can validate this file with the command:
openssl verify -CAfile chain.pem chain.pem
If your chain.pem does not contain the CA certificate (Let's Encrypt chains do not contain the CA
for example) then you can validate with this command.
openssl verify -untrusted fullchain.pem fullchain.pem
> **NOTE** Here "-untrusted" flag means a list of further certificates in the chain to build up
> to the root is provided, but that the system CA root should be consulted. Verification is NOT bypassed
> or allowed to be invalid.
If these verifications pass you can now use these certificates with Kanidm. To put the certificates
in place you can use a shell container that mounts the volume such as:
docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c "cp /work/* /data/"
OR for a shell into the volume:
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
# Continue on to [Configuring the Server](server_configuration.md)
This chapter will describe how to plan, configure, deploy and update your Kanidm instances.


@ -8,18 +8,15 @@ The intent of the Kanidm project is to:
* Provide a single truth source for accounts, groups and privileges.
* Enable integrations to systems and services so they can authenticate accounts.
* Make system, network, application and web authentication easy and accessible.
* Secure and reliable by default, aiming for the highest levels of quality.
{{#template
templates/kani-warning.md
imagepath=images
title=NOTICE
text=Kanidm is still a work in progress. Many features will evolve and change over time which may not be suitable for all users.
}}
## Why do I want Kanidm?
Whether you work in a business, a volunteer organisation, or are an enthusiast who manages
@ -92,5 +89,8 @@ to a device, if a device is compromised you can revoke its associated credential
specific service is compromised, only the credentials for that service need to be revoked.
Due to this model, and the design of Kanidm to centre the device and to have more per-service credentials,
workflows and automation are added or designed to reduce human handling.
## Library documentation
Looking for the `rustdoc` documentation for the libraries themselves? [Click here!](https://kanidm.com/documentation/)


@ -0,0 +1,140 @@
# Preparing for your Deployment
## Software Installation Method
> **NOTE** Our preferred deployment method is in containers, and this documentation assumes you're running in docker. Kanidm will alternatively run as a daemon/service, and server builds are available for multiple platforms if you prefer this option.
We provide docker images for the server components. They can be found at:
- <https://hub.docker.com/r/kanidm/server>
- <https://hub.docker.com/r/kanidm/radius>
You can fetch these by running the commands:
docker pull kanidm/server:x86_64_latest
docker pull kanidm/radius:latest
If you do not meet the [system requirements](#system-requirements) for your CPU you should use:
docker pull kanidm/server:latest
You may need to adjust your example commands throughout this document to suit your desired server type.
## Development Version
If you are interested in running the latest code from development, you can do this by changing the
docker tag to `kanidm/server:devel` or `kanidm/server:x86_64_v3_devel` instead. Many people run the
development version, and it is extremely reliable, but occasional rough patches may occur. If you
report issues, we will make every effort to help resolve them.
## System Requirements
#### CPU
If you are using the x86\_64 cpu-optimised version, you must have a CPU that is from 2013 or newer
(Haswell, Ryzen). The following instruction flags are used.
cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2,
bmi, bmi2, f16c, fma, lzcnt, movbe, xsave
Older or unsupported CPUs may raise a SIGILL (Illegal Instruction) on hardware that is not supported
by the project.
In this case, you should use the standard server:latest image.
In the future we may apply a baseline of flags as a requirement for x86\_64 for the server:latest
image. These flags will be:
cmov, cx8, fxsr, mmx, sse, sse2
{{#template
templates/kani-alert.md
imagepath=images
title=Tip
text=You can check your cpu flags on Linux with the command `lscpu`
}}
#### Memory
Kanidm extensively uses memory caching, trading memory consumption for improved parallel throughput.
You should expect to see 64KB of RAM used per entry in your database, depending on cache tuning and settings.
#### Disk
You should expect to use up to 8KB of disk per entry you plan to store. As an estimate, a 10,000 entry
database will consume about 40MB, and a 100,000 entry database about 400MB.
For best performance, you should use non-volatile memory express (NVME), or other Flash storage media.
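Using the per-entry figures above, a quick back-of-envelope calculation for a hypothetical 10,000 entry deployment. These are upper bounds, since the per-entry costs quoted are maximums; the 40MB disk estimate above assumes entries average well under the 8KB cap:

```shell
ENTRIES=10000
# 64KB of RAM per entry (cache dependent).
echo "ram:  $(( ENTRIES * 64 / 1024 )) MB"
# Up to 8KB of disk per entry.
echo "disk: $(( ENTRIES * 8 / 1024 )) MB"
```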
## TLS
You'll need a volume where you can place configuration, certificates, and the database:

```bash
docker volume create kanidmd
```
You should have a `chain.pem` and `key.pem` in your kanidmd volume. The reason for requiring
Transport Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained
in [why tls](./why_tls.md). In summary, TLS is our root of trust between the server and clients, and
a critical element of ensuring a secure system.
The `key.pem` should be a single PEM private key, with no encryption. The file content should be
similar to:

```
-----BEGIN RSA PRIVATE KEY-----
MII...<base64>
-----END RSA PRIVATE KEY-----
```
The `chain.pem` is a series of PEM formatted certificates. The leaf certificate, or the certificate
that matches the private key, should be the first certificate in the file. This should be followed
by the series of intermediates, and the final certificate should be the CA root. For example:

```
-----BEGIN CERTIFICATE-----
<leaf certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<intermediate certificate>
-----END CERTIFICATE-----
[ more intermediates if needed ]
-----BEGIN CERTIFICATE-----
<CA root certificate>
-----END CERTIFICATE-----
```
> **HINT**
> If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are already
> correctly formatted as required for Kanidm.
You can validate that the leaf certificate matches the key with the commands:

```bash
# ECDSA
openssl ec -in key.pem -pubout | openssl sha1
1c7e7bf6ef8f83841daeedf16093bda585fc5bb0
openssl x509 -in chain.pem -noout -pubkey | openssl sha1
1c7e7bf6ef8f83841daeedf16093bda585fc5bb0

# RSA
openssl rsa -noout -modulus -in key.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
openssl x509 -noout -modulus -in chain.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
```

The two digests must be identical for the key and the certificate to match.
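If you want to rehearse this check end-to-end without touching real key material, you can generate a
throwaway ECDSA key and self-signed certificate and compare their public key fingerprints the same
way (a sketch; the file names under `/tmp` are arbitrary):

```shell
# Generate a disposable ECDSA key and a matching self-signed certificate.
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/test-key.pem
openssl req -new -x509 -key /tmp/test-key.pem -out /tmp/test-chain.pem \
    -days 1 -subj "/CN=kanidm-test"

# The public key fingerprints are identical when the pair matches.
KEY_FP=$(openssl ec -in /tmp/test-key.pem -pubout 2>/dev/null | openssl sha1)
CERT_FP=$(openssl x509 -in /tmp/test-chain.pem -noout -pubkey | openssl sha1)
[ "$KEY_FP" = "$CERT_FP" ] && echo "key and certificate match"
```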
If your chain.pem contains the CA certificate, you can validate this file with the command:

```bash
openssl verify -CAfile chain.pem chain.pem
```
If your chain.pem does not contain the CA certificate (Let's Encrypt chains do not contain the CA,
for example) then you can validate with this command:

```bash
openssl verify -untrusted fullchain.pem fullchain.pem
```
> **NOTE** Here the "-untrusted" flag means a list of further certificates in the chain to build up
> to the root is provided, but the system CA root store is still consulted. Verification is NOT
> bypassed or allowed to be invalid.
If these verifications pass you can now use these certificates with Kanidm. To put the certificates
in place you can use a shell container that mounts the volume such as:

```bash
docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c "cp /work/* /data/"
```

Or, for a shell into the volume:

```bash
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
```
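With the certificates copied into the volume, the server needs to be told where they are. A minimal
sketch of the relevant `server.toml` fragment follows; the key names and `/data` paths reflect the
default container volume mount, and should be verified against the documentation for your installed
version:

```toml
# TLS material inside the kanidmd volume, mounted at /data in the container.
tls_chain = "/data/chain.pem"
tls_key = "/data/key.pem"
```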
# Security Hardening
Kanidm ships with a secure-by-default configuration, however that is only as strong
as the environment that Kanidm operates in. This could be your container environment
or your Unix-like system.
This chapter will detail a number of warnings and security practices you should
changes to help isolate these changes:

```bash
sed -i -e "s/db_path.*/db_path = \"\/data\/db\/kanidm.db\"/g" /data/server.toml
chown root:root /data/server.toml
chmod 644 /data/server.toml
```
Note that the example commands all run inside the docker container.
You can then use this to run the Kanidm server in docker with a user:
> **HINT**
> You need to use the UID or GID number with the `-u` argument, as the container can't resolve
> usernames from the host system.
## Updating the Server
### Preserving the Previous Image
You may wish to preserve the previous image before updating, so that you can roll back if an issue
is encountered during the upgrade:

```bash
docker tag kanidm/server:latest kanidm/server:<DATE>
# for example:
docker tag kanidm/server:latest kanidm/server:2022-10-24
```
### Update your Image
Pull the latest version of Kanidm that matches your CPU profile:

```bash
docker pull kanidm/server:latest
docker pull kanidm/server:x86_64_latest
```
### Perform a backup
See [backup and restore](backup_restore.md)
### Update your Instance
{{#template
templates/kani-warning.md
imagepath=images
title=WARNING
text=It is not always guaranteed that downgrades are possible. It is critical you know how to backup and restore before you proceed with this step.
}}
Docker updates work by deleting and recreating the container. All state that needs to be preserved
is in your storage volume.

```bash
docker stop <previous instance name>
```
You can test that your configuration is correct, and that the server will start correctly:

```bash
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
```
You can then follow through with the upgrade:

```bash
docker run -p PORTS -v kanidmd:/data \
    OTHER_CUSTOM_OPTIONS \
    kanidm/server:latest
```
Once you confirm the upgrade is successful you can delete the previous instance:

```bash
docker rm <previous instance name>
```
If you encounter an issue you can revert to the previous version:

```bash
docker stop <new instance name>
docker start <previous instance name>
```
If you deleted the previous instance, you can recreate it from your preserved tag instead:

```bash
docker run -p PORTS -v kanidmd:/data kanidm/server:<DATE>
```
In some cases the downgrade to the previous instance may not work. If the server from your previous
version fails to start, you may need to restore from backup.
<table>
<tr>
<td rowspan=2><img src="[[#imagepath]]/kani-alert.png" alt="Kani Alert" /></td>
<td><strong>[[#title]]</strong></td>
</tr>
<tr>
<td>[[#text]]</td>
</tr>
</table>
## Release Schedule
Kanidm is released on a 3 month (quarterly) basis.
* February 1st
* May 1st
## Support
Releases during alpha will receive limited fixes once released. Specifically we will resolve:
* Moderate security issues and above
* Flaws leading to data loss or corruption
* Other quality fixes at the discretion of the project team
These will be backported to the latest stable branch only.
## API stability