20191202 documentation (#156)

Add an initial skeleton and draft of a book, which should be maintained and improved as the server is developed to help guide users.
This commit is contained in:
Firstyear 2019-12-03 16:03:05 +10:00 committed by GitHub
parent 646261ebf7
commit b579c5395c
14 changed files with 686 additions and 395 deletions

GETTING_STARTED.md

@ -1,333 +0,0 @@
# Getting Started
WARNING: This document is still in progress, and due to the high rate of change in the cli
tooling, may be OUT OF DATE or otherwise incorrect. If you have questions, please get
in contact!
The goal of this getting started guide is to give you a quick setup and an overview of how you can
set up a working RADIUS environment with Kanidm.
# Deploying with docker
Currently we have docker images built from git master. They can be found at:
https://hub.docker.com/r/firstyear/kanidmd
https://hub.docker.com/r/firstyear/kanidm_radius
First we'll deploy the main server. You'll need a volume where you can put certificates and
the database:
docker volume create kanidmd
You should have a ca.pem, cert.pem and key.pem in your kanidmd volume. The reason for requiring
TLS is explained in [why tls]. To put the certificates in place you can use a shell container
that mounts the volume such as:
[why tls]: https://github.com/Firstyear/kanidm/blob/master/designs/why_tls.rst
docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest cp /work/* /data/
OR for a shell into the volume:
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
Then you can set up the initial admin account and initialise the database into your volume.
docker run --rm -i -t -v kanidmd:/data firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd recover_account -D /data/kanidm.db -n admin
You then want to set your domain name:
docker run --rm -i -t -v kanidmd:/data firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd domain_name_change -D /data/kanidm.db -n idm.example.com
Now we can run the server. Previously you had to specify all options on the command line, but now
the domain_name is stored in the database.
docker run -p 8443:8443 -v kanidmd:/data firstyear/kanidmd:latest
# Using the cli
For now, the CLI is still built from source - we'll make a tools container soon!
After you check out the source, navigate to:
cd kanidm_tools
cargo build
Now you can check your instance is working. You may need to provide a CA certificate for verification
with the -C parameter:
cargo run -- self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous
cargo run -- self whoami -H https://localhost:8443 --name anonymous
Now you can take some time to look at what commands are available - things may still be rough so
please ask for help at any time.
# Setting up some accounts and groups
The system admin account (the account you recovered in the setup) has limited privileges - only to
manage high-privilege accounts and services. This is to help separate system administration
from identity administration actions.
You should generate a secure password for the idm_admin account now, by using the admin account to
reset that credential.
cargo run -- account credential generate_password -H ... --name admin idm_admin
Generated password for idm_admin: tqoReZfz....
It's a good idea to use "generate_password" for high-security accounts, as it produces strong,
random passwords.
We can now use idm_admin to create groups and accounts.
cargo run -- group create radius_access_allowed -H ... --name idm_admin
cargo run -- account create demo_user "Demonstration User" -H ... --name idm_admin
cargo run -- group add_members radius_access_allowed demo_user -H ... --name idm_admin
cargo run -- group list_members radius_access_allowed -H ... --name idm_admin
cargo run -- account get demo_user -H ... --name idm_admin
You can also use anonymous to view users and groups - note that you won't see as many fields due
to the different anonymous access profile limits!
cargo run -- account get demo_user -H ... --name anonymous
Finally, perform a password reset on the demo_user - we'll be using this account from now on to show how
accounts can be self-sufficient.
cargo run -- account credential set_password demo_user -H ... --name idm_admin
cargo run -- self whoami -H ... --name demo_user
# RADIUS
Let's make it so that demo_user can authenticate to our RADIUS. It's an important concept in kanidm
that accounts can have *multiple* credentials, each with unique functions and claims (permissions)
to limit their scope of access. An example of this is that an account has a distinction between
the interactive (primary) credential and the RADIUS credentials.
When you ran set_password above, you were resetting the primary credential of the account. The
account can now *self manage* its own RADIUS credential, which is isolated from the primary
credential. To demonstrate, we can have the account self-generate a new RADIUS credential and
then retrieve it when required.
cargo run -- account radius generate_secret demo_user -H ... --name demo_user
cargo run -- account radius show_secret demo_user -H ... --name demo_user
# Radius secret: lyjr-d8...
To read these secrets, the radius server requires a service account. We can create this and
assign it the appropriate privilege group (note we do this as admin, not idm_admin, because this
modifies a high-privilege group, which idm_admin is *not* allowed to do):
cargo run -- account create radius_service_account "Radius Service Account" -H ... --name admin
cargo run -- group add_members idm_radius_servers radius_service_account -H ... --name admin
cargo run -- account get radius_service_account -H ... --name admin
cargo run -- account credential generate_password radius_service_account -H ... --name admin
Now that we have a user configured with RADIUS secrets, we can set up a radius container to authenticate
with it. You will need a volume that contains:
data
data/ca.pem # This is the kanidm ca.pem
data/config.ini
data/certs
data/certs/dh # openssl dhparam -out ./dh 2048
data/certs/key.pem # These are the radius ca/cert
data/certs/cert.pem
data/certs/ca.pem
It's up to you to get a key/cert/ca for this purpose. The example config.ini looks like this:
[kanidm_client]
url =
strict = false
ca = /data/ca.crt
user =
secret =
; default vlans for groups that don't specify one.
[DEFAULT]
vlan = 1
; [group.test]
; vlan =
[radiusd]
ca =
key =
cert =
dh =
required_group =
; [client.localhost]
; ipaddr =
; secret =
A fully configured example is:
[kanidm_client]
; be sure to check the listening port is correct, it's the docker internal port
; not the external one!
url = https://<kanidmd container name or ip>:8443
strict = true # adjust this if you have ca validation issues
ca = /data/ca.crt
user = radius_service_account
secret = # The generated password from above
; default vlans for groups that don't specify one.
[DEFAULT]
vlan = 1
; [group.test]
; vlan =
[radiusd]
ca = /data/certs/ca.pem
key = /data/certs/key.pem
cert = /data/certs/cert.pem
dh = /data/certs/dh
required_group = radius_access_allowed
[client.localhost]
ipaddr = 127.0.0.1
secret = testing123
[client.docker]
ipaddr = 172.17.0.0/16
secret = testing123
Now we can launch the radius instance:
docker run --name radiusd -i -t -v ...:/data firstyear/kanidm_radius:latest
...
Listening on auth address 127.0.0.1 port 18120 bound to server inner-tunnel
Listening on auth address * port 1812 bound to server default
Listening on acct address * port 1813 bound to server default
Listening on auth address :: port 1812 bound to server default
Listening on acct address :: port 1813 bound to server default
Listening on proxy address * port 53978
Listening on proxy address :: port 60435
Ready to process requests
You can now test an authentication with:
docker exec -i -t radiusd radtest demo_user badpassword 127.0.0.1 10 testing123
docker exec -i -t radiusd radtest demo_user <radius show_secret value here> 127.0.0.1 10 testing123
You should see Access-Accept or Access-Reject based on your calls.
Finally, to expose this to a wifi infrastructure, add your NAS in config.ini:
[client.access_point]
ipaddr = <some ip address>
secret = <random value>
And re-create/run your docker instance with `-p 1812:1812 -p 1812:1812/udp` ...
If you have any issues, check the logs from the radius output; they tend to indicate the cause
of the problem.
Note the radius container *is* configured to provide Tunnel-Private-Group-ID, so if you wish to use
wifi-assigned vlans on your infrastructure, you can assign these by groups in the config.ini.
# Backup and Restore
With any idm software, it's important you have the capability to restore in case of a disaster - be
that physical damage or mistake. Kanidm supports backup and restore of the database with two methods.
## Method 1
Method 1 involves taking a backup of the database entry content, which is then re-indexed on restore.
This is the preferred method.
To take the backup (assuming our docker environment) you first need to stop the instance:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd backup \
/backup/kanidm.backup.json -D /data/kanidm.db
docker start <container name>
You can then restart your instance. It's advised you DO NOT modify the backup.json as it may introduce
data errors into your instance.
To restore from the backup:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd restore \
/backup/kanidm.backup.json -D /data/kanidm.db
docker start <container name>
That's it!
## Method 2
This is a simple backup of the data volume.
docker stop <container name>
# Backup your docker's volume folder
docker start <container name>
# Reindexing after schema extension
In some (rare) cases you may need to reindex.
Please note the server will sometimes reindex on startup as a result of the project
changing its internal schema definitions. This is normal and expected - you may never need
to start a reindex yourself as a result!
You'll likely notice a need to reindex if you add indexes to schema and you see a message in your logs such as:
Index EQUALITY name not found
Index {type} {attribute} not found
This indicates that an index of type equality has been added for name, but the indexing process
has not been run. The server will continue to operate, and the query execution code will correctly
process the query; however, it will not be the optimal method of delivering the results, as we need to
disregard this part of the query and act as though it's un-indexed.
Reindexing will resolve this by forcing all indexes to be recreated based on their schema
definitions (this works even though the schema is in the same database!)
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd reindex \
-D /data/kanidm.db
docker start <container name>
Generally reindexing is a rare action and should not normally be required.
# Verification
The server ships with a number of verification utilities to ensure that data is consistent such
as referential integrity or memberof.
Note that verification really is a last resort - the server does *a lot* to prevent and self-heal
from errors at run time, so you should rarely if ever require this utility. This utility was
developed to guarantee consistency during development!
You can run a verification with:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd verify \
-D /data/kanidm.db
docker start <container name>
If you have errors, please contact the project to help support you to resolve these.
# Raw actions
The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers
of entries at once. Some examples are below, but generally we advise you to use the APIs as listed
above.
# Create from json (group or account)
cargo run -- raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
cargo run -- raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json
# Apply a json stateful modification to all entries matching a filter
cargo run -- raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"Or": [ {"Eq": ["name", "idm_person_account_create_priv"]}, {"Eq": ["name", "idm_service_account_create_priv"]}, {"Eq": ["name", "idm_account_write_priv"]}, {"Eq": ["name", "idm_group_write_priv"]}, {"Eq": ["name", "idm_people_write_priv"]}, {"Eq": ["name", "idm_group_create_priv"]} ]}' example.modify.idm_admin.json
cargo run -- raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"Eq": ["name", "idm_admins"]}' example.modify.idm_admin.json
# Search and show the database representations
cargo run -- raw search -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"Eq": ["name", "idm_admin"]}'
> Entry { attrs: {"class": ["account", "memberof", "object"], "displayname": ["IDM Admin"], "memberof": ["idm_people_read_priv", "idm_people_write_priv", "idm_group_write_priv", "idm_account_read_priv", "idm_account_write_priv", "idm_service_account_create_priv", "idm_person_account_create_priv", "idm_high_privilege"], "name": ["idm_admin"], "uuid": ["bb852c38-8920-4932-a551-678253cae6ff"]} }
# Delete all entries matching a filter
cargo run -- raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"Eq": ["name", "test_account_delete_me"]}'

104
README.md

@ -35,67 +35,25 @@ See our documentation on [rights and ethics]
* Human error occurs - the system should be designed to minimise human mistakes and empower people.
* The system should be easy to understand and reason about for users and admins.
## Quick start
## Documentation
Today the server is still in a state of heavy development, and hasn't been packaged or setup for
production usage.
For more see the [kanidm book]
However, we are able to run test or demo servers that are suitable for previews and testing.
After getting the code, you will need a rust environment. Please investigate rustup for your platform
to establish this.
Once you have the source code, you need certificates to use with the server. I recommend using
let's encrypt, but if this is not possible, please use our insecure cert tool:
mkdir insecure
cd insecure
../insecure_generate_tls.sh
You can now build and run the server with:
cd kanidmd
cargo run -- recover_account -D /tmp/kanidm.db -n admin
cargo run -- server -D /tmp/kanidm.db -C ../insecure/ca.pem -c ../insecure/cert.pem -k ../insecure/key.pem --bindaddr 127.0.0.1:8080
In a new terminal, you can now build and run the client tools with:
cd kanidm_tools
cargo run -- --help
cargo run -- self whoami -H https://localhost:8080 -D anonymous -C ../insecure/ca.pem
cargo run -- self whoami -H https://localhost:8080 -D admin -C ../insecure/ca.pem
For more see [getting started]
[getting started]: https://github.com/Firstyear/kanidm/blob/master/GETTING_STARTED.md
## Development and Testing
There are tests throughout the various components of the project. When developing,
it's best if you test in the component you are working on, followed by the full server tests.
There are *no* prerequisites or special configurations needed to run these tests - cargo test should
just work!
### Using curl with anonymous:
curl -b /tmp/cookie.jar -c /tmp/cookie.jar --cacert ../insecure/ca.pem -X POST -d "{\"step\":{\"Init\":[\"anonymous\",null]}}" https://localhost:8080/v1/auth
curl -b /tmp/cookie.jar -c /tmp/cookie.jar --cacert ../insecure/ca.pem -X POST -d "{\"step\":{\"Creds\":[\"Anonymous\"]}}" https://localhost:8080/v1/auth
[kanidm book]: https://github.com/Firstyear/kanidm/blob/master/kanidm_book/src/SUMMARY.md
## Implemented/Planned features
* RBAC design
* SSH key distribution for servers
* SSH key distribution for servers (done)
* Pam/nsswitch clients (with limited offline auth)
* Sudo rule distribution via nsswitch
* CLI and WebUI for administration
* OIDC/Oauth
* Claims (limited by time and credential scope)
* RBAC/Claims (limited by time and credential scope)
* MFA (Webauthn, TOTP)
* Highly concurrent design (MVCC, COW)
* Highly concurrent design (MVCC, COW) (done)
* Replication (async multiple active write servers, read only servers)
* Account impersonation
* RADIUS integration
* RADIUS integration (done)
* Self service UI with wifi enrollment, claim management and more.
* Synchronisation to other IDM services
@ -106,13 +64,15 @@ just work!
* Generic database: We don't want to be another NoSQL database, we want to be an IDM solution.
* Being LDAP/GSSAPI/Kerberos: These are all legacy protocols that are hard to use and confine our thinking - we should avoid "being like them".
## Designs
## Development and Testing
### Designs
See the [designs] folder
[designs]: https://github.com/Firstyear/kanidm/tree/master/designs
## Get involved
### Get involved
To get started, you'll need to fork or branch, and we'll merge based on PRs.
@ -136,11 +96,13 @@ start working:
```
git branch <feature-branch-name>
git checkout <feature-branch-name>
cargo test
```
When you are ready for review (even if the feature isn't complete and you just want some advice)
```
cargo test
git commit -m 'Commit message' change_file.rs ...
git push <myfork/origin> <feature-branch-name>
```
@ -172,13 +134,51 @@ always stop and reset with:
git rebase --abort
```
### Development Server Quickstart for Interactive Testing
Today the server is still in a state of heavy development, and hasn't been packaged or setup for
production usage.
However, we are able to run test or demo servers that are suitable for previews and testing.
After getting the code, you will need a rust environment. Please investigate rustup for your platform
to establish this.
Once you have the source code, you need certificates to use with the server. I recommend using
let's encrypt, but if this is not possible, please use our insecure cert tool:
mkdir insecure
cd insecure
../insecure_generate_tls.sh
You can now build and run the server with:
cd kanidmd
cargo run -- recover_account -D /tmp/kanidm.db -n admin
cargo run -- server -D /tmp/kanidm.db -C ../insecure/ca.pem -c ../insecure/cert.pem -k ../insecure/key.pem --bindaddr 127.0.0.1:8080
In a new terminal, you can now build and run the client tools with:
cd kanidm_tools
cargo run -- --help
cargo run -- self whoami -H https://localhost:8080 -D anonymous -C ../insecure/ca.pem
cargo run -- self whoami -H https://localhost:8080 -D admin -C ../insecure/ca.pem
### Using curl with anonymous:
Sometimes you may want to check the json of an endpoint. Before you can do this, you need
a valid session and cookie jar established. To do this with curl and anonymous:
curl -b /tmp/cookie.jar -c /tmp/cookie.jar --cacert ../insecure/ca.pem -X POST -d "{\"step\":{\"Init\":[\"anonymous\",null]}}" https://localhost:8080/v1/auth
curl -b /tmp/cookie.jar -c /tmp/cookie.jar --cacert ../insecure/ca.pem -X POST -d "{\"step\":{\"Creds\":[\"Anonymous\"]}}" https://localhost:8080/v1/auth
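If the second step succeeds, the session in the cookie jar can be reused for further requests. As an
illustrative follow-up (the exact endpoint path here is an assumption - check the server's route
definitions), a whoami-style read could look like:
curl -b /tmp/cookie.jar -c /tmp/cookie.jar --cacert ../insecure/ca.pem https://localhost:8080/v1/self  # endpoint path assumed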
## Why do I see rsidm references?
The original project name was rsidm while it was a thought experiment. Now that it's growing
and developing, we gave it a better project name. Kani is Japanese for "crab". Rust's mascot is a crab.
Idm is the common industry term for identity management services.
It all works out in the end.

1
kanidm_book/.gitignore vendored Normal file

@ -0,0 +1 @@
book

6
kanidm_book/book.toml Normal file

@ -0,0 +1,6 @@
[book]
authors = ["William Brown"]
language = "en"
multilingual = false
src = "src"
title = "Kanidm Administration"

kanidm_book/src/SUMMARY.md

@ -0,0 +1,12 @@
# Summary
[Kanidm Administration](./intro.md)
- [Installing the Server](./installing_the_server.md)
- [Administrative Tasks](./administrivia.md)
- [Interacting with the Server](./client_tools.md)
- [Accounts and Groups](./accounts_and_groups.md)
- [SSH Key Distribution](./ssh_key_dist.md)
- [RADIUS](./radius.md)
-----------
[Why TLS?](./why_tls.md)

kanidm_book/src/accounts_and_groups.md

@ -0,0 +1,120 @@
# Accounts and groups
Accounts and Groups are the primary reason for Kanidm to exist. Kanidm is optimised as a repository
for these data. As a result, they have many concepts and important details to understand.
## Default Accounts and Groups
Kanidm ships with a number of default accounts and groups. This is to give you the best out of
box experience possible, as well as supplying best practice examples related to modern IDM
systems.
The system admin account (the account you recovered in the setup) has limited privileges - only to
manage high-privilege accounts and services. This is to help separate system administration
from identity administration actions. An idm_admin account is also provided that is only for management
of accounts and groups.
Both admin and idm_admin should *NOT* be used for daily activities - they exist for initial
system configuration, and for disaster recovery scenarios. You should delegate permissions
as required to named user accounts instead.
The majority of the provided content consists of privilege groups that provide rights over Kanidm
administrative actions. These include groups for account management, person management (personal
and sensitive data), group management, and more.
## Recovering the Initial idm_admin Account
By default the idm_admin account has no password, and can not be accessed. You should recover it with the
admin (system admin) account. We recommend the use of "generate_password" as it provides a
high-strength, random, machine-only password.
kanidm account credential generate_password --name admin idm_admin
Generated password for idm_admin: tqoReZfz....
We can now use the idm_admin to create initial groups and accounts.
kanidm group create demo_group --name idm_admin
kanidm account create demo_user "Demonstration User" --name idm_admin
kanidm group add_members demo_group demo_user --name idm_admin
kanidm group list_members demo_group --name idm_admin
kanidm account get demo_user --name idm_admin
You can also use anonymous to view users and groups - note that you won't see as many fields due
to the different anonymous access profile limits!
kanidm account get demo_user --name anonymous
## Viewing Default Groups
You should take some time to inspect the default groups which are related to
default permissions. These can be viewed with:
kanidm group list
kanidm group get <name>
## Resetting Account Credentials
Members of the `idm_account_manage_priv` group have the rights to manage the security and
login aspects of other users' accounts. This includes resetting account credentials.
We can perform a password reset on the demo_user, for example, as idm_admin, who is
a default member of this group.
kanidm account credential set_password demo_user --name idm_admin
kanidm self whoami --name demo_user
## Nested Groups
Kanidm supports groups being members of groups, allowing nested groups. These nesting relationships
are shown through the "memberof" attribute on groups and accounts.
Kanidm makes all group-membership determinations by inspecting an entry's "memberof" attribute.
An example can be easily shown with:
kanidm group create group_1 --name idm_admin
kanidm group create group_2 --name idm_admin
kanidm account create nest_example "Nesting Account Example" --name idm_admin
kanidm group add_members group_1 group_2 --name idm_admin
kanidm group add_members group_2 nest_example --name idm_admin
kanidm account get nest_example --name anonymous
## Why Can't I Change admin With idm_admin?
As a security mechanism there is a distinction between "accounts" and "high permission
accounts". This is to help prevent elevation attacks, where, say, a member of a
service desk could attempt to reset the password of idm_admin or admin, or a member of
HR or system admin teams could attempt to move laterally.
Generally, membership of a "privilege" group that ships with kanidm, such as:
* idm_account_manage_priv
* idm_people_read_priv
* idm_schema_manage_priv
* many more ...
indirectly grants you membership of "idm_high_privilege". If you are a member of
this group, the standard "account" and "people" rights groups are NOT able to
alter, read or manage these accounts. To manage these accounts, higher rights
are required, such as those held by the admin account.
Further, groups that are considered "idm_high_privilege" can NOT be managed
by the standard "idm_group_manage_priv" group.
Management of high privilege accounts and groups is granted through the
"hp" variants of all privileges. For example:
* idm_hp_account_read_priv
* idm_hp_account_manage_priv
* idm_hp_account_write_priv
* idm_hp_group_manage_priv
* idm_hp_group_write_priv
Membership of any of these groups should be considered to be equivalent to
system administration rights in the directory, and by extension, over all network
resources that trust Kanidm.
All groups that are flagged as "idm_high_privilege" should be audited and
monitored to ensure that they are not altered.
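As a sketch of what this means in practice, credential resets or membership changes for high
privilege entities must be performed with the admin account - the same commands run with
--name idm_admin should be rejected by the access controls (demo_user here is a placeholder):
kanidm account credential set_password idm_admin --name admin
kanidm group add_members idm_admins demo_user --name admin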

kanidm_book/src/administrivia.md

@ -0,0 +1,136 @@
# Administration Tasks
There are a number of tasks that you may wish to perform as an administrator of a service like kanidm.
# Backup and Restore
With any idm software, it's important you have the capability to restore in case of a disaster - be
that physical damage or mistake. Kanidm supports backup and restore of the database with two methods.
## Method 1
Method 1 involves taking a backup of the database entry content, which is then re-indexed on restore.
This is the "prefered" method.
To take the backup (assuming our docker environment) you first need to stop the instance:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
firstyear/kanidmd:latest /sbin/kanidmd backup \
/backup/kanidm.backup.json -D /data/kanidm.db
docker start <container name>
You can then restart your instance. It's advised you DO NOT modify the backup.json as it may introduce
data errors into your instance.
To restore from the backup:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
firstyear/kanidmd:latest /sbin/kanidmd restore \
/backup/kanidm.backup.json -D /data/kanidm.db
docker start <container name>
That's it!
## Method 2
This is a simple backup of the data volume.
docker stop <container name>
# Backup your docker's volume folder
docker start <container name>
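For example, one way to capture the volume contents while the instance is stopped is a throwaway
shell container and tar (the host path here is an example):
docker run --rm -i -t -v kanidmd:/data -v /my/host/path/backups:/backup opensuse/leap:latest tar -czf /backup/kanidmd.tar.gz -C /data .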
# Rename the domain
There are some cases where you may need to rename the domain. You should have configured
this initially during setup; however, you may have a situation where a business is changing
name, merging, or has other needs which prompt the domain to be changed.
WARNING: This WILL break ALL u2f/webauthn tokens that have been enrolled, which MAY cause
accounts to be locked out and unrecoverable until further action is taken. DO NOT CHANGE
the domain_name unless REQUIRED, and have a plan for how to manage these issues.
WARNING: This operation can take an extensive amount of time as ALL accounts and groups
in the domain MUST have their SPNs regenerated. This will also cause a large delay in
replication once the system is restarted.
You should take a backup before proceeding with this operation.
When you have created a migration plan and a strategy for handling the invalidation of webauthn,
you can then rename the domain with the following commands:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
firstyear/kanidmd:latest /sbin/kanidmd domain_name_change \
-D /data/kanidm.db -n idm.new.domain.name
docker start <container name>
# Reindexing after schema extension
In some (rare) cases you may need to reindex.
Please note the server will sometimes reindex on startup as a result of the project
changing its internal schema definitions. This is normal and expected - you may never need
to start a reindex yourself as a result!
You'll likely notice a need to reindex if you add indexes to schema and you see a message in your logs such as:
Index EQUALITY name not found
Index {type} {attribute} not found
This indicates that an index of type equality has been added for name, but the indexing process
has not been run. The server will continue to operate, and the query execution code will correctly
process the query; however, it will not be the optimal method of delivering the results, as we need to
disregard this part of the query and act as though it's un-indexed.
Reindexing will resolve this by forcing all indexes to be recreated based on their schema
definitions (this works even though the schema is in the same database!)
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
firstyear/kanidmd:latest /sbin/kanidmd reindex \
-D /data/kanidm.db
docker start <container name>
Generally reindexing is a rare action and should not normally be required.
# Verification
The server ships with a number of verification utilities to ensure that data is consistent such
as referential integrity or memberof.
Note that verification really is a last resort - the server does *a lot* to prevent and self-heal
from errors at run time, so you should rarely if ever require this utility. This utility was
developed to guarantee consistency during development!
You can run a verification with:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
firstyear/kanidmd:latest /sbin/kanidmd verify \
-D /data/kanidm.db
docker start <container name>
If you have errors, please contact the project to help support you to resolve these.
# Raw actions
The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers
of entries at once. Some examples are below, but generally we advise you to use the APIs as listed
above.
# Create from json (group or account)
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json
# Apply a json stateful modification to all entries matching a filter
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"Or": [ {"Eq": ["name", "idm_person_account_create_priv"]}, {"Eq": ["name", "idm_service_account_create_priv"]}, {"Eq": ["name", "idm_account_write_priv"]}, {"Eq": ["name", "idm_group_write_priv"]}, {"Eq": ["name", "idm_people_write_priv"]}, {"Eq": ["name", "idm_group_create_priv"]} ]}' example.modify.idm_admin.json
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"Eq": ["name", "idm_admins"]}' example.modify.idm_admin.json
# Search and show the database representations
kanidm raw search -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"Eq": ["name", "idm_admin"]}'
> Entry { attrs: {"class": ["account", "memberof", "object"], "displayname": ["IDM Admin"], "memberof": ["idm_people_read_priv", "idm_people_write_priv", "idm_group_write_priv", "idm_account_read_priv", "idm_account_write_priv", "idm_service_account_create_priv", "idm_person_account_create_priv", "idm_high_privilege"], "name": ["idm_admin"], "uuid": ["bb852c38-8920-4932-a551-678253cae6ff"]} }
# Delete all entries matching a filter
kanidm raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"Eq": ["name", "test_account_delete_me"]}'

kanidm_book/src/client_tools.md

@ -0,0 +1,64 @@
# Interacting with the Server
To interact with Kanidm as an administrator, you'll need to use our command line tools.
## From (experimental) packages
Today we support Fedora 30/31 and openSUSE Leap 15.1 and Tumbleweed.
### SUSE
Using zypper you can add the repository with:
zypper ar obs://home:firstyear:kanidm home_firstyear_kanidm
Then you need to refresh your metadata and install the clients.
zypper ref
zypper in kanidm-clients
### Fedora
On Fedora you need to add the repos into the correct directory:
cd /etc/yum.repos.d
30:
wget https://download.opensuse.org/repositories/home:/firstyear:/kanidm/Fedora_30/home:firstyear:kanidm.repo
31:
wget https://download.opensuse.org/repositories/home:/firstyear:/kanidm/Fedora_31/home:firstyear:kanidm.repo
Now you can add the packages:
dnf install kanidm-clients
## From source
After you check out the source (see github), navigate to:
cd kanidm_tools
cargo build
cargo install --path ./
## Check the tools work.
Now you can check your instance is working. You may need to provide a CA certificate for verification
with the -C parameter:
kanidm self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous
kanidm self whoami -H https://localhost:8443 --name anonymous
Now you can take some time to look at what commands are available - things may still be rough so
please ask for help at any time.
## Kanidm configuration
You can configure kanidm to help make commands simpler by modifying ~/.config/kanidm OR /etc/kanidm/config
uri = "https://idm.example.com"
verify_ca = true|false
verify_hostnames = true|false
ca_path = "/path/to/ca.pem"
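A filled-in example (the values here are illustrative):
uri = "https://idm.example.com"
verify_ca = true
verify_hostnames = true
ca_path = "/etc/kanidm/ca.pem"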
Once configured, you can test this with:
kanidm self whoami --name anonymous

kanidm_book/src/installing_the_server.md

@ -0,0 +1,31 @@
# Installing the Server
Currently we have pre-release docker images based on git master. They can be found at:
https://hub.docker.com/r/firstyear/kanidmd
https://hub.docker.com/r/firstyear/kanidm_radius
You'll need a volume where you can put certificates and the database:
docker volume create kanidmd
You should have a ca.pem, cert.pem and key.pem in your kanidmd volume. The reason for requiring
TLS is explained in [why tls](./why_tls.md). To put the certificates in place you can use a shell container
that mounts the volume such as:
docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest cp /work/* /data/
OR for a shell into the volume:
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
Then you can set up the initial admin account and initialise the database into your volume.
docker run --rm -i -t -v kanidmd:/data firstyear/kanidmd:latest /sbin/kanidmd recover_account -D /data/kanidm.db -n admin
You then want to set your domain name so that SPNs are generated correctly.
docker run --rm -i -t -v kanidmd:/data firstyear/kanidmd:latest /sbin/kanidmd domain_name_change -D /data/kanidm.db -n idm.example.com
Now we can run the server so that it can accept connections.
docker run -p 8443:8443 -v kanidmd:/data firstyear/kanidmd:latest
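To confirm the container started and the server is listening, a quick check of the container
state and logs (assuming default docker naming) is:
docker ps
docker logs <container name>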

18
kanidm_book/src/intro.md Normal file

@ -0,0 +1,18 @@
# Kanidm Administration
Kanidm is an identity management server, acting as an authority on accounts and authorisation
within a technical environment.
WARNING: This project is still under heavy development, and has not had a production ready
release yet. It may lose your data, be offline for some periods of time, or otherwise cause
disruptions if you aren't ready.
The intent of the Kanidm project is:
* To provide a single truth source for accounts, groups and privileges.
* To enable integrations to systems and services so they can authenticate accounts.
* To make system, network, application and web authentication easy and accessible.

180
kanidm_book/src/radius.md Normal file

@ -0,0 +1,180 @@
# RADIUS
RADIUS is a network protocol that is commonly used to allow wifi devices or
VPNs to authenticate users at a network boundary. While it should not be the
sole point of trust/authentication for an identity, it's still an important
control for improving the barriers to attackers accessing network resources.
Kanidm has a philosophy that each account can have multiple credentials which
are related to their devices and limited to specific resources. RADIUS is
no exception, and has a separate credential for each account to use for
RADIUS access.
## Disclaimer
It's worth noting some disclaimers about Kanidm's RADIUS integration here
### One Credential - One Account
Kanidm normally attempts to have credentials for each *device* and *application*
rather than the legacy model of one credential per account.
RADIUS as a protocol is only able to attest a *single* credential in an authentication
attempt, which limits us to storing a single RADIUS credential per account. However
despite this limitation, it still greatly improves the situation by isolating the
RADIUS credential from the primary or application credentials of the account. This
solves many common security concerns around credential loss or disclosure
and prevents rogue devices from locking out accounts as they attempt to
authenticate to wifi with expired credentials.
### Cleartext Credential Storage
RADIUS offers many different types of tunnels and authentication mechanisms.
However, most client devices "out of the box", when you select a WPA2-Enterprise
network, only attempt a single type: MSCHAPv2 with PEAP. This is a challenge-response
protocol which requires cleartext or NTLM credentials.
As MSCHAPv2 with PEAP is the only practical, universal RADIUS type supported
on all devices with "minimal" configuration, we consider it imperative
that it MUST be supported as the default. Esoteric RADIUS types can be used
as well, but this is up to administrators to test and configure.
Due to this requirement, we must store the RADIUS material as cleartext or
NTLM hashes. It would be silly to think that NTLM is "secure", as it's MD4,
which offers only an illusion of security.
This means Kanidm stores RADIUS credentials in the database in cleartext.
We believe this is a reasonable decision and is a low risk to security as:
* The access controls around the radius secret are "strong" by default, limited to only self-account read and radius-server read.
* As RADIUS credentials are separate from the primary account credentials, and have no other rights, their disclosure is not going to lead to a fully compromised account.
* Having the credentials in cleartext allows a better user experience, as clients can view the credentials at any time to enroll further devices.
## Account Credential Configuration
For an account to use RADIUS they must first generate a RADIUS secret unique to
that account. By default all accounts can self-create this secret.
kanidm account radius generate_secret --name william william
kanidm account radius show_secret --name william william
## Account group configuration
Kanidm enforces that accounts which can authenticate to RADIUS must be a member
of an allowed group. This allows you to define which users or groups may use
wifi or VPN infrastructure, and gives a path for "revoking" access to the resources
through group management. The key point of this is that service accounts should
not be part of this group.
kanidm group create --name idm_admin radius_access_allowed
kanidm group add_members --name idm_admin radius_access_allowed william
## RADIUS Server Service Account
To read these secrets, the radius server requires an account with the
correct privileges. This can be created and assigned through the group
"idm_radius_servers" which is provided by default.
kanidm account create --name admin radius_service_account "Radius Service Account"
kanidm group add_members --name admin idm_radius_servers radius_service_account
kanidm account credential generate_password --name admin radius_service_account
## Deploying a RADIUS Container
We provide a RADIUS container that has all the needed integrations. This container
requires some cryptographic material, laid out in a volume like so:
data
data/ca.pem # This is the kanidm ca.pem
data/config.ini # This is the kanidm-radius configuration.
data/certs
data/certs/dh # openssl dhparam -out ./dh 2048
data/certs/key.pem # These are the radius ca/cert/key
data/certs/cert.pem
data/certs/ca.pem
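It's up to you to source the radius key/cert/ca from your certificate infrastructure. For a
throwaway test environment, one possible way to generate self-signed material with openssl is
(the subject name is a placeholder):
openssl dhparam -out ./dh 2048
openssl req -x509 -newkey rsa:4096 -nodes -days 365 -keyout ./key.pem -out ./cert.pem -subj '/CN=radius.example.com'
cp ./cert.pem ./ca.pem  # self-signed, so the cert doubles as the ca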
The config.ini has the following template:
[kanidm_client]
url = # URL to the kanidm server
strict = false # Strict CA verification
ca = /data/ca.pem # Path to the kanidm ca
user = # Username of the RADIUS service account
secret = # Generated secret for the service account
; default vlans for groups that don't specify one.
[DEFAULT]
vlan = 1
; [group.test] # group.<name> will have these options applied
; vlan =
[radiusd]
ca = # Path to the radius server's CA
key = # Path to the radius servers key
cert = # Path to the radius servers cert
dh = # Path to the radius servers dh params
required_group = # name of a kanidm group which you must be a member of to
# use radius.
; [client.localhost] # client.<nas name> configures wifi/vpn consumers
; ipaddr = # ipv4 or ipv6 address of the NAS
; secret = # shared secret
A fully configured example is:
[kanidm_client]
; be sure to check the listening port is correct, it's the docker internal port
; not the external one if these containers are on the same host.
url = https://<kanidmd container name or ip>:8443
strict = true # adjust this if you have ca validation issues
ca = /data/ca.pem
user = radius_service_account
secret = # The generated password from above
; default vlans for groups that don't specify one.
[DEFAULT]
vlan = 1
[group.network_admins]
vlan = 10
[radiusd]
ca = /data/certs/ca.pem
key = /data/certs/key.pem
cert = /data/certs/cert.pem
dh = /data/certs/dh
required_group = radius_access_allowed
[client.localhost]
ipaddr = 127.0.0.1
secret = testing123
[client.docker]
ipaddr = 172.17.0.0/16
secret = testing123
You can then run the container with:
docker run --name radiusd -i -t -v ...:/data firstyear/kanidm_radius:latest
Authentication can be tested through the client.localhost NAS configuration with:
docker exec -i -t radiusd radtest <username> badpassword 127.0.0.1 10 testing123
docker exec -i -t radiusd radtest <username> <radius show_secret value here> 127.0.0.1 10 testing123
Finally, to expose this to a wifi infrastructure, add your NAS in config.ini:
[client.access_point]
ipaddr = <some ip address>
secret = <random value>
And re-create/run your docker instance with `-p 1812:1812 -p 1812:1812/udp` ...
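Putting that together, a re-run of the container with the RADIUS ports published might look like
(the volume path is still yours to fill in):
docker run --name radiusd -p 1812:1812 -p 1812:1812/udp -i -t -v ...:/data firstyear/kanidm_radius:latest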
If you have any issues, check the logs from the radius output; they tend to indicate the cause
of the problem.
Note the radius container *is* configured to provide Tunnel-Private-Group-ID, so if you wish to use
wifi-assigned vlans on your infrastructure, you can assign these by groups in the config.ini as
shown in the above examples.

kanidm_book/src/ssh_key_dist.md

@ -0,0 +1,60 @@
# SSH Key Distribution
To support secure SSH authentication to a large set of hosts, we support distribution
of SSH public keys via the kanidm server.
## pre-release warning
Currently the tools involved on the client machines do *not* cache the SSH public keys. This means
that if your primary kanidm server is offline you will *not* be able to SSH to these machines. You
should adapt and maintain a disaster recovery plan that allows you to access machines if or when
this situation occurs.
## Configuring accounts
To view the current ssh public keys on accounts, you can use:
kanidm account ssh list_publickeys --name <login user> <account to view>
kanidm account ssh list_publickeys --name idm_admin william
All users by default can self-manage their ssh public keys. To upload a key, use a command like
this:
kanidm account ssh add_publickey --name william william 'test-key' "`cat ~/.ssh/id_rsa.pub`"
To remove (revoke) an ssh publickey, you delete them by the tag name:
kanidm account ssh delete_publickey --name william william 'test-key'
## Security notes
As a security feature, kanidm validates *all* public keys to ensure they are valid ssh public keys.
Uploading a private key or other data will be rejected. For example:
kanidm account ssh add_publickey --name william william 'test-key' "invalid"
Enter password:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Http(400, Some(SchemaViolation(InvalidAttributeSyntax)))', src/libcore/result.rs:1084:5
## Server Configuration
The kanidm_ssh_authorizedkeys command is part of the kanidm-clients package, so should be installed
on the servers.
To configure the tool, you should edit /etc/kanidm/config, as documented in [clients](./client_tools.md)
You can test this is configured correctly by running:
kanidm_ssh_authorizedkeys -D anonymous <account name>
If the account has ssh public keys you should see them listed, one per line.
To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to
contain the lines:
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys -D anonymous %u
AuthorizedKeysCommandUser nobody
Restart sshd, and then attempt to authenticate with the keys.
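A useful sanity check is to run the command exactly as sshd will, as the nobody user (william here
is a placeholder account name):
sudo -u nobody /usr/bin/kanidm_ssh_authorizedkeys -D anonymous william
If the account's keys are printed, sshd should be able to retrieve them the same way.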
It's highly recommended you keep your client configuration and sshd_config in a configuration
management tool such as salt or ansible.


@ -1,13 +1,12 @@
Why TLS is required?
--------------------
# Why TLS?
In the getting started you may notice that we require TLS to be configured in
your container - or that you provide something *with* TLS in front, like haproxy.
This is due to a single setting on the server - secure_cookies.
What are secure cookies?
------------------------
## What are secure cookies?
Secure Cookies is a flag set in cookies that "asks" a client only to transmit them
back to the origin site if and only if https is present in the URL.
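As a rough illustration (the cookie name here is made up), a secure cookie arrives from the
server in a header like:
Set-Cookie: kanidm-session=<token>; Secure; HttpOnly
and the client will only send it back on requests to https:// URLs.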
@ -16,8 +15,7 @@ CA verification is *not* checked - you can use invalid, out of date certificates
or even certificates where the subjectAltName does not match. But the client
must see https:// as the destination else it *will not* send the cookies.
How does that affect kanidm?
----------------------------
## How does that affect kanidm?
Kanidm's authentication system is a stepped challenge-response design, where you
initially request an "intent" to authenticate. Once you establish this intent


@ -1,12 +1,10 @@
FROM opensuse/tumbleweed:latest AS builder
MAINTAINER william@blackhats.net.au
RUN zypper install -y timezone cargo rust gcc sqlite3-devel libopenssl-devel
COPY . /home/kanidm/
WORKDIR /home/kanidm/
RUN zypper install -y timezone cargo rust gcc sqlite3-devel libopenssl-devel && \
RUSTC_BOOTSTRAP=1 cargo build --release
RUN cargo build --release
FROM opensuse/tumbleweed:latest