Docs rework ()

* more markdowny linty things
* Fixes  by replacing mdbook-template with github-flavoured and more markdowny alerts
This commit is contained in:
James Hodgkinson 2024-07-22 19:21:56 -07:00 committed by GitHub
parent 9a4ca18913
commit e1a1bff94d
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
39 changed files with 305 additions and 419 deletions


@ -52,7 +52,7 @@ jobs:
- name: Build the docs
run: |
cargo install mdbook-template mdbook-mermaid
cargo install mdbook-alerts mdbook-mermaid
cargo doc --no-deps
mdbook build *book
rm -rf ./docs/


@ -134,9 +134,9 @@ finish our production components and the stability of the API's for longer term
- Improve account sync import, including mail attrs and better session handling
- Bug fix in unixd when certain operation orders could cause group cache to be ignored
- pre-compress all wasm to improve loading times
- Add preflight headers for SPA oauth2 clients
- Add preflight headers for SPA OAuth2 clients
- Persist nonce through refresh tokens to support public clients
- Allow public (pkce) oauth2 clients
- Allow public (PKCE) OAuth2 clients
- Add client UX for external credential portals for synchronised accounts
- Improve migration durability with a global transaction
- CLI now shows SPN instead of username to allow better multi-domain admin
@ -144,7 +144,7 @@ finish our production components and the stability of the API's for longer term
- Add tls certgen to main binary to improve developer and quickstart setup
- Unixd now blocks all local account names and IDs from resolving, to prevent priv-esc
- Fix bug with service-account session logout access
- Oauth2 app list shows when no applications are available
- OAuth2 app list shows when no applications are available
- Improve ip audit logging
- Improve cli with re-auth when session is expired
- Support legacy cron syntax in backup config
@ -171,7 +171,7 @@ production usage.
### Release Highlights
- Allow full server content replication in testing (yes we're finally working on replication!)
- Improve oauth2 to allow scoped members to see RS they can access for UI flows
- Improve OAuth2 to allow scoped members to see RS they can access for UI flows
- Performance improvement by reducing clones
- Track credential uuid used for session authentication in the session
- Remove the legacy webauthn types for newer attributes
@ -182,7 +182,7 @@ production usage.
- Improve exit codes of unixd tools
- Restrict valid chars in some string contexts in entries
- Allow configuration of ldap basedn
- Extend oauth2 session lifetimes, add refresh token support
- Extend OAuth2 session lifetimes, add refresh token support
- Improve user experience of credential updates via intent tokens
- Consolidate unix tools
- Add exclusive process lock to daemon
@ -205,13 +205,13 @@ There are still things we want to change there. Otherwise the server is stable a
- Support windows for server tests
- Add a kanidm tools container
- Initial support for live sync/import of users and groups from FreeIPA
- Oauth2 session logout and global logout support
- OAuth2 session logout and global logout support
- UI polish based on hint flags to dynamically enable/disable elements
- Oauth2 single sign on application portal
- OAuth2 single sign-on application portal
- Support dn=token for ldap client binds
- Trap more signals for daemons
- Mail read permission group
- Oauth2 add a groups claim
- OAuth2 add a groups claim
- LDAP support for mail primary and alternate address selectors in queries
- Fix handling of virtual attrs with '\*' searches in ldap
- Support multiple TOTP on accounts
@ -244,7 +244,7 @@ proxy. You should be ready for this change when you upgrade to the latest versio
- TLS enforced as a requirement for all servers
- Support API service account tokens
- Make name rules stricter due to issues found in production
- Improve Oauth2 PKCE testing
- Improve OAuth2 PKCE testing
- Add support for new password import hashes
- Allow configuration of trusting X-Forwarded-For headers
- Components for account permission elevation modes
@ -270,7 +270,7 @@ The project is shaping up very nicely, and a beta will be coming soon!
- Performance improvements in builds
- Windows development and service support
- WebUI polish and improvements
- Consent is remembered in oauth2 improving access flows
- Consent is remembered in OAuth2 improving access flows
- Replication changelog foundations
- Compression middleware for static assets to reduce load times
- User onboarding now possible with self-service credential reset
@ -308,14 +308,14 @@ better for a future supported release.
### Release Highlights
- Oauth2 scope to group mappings
- OAuth2 scope to group mappings
- Webauthn subdomain support
- Oauth2 rfc7662 token introspection
- OAuth2 RFC7662 token introspection
- Basic OpenID Connect support
- Improve performance of domain rename
- Refactor of entry value internals to improve performance
- Addition of email address attributes
- Web UI improvements for Oauth2
- Web UI improvements for OAuth2
## 2021-10-01 - Kanidm 1.1.0-alpha6
@ -333,7 +333,7 @@ bring the project this far! 🎉 🦀
- Dynamic menus on CLI for auth factors when choices exist
- Better handle missing resources for web ui elements at server startup
- Add WAL checkpointing to improve disk usage
- Oauth2 user interface flows for simple authorisation scenarios
- OAuth2 user interface flows for simple authorisation scenarios
- Improve entry memory usage based on valueset rewrite
- Allow online backups to be scheduled and taken
- Reliability improvements for unixd components with missing sockets
@ -360,7 +360,7 @@ for a future supported release.
- Password badlist caching
- Orca, a kanidm and ldap load testing system
- TOTP usability improvements
- Oauth2 foundations
- OAuth2 foundations
- CLI tool session management improvements
- Default shell falls back if the requested shell is not found
- Optional backup codes in case of lost MFA device


@ -16,7 +16,7 @@ git-repository-icon = "fa-github"
additional-css = ["theme.css"]
additional-js = ["mermaid.min.js", "mermaid-init.js"]
[preprocessor.template]
[preprocessor.alerts]
[preprocessor.mermaid]
command = "mdbook-mermaid"


@ -42,7 +42,7 @@
- [Troubleshooting](integrations/pam_and_nsswitch/troubleshooting.md)
- [SSSD](integrations/sssd.md)
- [SSH Key Distribution](integrations/ssh_key_distribution.md)
- [Oauth2](integrations/oauth2.md)
- [OAuth2](integrations/oauth2.md)
- [LDAP](integrations/ldap.md)
- [RADIUS](integrations/radius.md)
@ -79,7 +79,7 @@
- [Cryptography Key Domains (2024)](developers/designs/cryptography_key_domains.md)
- [Domain Join - Machine Accounts](developers/designs/domain_join_machine_accounts.md)
- [Elevated Priv Mode](developers/designs/elevated_priv_mode.md)
- [Oauth2 Refresh Tokens](developers/designs/oauth2_refresh_tokens.md)
- [OAuth2 Refresh Tokens](developers/designs/oauth2_refresh_tokens.md)
- [Replication Coordinator](developers/designs/replication_coordinator.md)
- [Replication Design and Notes](developers/designs/replication_design_and_notes.md)
- [REST Interface](developers/designs/rest_interface.md)


@ -65,7 +65,7 @@ groups.
| `idm_access_control_admins` | write access controls |
| `idm_account_policy_admins` | modify account policy requirements for user authentication |
| `idm_group_admins` | create and modify groups |
| `idm_oauth2_admins` | create and modify oauth2 integrations |
| `idm_oauth2_admins` | create and modify OAuth2 integrations |
| `idm_people_admins` | create and modify persons |
| `idm_people_on_boarding` | create (but not modify) persons. Intended for use with service accounts |
| `idm_people_pii_read` | allow read to personally identifying information |


@ -184,7 +184,7 @@ kanidm system denied-names show
To allow a name to be used again it can be removed from the list:
```
```shell
kanidm system denied-names remove <name> [<name> ...]
```


@ -14,15 +14,10 @@ Windows Hello, TPM's and more.
These devices are unphishable, self-contained multifactor authenticators and are considered the most
secure method of authentication in Kanidm.
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=images
title=Warning!
text=Kanidm's definition of Passkeys may differ from that of other systems. This is because we adopted the term very early, before it has changed and evolved.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Kanidm's definition of Passkeys may differ from that of other systems. This is because
> we adopted the term very early, before it has changed and evolved.
### Attested Passkeys
@ -104,15 +99,10 @@ You can perform a password reset on the `demo_user`, for example, as the `idm_ad
default member of this group. The lines below prefixed with `#` are the interactive credential
update interface. This allows the user to directly manage the credentials of another account.
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=images
title=Warning!
text=Don't use the direct credential reset to lock or invalidate an account. You should expire the account instead.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Don't use the direct credential reset to lock or invalidate an account. You should
> expire the account instead.
```bash
kanidm person credential update demo_user --name idm_admin


@ -39,7 +39,7 @@ kanidm person get nest_example --name anonymous
This should result in output similar to:
```
```text
memberof: idm_all_persons@localhost
memberof: idm_all_accounts@localhost
memberof: group_2@localhost


@ -35,18 +35,18 @@ kanidm login --name anonymous
kanidm person get demo_user --name anonymous
```
> NOTE: only members of `idm_people_pii_read` and `idm_people_admins` may read personal information
> by default.
> [!NOTE]
>
> Only members of `idm_people_pii_read` and `idm_people_admins` may read personal
> information by default.
<!-- deno-fmt-ignore-start -->
Also
{{#template ../templates/kani-warning.md
imagepath=images
title=Warning!
text=Persons may change their own displayname, name and legal name at any time. You MUST NOT use these values as primary keys in external systems. You MUST use the `uuid` attribute present on all entries as an external primary key.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Persons may change their own displayname, name and legal name at any time. You MUST NOT
> use these values as primary keys in external systems. You MUST use the `uuid` attribute present on
> all entries as an external primary key.
## Account Validity


@ -4,15 +4,11 @@ Through out this book, Kanidm will make reference to a "domain name". This is yo
name that you intend to use for Kanidm. Choosing this domain name however is not simple as there are
a number of considerations you need to be careful of.
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-warning.md
imagepath=images/
title=Take note!
text=Incorrect choice of the domain name may have security impacts on your Kanidm instance, not limited to credential phishing, theft, session leaks and more. It is critical you follow the advice in this chapter.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Incorrect choice of the domain name may have security impacts on your Kanidm instance, not limited
> to credential phishing, theft, session leaks and more. It is critical you follow the advice in
> this chapter.
## Considerations


@ -9,7 +9,7 @@ expected - you may never need to start a reindex yourself as a result!
You only need to reindex if you add custom schema elements and you see a message in your logs such
as:
```
```text
Index EQUALITY name not found
Index {type} {attribute} not found
```


@ -2,7 +2,7 @@
- Do we need some kind of permission atoms to allow certain tasks?
## Use Cases:
## Use Cases
- User sign-up portal (need service account that can create users and do cred reset)
- Role for service account generation.
@ -54,7 +54,7 @@ IdmAdmin("IDM Admin") --> RadiusAccountModify("Radius Account Modify")
```mermaid
graph LR
IntegrationAdmin("Integration Admin") --> Oauth2Admin("Oauth2 Admin")
IntegrationAdmin("Integration Admin") --> OAuth2Admin("OAuth2 Admin")
IntegrationAdmin("Integration Admin") --> PosixAccountConsumer("POSIX Account Consumer")
IntegrationAdmin("Integration Admin") --> RadiusServiceAdmin("Radius Service Admin")
```
@ -131,8 +131,8 @@ GroupAdmin -.-> |"Inherits"| HPGroupAdmin
```mermaid
graph LR
Oauth2Admin("Oauth2 Admin") --> |"Creates Modifies Delegates"| Oauth2RS("Oauth2 RS")
ScopedMember("Scoped Member") --> |"Reads"| Oauth2RS
OAuth2Admin("OAuth2 Admin") --> |"Creates Modifies Delegates"| OAuth2RS("OAuth2 RS")
ScopedMember("Scoped Member") --> |"Reads"| OAuth2RS
```
## POSIX-Specific


@ -42,7 +42,7 @@ description from the users returned (But that implies they DID match, which is a
More concrete:
```
```text
search {
action: allow
targetscope: Eq("class", "group")
@ -67,7 +67,7 @@ SearchRequest {
A potential defense is:
```
```text
acp class group: Pres(name) and Pres(desc) both in target attr, allow
acp class user: Pres(name) allow, Pres(desc) deny. Invert and Append
```
@ -113,7 +113,7 @@ search {
Now we have a single user where we can read `description`. So the compiled filter above as:
```
```text
And: {
AndNot: {
Eq("class", "user")


@ -36,7 +36,7 @@ receives the access it is granting.
Alternately an access profile could target "self" so that self-update rules can still be expressed.
An access profile could target an oauth2 definition for the purpose of allowing reads to members of
An access profile could target an OAuth2 definition for the purpose of allowing reads to members of
a set of scopes that can access the service.
The access profile receiver would be group based only. This allows specifying that "X group of
@ -103,7 +103,7 @@ The "admins" role is responsible to manage:
- The name of the domain
- Configuration of the servers and replication
- Management of external integrations (oauth2)
- Management of external integrations (OAuth2)
#### Service Account Admin
@ -151,12 +151,12 @@ Satisfied by:
- Access profiles target specifiers instead of filters
- Sudo Mode
### Oauth2 Service Read (Nice to Have)
### OAuth2 Service Read (Nice to Have)
For ux/ui integration, being able to list oauth2 applications that are accessible to the user would
be a good feature. To limit "who" can see the oauth2 applications that an account can access a way
to "allow read" but by proxy of the related users of the oauth2 service. This will require access
controls to be able to interpret the oauth2 config and provide rights based on that.
For UX/UI integration, being able to list OAuth2 applications that are accessible to the user would
be a good feature. To limit "who" can see the OAuth2 applications that an account can access a way
to "allow read" but by proxy of the related users of the OAuth2 service. This will require access
controls to be able to interpret the OAuth2 config and provide rights based on that.
Satisfied by:


@ -4,7 +4,7 @@ Kanidm exists to provide an authentication source for external applications. The
to have standardised ways to integrate with Kanidm to allow that application to interact and trust
Kanidm's authentication results.
For web based applications we offer Oauth2/OIDC. For Linux machines we offer a Kanidm specific HTTPS
For web based applications we offer OAuth2/OIDC. For Linux machines we offer a Kanidm-specific HTTPS
channel for identifying users (UNIX integration). Currently, for applications that don't support
other protocols we offer an LDAPS gateway that allows users to bind using their UNIX password.
@ -64,10 +64,10 @@ The application must bind with its api-token if it wishes to read extended user
this, only basic info limited to anonymous rights are granted.
(NOTE: We can't assume these DNs are private - I did consider making these
app=<secret key>,dc=example,dc=com, but client applications may disclose this basedn in ui
`app=<secret key>,dc=example,dc=com`, but client applications may disclose this basedn in UI
elements).
When a user authenticates the binddn of the account is set to spn=user,app=name,dc=example,dc=com.
When a user authenticates the binddn of the account is set to `spn=user,app=name,dc=example,dc=com`.
This difference in base DN triggers Kanidm to re-route the authentication to the application
specific password, rather than the UNIX one.
@ -102,7 +102,7 @@ their app password?)
The user may wish to have multiple passwords per application. Each password must have, at minimum, a
label to identify it. For example:
```
```text
MAIL
iphone: abcd...
laptop: bcde...
@ -114,7 +114,7 @@ Person accounts will need a new `Attribute::ApplicationPassword` that stores a
`ValueSetApplicationPassword`. Each value in the set is a new type to manage these secrets and their
labeling and the references to the applications.
```
```text
struct ApplicationPassword {
label: String,
password: Password,


@ -2,15 +2,15 @@
Within Kanidm we have to manage a number of private keys with various cryptographic purposes. In the
current design, we have evolved where for each purpose, keys are managed in unique ways. However we
need to improve this for a number reasons including shared keys for Oauth2 domains and a future
need to improve this for a number of reasons including shared keys for OAuth2 domains and a future
integration with PKCS11.
## Current System
In the current system we store keys in database records in a variety of bespoke ways for different
uses. Often this means that keys are associated to the object they are integrated with, such as the
oauth2 client configuration. However this limits us since we may have users that wish to combine
multiple oauth2 clients into a single security domain, where access tokens may be exchanged between
OAuth2 client configuration. However this limits us since we may have users that wish to combine
multiple OAuth2 clients into a single security domain, where access tokens may be exchanged between
clients for forwarded authentication.
Another example is api-tokens for service accounts. In the past we associated a private key to each
@ -37,7 +37,7 @@ possible.
Entities that rely on a cryptographic key will relate to a Key Object.
```
```text
┌─────────────────────┐
│ │
│ │
@ -75,7 +75,7 @@ Entities that rely on a cryptographic key will relate to a Key Object.
```
Key Objects have a Key Type denoting the type of material they contain. The types will be named
after the JWA algorithms from [rfc7518](https://www.rfc-editor.org/rfc/rfc7518). This allows easy
after the JWA algorithms from [RFC7518](https://www.rfc-editor.org/rfc/rfc7518). This allows easy
mapping to OAuth2 concepts and PKCS11 in the future.
- `ES256` (ECDSA using P-256 and SHA-256, `CKM_ECDSA_SHA256`)
@ -107,7 +107,7 @@ defines the currently active signing key for the object.
We have 3 keys defined with:
```
```text
k1 { status: valid, valid_from: 10 }
k2 { status: valid, valid_from: 14 }
k3 { status: valid, valid_from: 19 }
@ -135,7 +135,7 @@ On rotation the private key is _discarded_ to prevent future use of a rotated ke
Keys must be merged, and not deleted.
```
```text
class: KeyObject
uuid: ...
key_object_type: ECDSA_SHA256
@ -151,7 +151,7 @@ hs256_private: { id: ..., status: valid, public_key, private_key }
rs256_public: { id: ..., status: valid, public_key }
```
```
```text
┌─────────────────────┐ ┌─────────────────────┐
┌┴────────────────────┐│ ┌┴────────────────────┐│
┌─┴───────────────────┐ ││ ┌─┴───────────────────┐ ││
@ -187,7 +187,7 @@ these keys at run time. The object store must extract and have key-id lookup to
Entries that use a keyObject have a reference to it.
```
```text
class: oauth2_rs
key_object: Refer( ... )
```


@ -1,4 +1,4 @@
## Indexing
# Indexing
Indexing is deeply tied to the concept of filtering. Indexes exist to make the application of a
search term (filter) faster.
@ -25,10 +25,12 @@ components. However the ID is very important to indexing :)
If we wanted to find `Eq(name, john)` here, what do we need to do? A full table scan is where we
perform:
data = sqlite.do(SELECT * from id2entry);
for row in data:
entry = deserialise(row)
entry.match_filter(...) // check Eq(name, john)
```python
data = sqlite.do("SELECT * FROM id2entry")
for row in data:
    entry = deserialise(row)
    entry.match_filter(...)  # check Eq(name, john)
```
For a small database (maybe up to 20 objects), this is probably fine. But once you start to get much
larger this is really costly. We continually load, deserialise, check and free data that is not
@ -71,22 +73,21 @@ loads and one compare. That's 30000x faster (potentially ;) )!
To improve on this, if we had a query like Or(Eq(name, john), Eq(name, kris)) we can use our indexes
to speed this up.
We would query index_eq_name again, and we would perform the search for both john, and kris. Because
this is an OR we then union the two idl's, and we would have:
We would query `index_eq_name` again, and we would perform the search for both john, and kris.
Because this is an OR we then union the two idl's, and we would have:
```
```text
[04, 05,]
```
Now we just have to get entries 04,05 from id2entry, and we have our matching query. This means
Now we just have to get entries `04,05` from `id2entry`, and we have our matching query. This means
filters are often applied as idl set operations.
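The idl set operations described above can be sketched in Python. This is purely illustrative (the function and variable names are hypothetical, not Kanidm's real code, which uses the Rust `idlset` crate):

```python
def idl_union(a, b):
    # OR filter: candidates matching either term.
    return sorted(set(a) | set(b))

def idl_intersection(a, b):
    # AND filter: candidates matching both terms.
    return sorted(set(a) & set(b))

# Hypothetical index_eq_name lookups for "john" and "kris".
john_idl = [4]
kris_idl = [5]

# Or(Eq(name, john), Eq(name, kris)) unions the two idls.
print(idl_union(john_idl, kris_idl))  # [4, 5]
```

The resulting `[4, 5]` is the candidate set that is then fetched from `id2entry`.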
## Compressed ID lists
In order to make idl loading faster, and the set operations faster there is an idl library
(developed by me, firstyear), which will be used for this. To read more see:
https://github.com/Firstyear/idlset
<https://github.com/Firstyear/idlset>
## Filter Optimisation
@ -107,13 +108,13 @@ However, for targeted searches, filter optimisation really helps.
Imagine a query like:
```
```text
And(Eq(class, person), Eq(name, claire))
```
In this case, with our database of 250,000 persons, our idl's would have:
```
```text
And( idl[250,000 ids], idl(1 id))
```
@ -127,13 +128,13 @@ testing.
When we have this test threshold, there exists two possibilities for this filter.
```
```text
And( idl[250,000 ids], idl(1 id))
```
We load 250,000 idl and then perform the intersection with the idl of 1 value, and result in 1 or 0.
```
```text
And( idl(1 id), idl[250,000 ids])
```
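The reordering above can be sketched as a small helper (hypothetical names; a sketch of the idea, not the real optimiser):

```python
def optimise_and(term_idls):
    # Sort AND terms so the smallest candidate set is intersected first.
    # Once the running result is tiny (or empty), the large idls matter
    # far less; a real optimiser could avoid loading them entirely.
    ordered = sorted(term_idls, key=len)
    result = set(ordered[0])
    for idl in ordered[1:]:
        if not result:
            break  # empty intersection: nothing left to match
        result &= set(idl)
    return sorted(result)

person_idl = list(range(250_000))  # Eq(class, person)
claire_idl = [42]                  # Eq(name, claire)
print(optimise_and([person_idl, claire_idl]))  # [42]
```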
@ -156,16 +157,16 @@ longer IDLs.
Before we discuss the details of the states and update processes, we need to consider the index
types we require.
# Index types
## Index types
The standard index is a key-value, where the key is the lookup, and the value is the idl set of the
candidates. The examples follow the above.
For us, we will format the table names as:
- idx_eq_<attrname>
- idx_sub_<attrname>
- idx_pres_<attrname>
- `idx_eq_<attrname>`
- `idx_sub_<attrname>`
- `idx_pres_<attrname>`
These will be string, blob for SQL. The string is the pkey.
@ -177,16 +178,16 @@ We also require a special name to uuid, and uuid to name index. These are to acc
name2uuid and uuid2name functions which are common in resolving on search. These will be named in
the tables as:
- idx_name2uuid
- idx_uuid2name
- `idx_name2uuid`
- `idx_uuid2name`
They will be structured as string, string for both - where the uuid and name column matches the
correct direction, and is the primary key. We could use a single table, but if we change to sled we
need to split this, so we preempt this change and duplicate the data here.
# Indexing States
## Indexing States
- Reindex
### Reindex
A reindex is the only time when we create the tables needed for indexing. In all other phases if we
do not have the table for the insertion, we log the error, and move on, instructing in the logs to
@ -198,13 +199,13 @@ for the first time. This means we need an "initial indexed" flag or similar.
For all intents, a reindex is likely the same as "create" but just without replacing the entry. We
would just remove all the index tables before hand.
- Write operation index metadata
### Write operation index metadata
At the start of a write transaction, the schema passes us a map of the current attribute index
states so that on filter application or modification we are aware of what attrs are indexed. It is
assumed that `name2uuid` and `uuid2name` are always indexed.
- Search Index Metadata
### Search Index Metadata
When filters are resolved they are tagged by their indexed state to allow optimisation to occur. We
then process each filter element and its tag to determine the indexes needed to build a candidate
@ -215,20 +216,20 @@ and the `entry_match_no_index` routine.
shortcut where if the outermost term is a full indexed term, then we can avoid the
`entry_match_no_index` call.
- Create
### Create
This is one of the simplest steps. On create we iterate over the entries ava's and referencing the
index metadata of the transaction, we create the indexes as needed from the values (before dbv
conversion).
- Delete
### Delete
Given the Entry to delete, we remove the ava's and id's from each set as needed. Generally this will
only be for tombstones, but we still should check the process works. Important to check will be
entries with and without names, ensuring the name2uuid/uuid2name is correctly changed, and removal
of all the other attributes.
- Modify
### Modify
This is the truly scary and difficult situation. The simple method would be to "delete" all indexes
based on the pre-entry state, and then to create again. However the current design of Entry and


@ -1,4 +1,4 @@
# Oauth2 Application Listing
# OAuth2 Application Listing
A feature of some other IDM systems is to also double as a portal to linked applications. This
allows a convenient access point for users to discover and access linked applications without having
@ -14,20 +14,20 @@ already authenticated, and the IDM becomes the single "gateway" to accessing oth
## Access Control
The current design of the oauth2 resource servers (oauth2rs) is modeled around what the oauth2
protocol requires. This defines that in an oauth2 request, all of the requested scopes need be
The current design of the OAuth2 resource servers (oauth2rs) is modeled around what the OAuth2
protocol requires. This defines that in an OAuth2 request, all of the requested scopes need be
granted else it can not proceed. The current design is:
- scope maps - a relation of groups to the set of scopes that they grant
- implicit scopes - a set of scopes granted to all persons
While this works well for the oauth2 authorisation design, it doesn't work well from the kanidm side
While this works well for the OAuth2 authorisation design, it doesn't work well from the kanidm side
for managing _our_ knowledge of who is granted access to the application.
In order to limit who can see what applications we will need a new method to define who is allowed
access to the resource server on the Kanidm side, while also preserving OAuth2 semantics.
To fix this the current definition of scopes on oauth2 resource servers need to change.
To fix this the current definition of scopes on OAuth2 resource servers need to change.
- access scopes - a list of scopes (similar to implicit) that are used by the resource server for
granting access to the resource.
@ -38,10 +38,10 @@ To fix this the current definition of scopes on oauth2 resource servers need to
By changing to this method this removes the arbitrary implicit scope/scope map rules, and clearly
defines the set of scopes that grant access to the application, while also allowing extended scopes to
be sent that can attenuate the application behaviour. This also allows the access members reference
to be used to generate knowledge on the kanidm side of "who can access this oauth2 resource". This
can be used to limit the listed applications to these oauth2 applications. In addition we can then
use these access members to create access controls to strictly limit who can see what oauth2
applications to the admins of oauth2 applications, and the users of them.
to be used to generate knowledge on the kanidm side of "who can access this OAuth2 resource". This
can be used to limit the listed applications to these OAuth2 applications. In addition we can then
use these access members to create access controls to strictly limit who can see what OAuth2
applications to the admins of OAuth2 applications, and the users of them.
To support this, we should allow dynamic groups to be created so that the 'implicit scope' behaviour
which allow all persons to access an application can be emulated by making all persons a member of


@ -1,10 +1,10 @@
# Oauth2 Refresh Tokens
# OAuth2 Refresh Tokens
Due to how Kanidm authentication sessions were originally implemented they had short session times
(1 hour) due to the lack of privilege separation in tokens. Now with privilege separation being
implemented session lengths have been extended to 8 hours with possible increases in the future.
However, this leaves us with an issue with oauth2 - oauth2 access tokens are considered valid until
However, this leaves us with an issue with OAuth2 - OAuth2 access tokens are considered valid until
their expiry and we should not issue tokens with a validity of 8 hours or longer since that would
allow rogue users to have a long window of usage of the token before they were forced to re-auth. It
also means that in the case that an account must be forcefully terminated then the user would retain
@ -16,7 +16,7 @@ validity.
This is performed with access tokens and refresh tokens. The access token has a short lifespan
(proposed 15 minutes) and must be refreshed with Kanidm which can check the true session validity
and if the session has been revoked. This creates a short window for revocation to propagate to
oauth2 applications since each oauth2 application must periodically check in to keep their access
OAuth2 applications since each OAuth2 application must periodically check in to keep their access
token alive.
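The timing involved can be illustrated with a minimal sketch. The 15-minute figure is the proposal above; the function name and shape are assumptions for illustration only:

```python
ACCESS_TOKEN_LIFETIME = 15 * 60  # proposed 15 minutes, in seconds

def access_token_expired(issued_at: int, now: int) -> bool:
    # Once the short-lived access token passes this window the client
    # must use its refresh token, forcing a check-in with Kanidm where
    # true session validity (including revocation) is evaluated.
    return now >= issued_at + ACCESS_TOKEN_LIFETIME
```

With this window, revocation propagates to OAuth2 applications within at most 15 minutes, rather than the full 8-hour session length.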
## Risks
@ -38,7 +38,7 @@ and
Refresh tokens must only be used by the client application associated. Kanidm strictly enforces this
already with our client authorisation checks. This is discussed in
[rfc6749 section 10.4](https://www.rfc-editor.org/rfc/rfc6749#section-10.4).
[RFC6749 section 10.4](https://www.rfc-editor.org/rfc/rfc6749#section-10.4).
## Design


@ -14,7 +14,7 @@ replicas as easy as possible for new sites.
The intent of the replication coordinator (KRC) is to allow nodes to subscribe to the KRC which
configures the state of replication across the topology.
```
```text
1. Out of band - ┌────────────────┐
issue KRC ca + ────────────────┤ │
Client JWT. │ │
@ -92,14 +92,14 @@ There are two nodes, A and B.
The administrator configures both kanidm servers with replication urls.
```
```toml
# Server A
[replication]
origin = "repl://kanidmd_a:8444"
bindaddress = "[::]:8444"
```
```
```toml
# Server B
[replication]
origin = "repl://kanidmd_b:8444"
@ -109,7 +109,7 @@ bindaddress = "[::]:8444"
The administrator extracts their replication certificates with the kanidmd binary admin features.
This will reflect the `node_url` in the certificate.
```
```shell
kanidmd replication get-certificate
```
@ -117,7 +117,7 @@ For each node, a replication configuration is created in json.
For A pulling from B.
```
```toml
[replication."repl://kanidmd_b:8444"]
type = "mutual-pull"
partner_cert = "M..."
@ -126,7 +126,7 @@ automatic_refresh = false
For B pulling from A.
```
```toml
[replication."repl://kanidmd_a:8444"]
type = "mutual-pull"
partner_cert = "M..."
@ -142,7 +142,7 @@ The KRC is enabled as a replication parameter. This informs the node that it mus
nodes for its replication topology, and it prepares the node for serving that replication metadata.
This is analogous to a single node operation configuration.
```
```toml
[replication]
origin = "repl://kanidmd_a:8444"
bindaddress = "[::]:8444"
@ -155,7 +155,7 @@ krc_enable = true
All other nodes will have a configuration of:
```
```toml
[replication]
origin = "repl://kanidmd_b:8444"
bindaddress = "[::]:8444"
@ -215,7 +215,7 @@ another node.
Imagine the following example. Here, Node A is acting as the KRC.
```
```text
┌─────────────────┐ ┌─────────────────┐
│ │ │ │
│ │ │ │
@ -244,7 +244,7 @@ This would allow Node A to be aware of B, C, D and then create a full mesh.
We wish to decommission Node A and promote Node B to become the new KRC. Imagine at this point we
cut over Node D to point its KRC at Node B.
```
```text
┌─────────────────┐ ┌─────────────────┐
│ │ │ │
│ │ │ │
@ -280,9 +280,8 @@ This allows a time window where servers can be moved from Node A to Node B.
Server Start Up Process
```
Token is read from a file defined in the env.
works with systemd + docker secrets
```text
Token is read from a file defined in the env - this works with systemd + docker secrets
Token is JWT with HS256. (OR JWE + AES-GCM)
@ -298,40 +297,36 @@ No TOKEN -> Implies KRC role.
Client Process
```
connect to KRC
- provide token for site binding
- submit my server_uuid
- submit my public cert with the request
- submit current domain_uuid + generation if possible
- Connect to KRC
- Provide token for site binding
- Submit my server_uuid
- Submit my public cert with the request
- Submit current domain_uuid + generation if possible
- reply from KRC -> repl config map.
- config_map contains issuing KRC server uuid.
- if config_map generation > current config_map
- reload config.
- if config_map == None
- current map remains valid.
```
- Reply from KRC -> repl config map.
- Config_map contains issuing KRC server uuid.
- If config_map generation > current config_map
- Reload config.
- If config_map == None
- Current map remains valid.
KRC Process
```
- Validate Token
- is server_uuid present as a server entry?
- if no: add it with site association
- if yes: verify site associated to token
- is server_uuid certificate the same as before?
- if no: replace it.
- compare domain_uuid + generation
- if different supply config
- else None (no change)
```
- Is server_uuid present as a server entry?
- If no: add it with site association
- If yes: verify site associated to token
- Is server_uuid certificate the same as before?
- If no: replace it.
- Compare domain_uuid + generation
- If different supply config
- Else None (no change)
### FUTURE: Possible Read Only nodes
For R/O nodes, we need to define how R/W will pass through. I can see a possibility like
```
```text
No direct line
┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ of sight─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐

View file

@ -1,14 +1,9 @@
# REST Interface
<!-- deno-fmt-ignore-start -->
{{#template ../../templates/kani-warning.md
imagepath=../../images/
title=Note!
text=This is a work in progress and not all endpoints have perfect schema definitions, but they're all covered!
}}
<!-- deno-fmt-ignore-end -->
> [!NOTE]
>
> This is a work in progress and not all endpoints have perfect schema definitions, but
> they're all covered!
We're generating an OpenAPI specification file and Swagger interface using
[utoipa](https://crates.io/crates/utoipa).

View file

@ -140,7 +140,7 @@ associated state identifier (cookie).
### Batch Operations
Per [rfc7644 section 3.7](https://datatracker.ietf.org/doc/html/rfc7644#section-3.7)
Per [RFC7644 section 3.7](https://datatracker.ietf.org/doc/html/rfc7644#section-3.7)
A requirement of the sync account will be a PATCH request to update the state identifier as the
first operation of the batch request. Failure to do so will result in an error.
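As a hedged sketch, such a batch could take the RFC7644 bulk request shape below. The `BulkRequest` schema URN, `Operations` layout, and `method`/`path`/`data` keys come from the RFC; the `/Sync` path and the `sync_state_identifier` attribute name are illustrative assumptions, not the actual Kanidm schema:

```json
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
  "Operations": [
    {
      "method": "PATCH",
      "path": "/Sync",
      "data": { "sync_state_identifier": "cookie-from-last-run" }
    },
    {
      "method": "POST",
      "path": "/Users",
      "data": { "userName": "alice" }
    }
  ]
}
```

The PATCH to the state identifier appears first, so a stale cookie fails the whole batch before any entries are modified.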

View file

@ -0,0 +1 @@
# RADIUS Module Development

View file

@ -167,15 +167,10 @@ Tested with Ubuntu 20.04 and 22.04.
### Windows
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=images
title=NOTICE
text=Our support for Windows is still in development, so you may encounter some compilation or build issues.
}}
<!-- deno-fmt-ignore-end -->
> [!CAUTION]
>
> Our support for Windows is still in development, so you may encounter some compilation
> or build issues.
You need [rustup](https://rustup.rs/) to install a Rust toolchain.
@ -230,15 +225,10 @@ git checkout <feature-branch-name>
cargo test
```
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=images
title=IMPORTANT
text=Kanidm is unable to accept code that is generated by an AI for legal reasons. copilot and other tools that generate code in this way can not be used in Kanidm.
}}
<!-- deno-fmt-ignore-end -->
> [!IMPORTANT]
>
> Kanidm is unable to accept code that is generated by an AI for legal reasons. Copilot
> and other tools that generate code in this way cannot be used in Kanidm.
When you are ready for review (even if the feature isn't complete and you just want some advice):
@ -286,7 +276,7 @@ git rebase --abort
You'll need `mdbook` and the extensions to build the book:
```shell
cargo install mdbook mdbook-mermaid mdbook-template
cargo install mdbook mdbook-mermaid mdbook-alerts
```
To build it:
@ -404,15 +394,10 @@ You may find it easier to modify `~/.config/kanidm` per the
### Raw actions
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=images
title=NOTICE
text=It's not recommended to use these tools outside of extremely complex or advanced development requirements. These are a last resort!
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> It's not recommended to use these tools outside of extremely complex or advanced
> development requirements. These are a last resort!
The server has a low-level stateful API you can use for more complex or advanced tasks on large
numbers of entries at once. Some examples are below, but generally we advise you to use the APIs or

View file

@ -5,18 +5,18 @@ for a production deployment you follow the steps in the
[installation chapter](installing_the_server.html) instead as there are a number of security
considerations you should be aware of for production deployments.
### Requirements
## Requirements
- docker or podman
- `x86_64` CPU supporting `x86_64_v2` OR `aarch64` CPU supporting `neon`
### Get the software
## Get the software
```bash
docker pull docker.io/kanidm/server:latest
```
### Configure the container
## Configure the container
```bash
docker volume create kanidmd
@ -27,7 +27,7 @@ docker create --name kanidmd \
docker.io/kanidm/kanidm/server:latest
```
### Configure the server
## Configure the server
Create server.toml
@ -35,13 +35,13 @@ Create server.toml
{{#rustdoc_include ../../examples/server_container.toml}}
```
### Add configuration to container
## Add configuration to container
```bash
docker cp server.toml kanidmd:/data/server.toml
```
### Generate evaluation certificates
## Generate evaluation certificates
```bash
docker run --rm -i -t -v kanidmd:/data \
@ -49,13 +49,13 @@ docker run --rm -i -t -v kanidmd:/data \
kanidmd cert-generate
```
### Start Kanidmd Container
## Start Kanidmd Container
```bash
docker start kanidmd
```
### Recover the Admin Role Passwords
## Recover the Admin Role Passwords
The `admin` account is used to configure Kanidm itself.
@ -66,12 +66,12 @@ docker exec -i -t kanidmd \
The `idm_admin` account is used to manage persons and groups.
```
```shell
docker exec -i -t kanidmd \
kanidmd recover-account idm_admin
```
### Setup the client configuration
## Set up the client configuration
```toml
# ~/.config/kanidm
@ -80,26 +80,26 @@ uri = "https://localhost:443"
verify_ca = false
```
### Check you can login
## Check you can login
```bash
kanidm login --name idm_admin
```
### Create an account for yourself
## Create an account for yourself
```
```shell
kanidm person create <your username> <Your Displayname>
```
### Setup your account credentials
## Set up your account credentials
```
```shell
kanidm person credential create-reset-token <your username>
```
Then follow the presented steps.
### What next?
## What next?
You can now follow the steps in the [administration section](administration.md)

View file

@ -108,7 +108,7 @@ review, assessment and improvement.
No, it is not possible to swap out the SQLite database for any other type of SQL server.
_ATTEMPTING THIS WILL BREAK YOUR KANIDM INSTANCE IRREPARABLY_
**_ATTEMPTING THIS WILL BREAK YOUR KANIDM INSTANCE IRREPARABLY_**
This question is normally asked because people want to set up multiple Kanidm servers connected to a
single database.

View file

@ -62,15 +62,11 @@ brew install kanidm
### Fedora / Centos Stream
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-warning.md
imagepath=images
title=Take Note!
text=Kanidm frequently uses new Rust versions and features, however Fedora and Centos frequently are behind in Rust releases. As a result, they may not always have the latest Kanidm versions available.
}}
<!-- deno-fmt-ignore-end -->
> [!NOTE]
>
> Kanidm frequently uses new Rust versions and features; however, Fedora and CentOS
> are often behind in Rust releases. As a result, they may not always have the latest Kanidm
> versions available.
Fedora has limited support through the development repository. You need to add the repository
metadata into the correct directory:

View file

@ -1,20 +1,16 @@
# LDAP
While many applications can support external authentication and identity services through Oauth2,
While many applications can support external authentication and identity services through OAuth2,
not all services can. Lightweight Directory Access Protocol (LDAP) has been the "universal language"
of authentication for many years, with almost every application in the world being able to search
and bind to LDAP. As many organisations still rely on LDAP, Kanidm can host a read-only LDAP
interface for these legacy applications and services.
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=Warning!
text=The LDAP server in Kanidm is not a full LDAP server. This is intentional, as Kanidm wants to cover the common use cases - simple bind and search. The parts we do support are RFC compliant however.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> The LDAP server in Kanidm is not a complete LDAP implementation. This is intentional,
> as Kanidm wants to cover the common use cases - simple bind and search. The parts we do support
> are RFC compliant however.
## What is LDAP
@ -125,7 +121,7 @@ ldapsearch ... -x '(name=admin)' cn objectClass displayname memberof
## Group Memberships
Group membership is defined in rfc2307bis or Active Directory style. This means groups are
Group membership is defined in RFC2307bis or Active Directory style. This means groups are
determined from the "memberof" attribute which contains a DN to a group.
## People Accounts
@ -163,15 +159,10 @@ of `idm.example.com` will become `dc=idm,dc=example,dc=com`.
However, you may wish to change this to something shorter or at a higher level within your domain
name.
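As a quick local illustration of the mapping above (plain shell, no Kanidm required), each dot-separated label of the domain becomes a `dc=` component:

```shell
# Derive the default LDAP basedn from a DNS domain name.
domain="idm.example.com"
basedn="dc=$(printf '%s' "$domain" | sed 's/\./,dc=/g')"
echo "$basedn"
# dc=idm,dc=example,dc=com
```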
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=Warning!
text=Changing the LDAP Basedn will require you to reconfigure your client applications so they search the correct basedn. Be careful when changing this value!
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Changing the LDAP Basedn will require you to reconfigure your client applications so
> they search the correct basedn. Be careful when changing this value!
As an admin you can change the domain ldap basedn with:
@ -191,7 +182,7 @@ replicated topology, you must restart all servers.
If you do not have applications that require LDAP password binds, then you should disable this
function to limit access.
```
```shell
kanidm system domain set-ldap-allow-unix-password-bind [true|false]
kanidm system domain set-ldap-allow-unix-password-bind -D admin false
```

View file

@ -2,7 +2,7 @@
OAuth is a web authorisation protocol that allows "single sign on". It's key to note OAuth only
provides authorisation, as the protocol in its default forms does not provide identity or
authentication information. All that Oauth2 provides is information that an entity is authorised for
authentication information. All that OAuth2 provides is information that an entity is authorised for
the requested resources.
OAuth can tie into extensions allowing an identity provider to reveal information about authorised
@ -69,13 +69,13 @@ Kanidm will expose its OAuth2 APIs at the following URLs:
- user auth url: `https://idm.example.com/ui/oauth2`
- api auth url: `https://idm.example.com/oauth2/authorise`
- token url: `https://idm.example.com/oauth2/token`
- rfc7662 token introspection url: `https://idm.example.com/oauth2/token/introspect`
- rfc7009 token revoke url: `https://idm.example.com/oauth2/token/revoke`
- RFC7662 token introspection URL: `https://idm.example.com/oauth2/token/introspect`
- RFC7009 token revoke URL: `https://idm.example.com/oauth2/token/revoke`
Oauth2 Server Metadata - you need to substitute your OAuth2 `:client_id:` in the following urls:
OAuth2 Server Metadata - you need to substitute your OAuth2 `:client_id:` in the following URLs:
- Oauth2 issuer uri: `https://idm.example.com/oauth2/openid/:client_id:/`
- Oauth2 rfc8414 discovery:
- OAuth2 issuer URL: `https://idm.example.com/oauth2/openid/:client_id:/`
- OAuth2 RFC8414 discovery:
`https://idm.example.com/oauth2/openid/:client_id:/.well-known/oauth-authorization-server`
OpenID Connect discovery - you need to substitute your OAuth2 `:client_id:` in the following URLs:
@ -153,17 +153,16 @@ kanidm system oauth2 update-scope-map <name> <kanidm_group_name> [scopes]...
kanidm system oauth2 update-scope-map nextcloud nextcloud_admins admin
```
<!-- deno-fmt-ignore-start -->
> [!WARNING]
>
> If you are creating an OpenID Connect (OIDC) client you **MUST** provide a scope map
> named `openid`. Without this, OpenID Connect clients **WILL NOT WORK**!
{{#template ../templates/kani-warning.md
imagepath=../images
title=WARNING
text=If you are creating an OpenID Connect (OIDC) client you <b>MUST</b> provide a scope map named <code>openid</code>. Without this, OpenID Connect clients <b>WILL NOT WORK</b>!
}}
Also...
<!-- deno-fmt-ignore-end -->
> **HINT** OpenID connect allows a number of scopes that affect the content of the resulting
> [!TIP]
>
> OpenID Connect allows a number of scopes that affect the content of the resulting
> authorisation token. If one of the following scopes are requested by the OpenID client, then the
> associated claims may be added to the authorisation token. It is not guaranteed that all of the
> associated claims will be added.
@ -235,7 +234,7 @@ groups that would receive the same claim, the values of these maps are merged.
To create or update a claim map on a client:
```
```shell
kanidm system oauth2 update-claim-map <name> <claim_name> <kanidm_group_name> [values]...
kanidm system oauth2 update-claim-map nextcloud account_role nextcloud_admins admin login ...
```
@ -243,13 +242,14 @@ kanidm system oauth2 update-claim-map nextcloud account_role nextcloud_admins ad
To change the join strategy for a claim name: valid strategies are csv (comma separated value), ssv
(space separated value) and array (a native JSON array). The default strategy is array.
```
```shell
kanidm system oauth2 update-claim-map-join <name> <claim_name> [csv|ssv|array]
kanidm system oauth2 update-claim-map-join nextcloud account_role csv
```
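To see locally what each strategy produces for two values (plain shell, no Kanidm involved; the values are arbitrary):

```shell
# Emulate the claim join strategies for two values.
values="value_a value_b"
csv=$(printf '%s' "$values" | tr ' ' ',')                          # csv join
array=$(printf '%s' "$values" | sed 's/[^ ]*/"&"/g; s/ /, /; s/.*/[&]/')  # array join
printf 'csv:   %s\n' "$csv"     # value_a,value_b
printf 'ssv:   %s\n' "$values"  # value_a value_b
printf 'array: %s\n' "$array"   # ["value_a", "value_b"]
```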
```
# Example claim formats
Example claim formats:
```text
# csv
claim: "value_a,value_b"
@ -262,7 +262,7 @@ claim: ["value_a", "value_b"]
To delete a group from a claim map
```
```shell
kanidm system oauth2 delete-claim-map <name> <claim_name> <kanidm_group_name>
kanidm system oauth2 delete-claim-map nextcloud account_role nextcloud_admins
```
@ -317,6 +317,10 @@ exchange, the system can redirect to the native application.
To support this Kanidm allows supplemental opaque origins to be configured on clients.
> [!WARNING]
>
> The ability to configure multiple origins is NOT intended to allow you to share a single
> Kanidm client definition between multiple OAuth2 clients. This fundamentally breaks the
> OAuth2 security model and is NOT SUPPORTED as a configuration. Multiple origins are only
> intended to allow supplemental redirects within the _same_ client application.
```bash
kanidm system oauth2 add-redirect-url <name> <url>
kanidm system oauth2 remove-redirect-url <name> <url>
@ -364,15 +368,10 @@ Not all clients support modern standards like PKCE or ECDSA. In these situations
to disable these on a per-client basis. Disabling these on one client will not affect others. These
settings are explained in detail in [our FAQ](../frequently_asked_questions.html#oauth2)
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=WARNING
text=Changing these settings MAY have serious consequences on the security of your services. You should avoid changing these if at all possible!
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Changing these settings MAY have serious consequences on the security of your services.
> You should avoid changing these if at all possible!
To disable PKCE for a confidential client:
@ -444,7 +443,7 @@ Miniflux is a feedreader that supports OAuth 2.0 and OpenID connect. It automati
`.well-known` parts to the discovery endpoint. The application name in the redirect URL needs to
match the `OAUTH2_PROVIDER` name.
```
```conf
OAUTH2_PROVIDER = "oidc";
OAUTH2_CLIENT_ID = "miniflux";
OAUTH2_CLIENT_SECRET = "<oauth2_rs_basic_secret>";
@ -537,37 +536,37 @@ charts, graphs and alerts when connected to supported data source.
Prepare the environment:
```bash
$ kanidm system oauth2 create grafana "grafana.domain.name" https://grafana.domain.name
$ kanidm system oauth2 update-scope-map grafana grafana_users email openid profile groups
$ kanidm system oauth2 enable-pkce grafana
$ kanidm system oauth2 get grafana
$ kanidm system oauth2 show-basic-secret grafana
kanidm system oauth2 create grafana "grafana.domain.name" https://grafana.domain.name
kanidm system oauth2 update-scope-map grafana grafana_users email openid profile groups
kanidm system oauth2 enable-pkce grafana
kanidm system oauth2 get grafana
kanidm system oauth2 show-basic-secret grafana
<SECRET>
```
Create Grafana user groups:
```bash
$ kanidm group create 'grafana_superadmins'
$ kanidm group create 'grafana_admins'
$ kanidm group create 'grafana_editors'
$ kanidm group create 'grafana_users'
kanidm group create 'grafana_superadmins'
kanidm group create 'grafana_admins'
kanidm group create 'grafana_editors'
kanidm group create 'grafana_users'
```
Set up the claim map that will set what role each group will map to in Grafana:
```bash
$ kanidm system oauth2 update-claim-map-join 'grafana' 'grafana_role' array
$ kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_superadmins' 'GrafanaAdmin'
$ kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_admins' 'Admin'
$ kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_editors' 'Editor'
kanidm system oauth2 update-claim-map-join 'grafana' 'grafana_role' array
kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_superadmins' 'GrafanaAdmin'
kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_admins' 'Admin'
kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_editors' 'Editor'
```
Don't forget that every Grafana user needs to be a member of one of the above groups and have a name and e-mail:
```bash
$ kanidm person update <user> --legalname "Personal Name" --mail "user@example.com"
$ kanidm group add-members 'grafana_users' 'my_user_group_or_user_name'
kanidm person update <user> --legalname "Personal Name" --mail "user@example.com"
kanidm group add-members 'grafana_users' 'my_user_group_or_user_name'
```
And add the following to your Grafana config:

View file

@ -3,15 +3,10 @@
[SSSD](https://sssd.io/) is an alternative [PAM and nsswitch](./pam_and_nsswitch) provider that is
commonly available on Linux.
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=WARNING
text=SSSD should be considered a "last resort". If possible, always use the native Kanidm pam and nsswitch tools instead.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> SSSD should be considered a "last resort". If possible, always use the native Kanidm
> PAM and nsswitch tools instead.
## Limitations
@ -48,7 +43,7 @@ compatibility and issue resolution.
An example configuration for SSSD is provided.
```
```toml
# Example configuration for SSSD to resolve accounts via Kanidm
#
# This should always be a "last resort". If possible you should always use the

View file

@ -41,15 +41,9 @@ Kanidm relies on modern CPU optimisations for many operations. As a result your
Older or unsupported CPUs may raise a `SIGILL` (Illegal Instruction) on hardware that is not
supported by the project.
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-alert.md
imagepath=images
title=Tip
text=You can check your cpu flags on Linux with the command `lscpu`
}}
<!-- deno-fmt-ignore-end -->
> [!TIP]
>
> You can check your CPU flags on Linux with the command `lscpu`
#### Memory

View file

@ -3,15 +3,10 @@
The recycle bin is a storage of deleted entries from the server. This allows recovery from mistakes
for a period of time.
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-warning.md
imagepath=images
title=Warning!
text=The recycle bin is a best effort - when recovering in some cases not everything can be "put back" the way it was. Be sure to check your entries are valid once they have been revived.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> The recycle bin is a best effort - when recovering in some cases not everything can be
> "put back" the way it was. Be sure to check your entries are valid once they have been revived.
## Where is the Recycle Bin?
@ -52,7 +47,7 @@ kanidm recycle-bin revive --name admin <uuid>
The recycle bin is a best effort to restore your data - there are some cases where the revived
entries may not be the same as they were when they were deleted. This generally revolves around
reference types such as group membership, or when the reference type includes supplemental map data
such as the oauth2 scope map type.
such as the OAuth2 scope map type.
An example of this data loss is the following steps:

View file

@ -1,14 +1,9 @@
# Deployment
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=WARNING
text=Replication is a newely developed feature. This means it requires manual configuration and careful monitoring. You should take regular backups if you choose to proceed.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Replication is a newly developed feature. This means it requires manual configuration
> and careful monitoring. You should take regular backups if you choose to proceed.
## Node Setup

View file

@ -1,14 +1,9 @@
# Planning
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=WARNING
text=Replication is a newely developed feature. This means it requires manual configuration and careful monitoring. You should keep backups if you choose to proceed.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Replication is a newly developed feature. This means it requires manual configuration
> and careful monitoring. You should keep backups if you choose to proceed.
It is important that you plan your replication deployment before you proceed. You may have a need
for high availability within a datacentre, geographic redundancy, or improvement of read scaling.

View file

@ -16,15 +16,10 @@ files. The full options and explanations are in the
[kanidmd_core::config::ServerConfig](https://kanidm.github.io/kanidm/master/rustdoc/kanidmd_core/config/struct.ServerConfig.html)
docs page for your particular build.
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-warning.md
imagepath=images
title=Warning!
text=You MUST set the "domain", "origin", "tls_chain" and "tls_path" options via one method or the other, or the server cannot start!
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> You MUST set the "domain", "origin", "tls_chain" and "tls_path" options via one method
> or the other, or the server cannot start!
The following is a commented example configuration.
@ -35,15 +30,10 @@ The following is a commented example configuration.
This example is located in
[examples/server_container.toml](https://github.com/kanidm/kanidm/blob/master/examples/server_container.toml).
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-warning.md
imagepath=images
title=Warning!
text=You MUST set the "domain" name correctly, aligned with your "origin", else the server may refuse to start or some features (e.g. WebAuthn, OAuth2) may not work correctly!
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> You MUST set the "domain" name correctly, aligned with your "origin", else the server
> may refuse to start or some features (e.g. WebAuthn, OAuth2) may not work correctly!
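For instance, an aligned pair might look like the following (the hostname is an illustrative placeholder; both options are described in the configuration reference above):

```toml
domain = "idm.example.com"
origin = "https://idm.example.com"
```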
### Check the configuration is valid
@ -77,15 +67,10 @@ docker run --cap-add NET_BIND_SERVICE \
kanidm/server:latest
```
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-alert.md
imagepath=images
title=Tip
text=However you choose to run your server, you should document and keep note of the docker run / create command you chose to start the instance. This will be used in the upgrade procedure.
}}
<!-- deno-fmt-ignore-end -->
> [!TIP]
>
> However you choose to run your server, you should document and keep note of the docker run
> / create command you chose to start the instance. This will be used in the upgrade procedure.
### Default Admin Accounts

View file

@ -30,15 +30,11 @@ Docker doesn't follow a "traditional" method of updates. Rather you remove the o
container and recreate it with a newer version. This document will help walk you through that
process.
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-alert.md
imagepath=images
title=Tip
text=You should have documented and preserved your kanidm container create / run command from the server preparation guide. If not, you'll need to use "docker inspect" to work out how to recreate these parameters.
}}
<!-- deno-fmt-ignore-end -->
> [!TIP]
>
> You should have documented and preserved your Kanidm container create / run command from
> the server preparation guide. If not, you'll need to use `docker inspect` to work out how to
> recreate these parameters.
### Upgrade Check
@ -82,15 +78,10 @@ See [backup and restore](backup_restore.md)
### Update your Instance
<!-- deno-fmt-ignore-start -->
{{#template templates/kani-warning.md
imagepath=images
title=WARNING
text=Downgrades are not possible. It is critical you know how to backup and restore before you proceed with this step.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> Downgrades are not possible. It is critical you know how to backup and restore before
> you proceed with this step.
Docker updates operate by deleting and recreating the container. All state that needs to be
preserved is within your storage volume.

View file

@ -53,15 +53,10 @@ kanidm system sync generate-token ipasync mylabel
token: eyJhbGci...
```
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=Warning!
text=The sync account token has a high level of privilege, able to create new accounts and groups. It should be treated carefully as a result!
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> The sync account token has a high level of privilege, able to create new accounts and
> groups. It should be treated carefully as a result!
If you need to revoke the token, you can do so with:
@ -123,15 +118,10 @@ If you are performing a migration from an external IDM to Kanidm, when that migr
you can nominate that Kanidm now owns all of the imported data. This is achieved by finalising the
sync account.
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=Warning!
text=You can not undo this operation. Once you have finalised an agreement, Kanidm owns all of the synchronised data, and you can not resume synchronisation.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> You cannot undo this operation. Once you have finalised an agreement, Kanidm owns all
> of the synchronised data, and you can not resume synchronisation.
```bash
kanidm system sync finalise <sync account name>
@ -146,15 +136,10 @@ Once finalised, imported accounts can now be fully managed by Kanidm.
If you decide to cease importing accounts or need to remove all imported accounts from a sync
account, you can choose to terminate the agreement removing all data that was imported.
<!-- deno-fmt-ignore-start -->
{{#template ../templates/kani-warning.md
imagepath=../images
title=Warning!
text=You can not undo this operation. Once you have terminated an agreement, Kanidm deletes all of the synchronised data, and you can not resume synchronisation.
}}
<!-- deno-fmt-ignore-end -->
> [!WARNING]
>
> You cannot undo this operation. Once you have terminated an agreement, Kanidm deletes
> all of the synchronised data, and you can not resume synchronisation.
```bash
kanidm system sync terminate <sync account name>

View file

@ -1,6 +1,6 @@
# LDAP
If you have an LDAP server that supports sync repl (rfc4533 content synchronisation) then you are
If you have an LDAP server that supports sync repl (RFC4533 content synchronisation) then you are
able to synchronise from it to Kanidm for the purposes of coexistence or migration.
If there is a specific Kanidm sync tool for your LDAP server, you should use that instead of the
@ -34,14 +34,14 @@ enable synchronisation.
You must enable the syncprov overlay in slapd.conf
```
```text
moduleload syncprov.la
overlay syncprov
```
In addition you must grant an account full read access and raise its search limits.
```
```text
access to *
by dn.base="cn=sync,dc=example,dc=com" read
by * break

View file

@ -1,4 +1,6 @@
```
# Running DHAT profiling
```shell
cargo test --features=dhat-heap test_idm_authsession_simple_password_mech
cargo install cargo-flamegraph