diff --git a/.editorconfig b/.editorconfig new file mode 100644 index 000000000..7eb6f3702 --- /dev/null +++ b/.editorconfig @@ -0,0 +1,10 @@ +# Documentation: https://editorconfig.org/ + +root = true + +[*.md] +charset = utf-8 +end_of_line = lf +indent_size = 2 +max_line_length = 100 +trim_trailing_whitespace = true diff --git a/.github/ISSUE_TEMPLATE/an-idea-or-question.md b/.github/ISSUE_TEMPLATE/an-idea-or-question.md index 40bd85f46..12af7bea1 100644 --- a/.github/ISSUE_TEMPLATE/an-idea-or-question.md +++ b/.github/ISSUE_TEMPLATE/an-idea-or-question.md @@ -8,13 +8,17 @@ assignees: '' --- **Is your feature request related to a problem? Please describe.** + A clear description of what the problem is. Ex. I'm confused by, or would like to know how to... **Describe the solution you'd like** + A description of what you'd expect to happen. **Describe alternatives you've considered** + Are there any alternative solutions or features you've considered. **Additional context** + Add any other context or screenshots about the feature request here. diff --git a/.github/ISSUE_TEMPLATE/security_report.md b/.github/ISSUE_TEMPLATE/security_report.md index c062a9be4..cf2b2b98d 100644 --- a/.github/ISSUE_TEMPLATE/security_report.md +++ b/.github/ISSUE_TEMPLATE/security_report.md @@ -7,7 +7,7 @@ assignees: '' --- - -This is also a concern for modification, where the modification attempt may or may not -fail depending on the entries and if you can/can't see them. - -**IDEA:** You can only `delete`/`modify` within the read scope you have. If you can't -read it (based on the read rules of `search`), you can't `delete` it. This is in addition to the filter -rules of the `delete` applying as well. So performing a `delete` of `Pres(class)`, will only delete -in your `read` scope and will never disclose if you are denied access. +This is also a concern for modification, where the modification attempt may or may not fail +depending on the entries and if you can/can't see them. +**IDEA:** You can only `delete`/`modify` within the read scope you have. If you can't read it (based +on the read rules of `search`), you can't `delete` it. This is in addition to the filter rules of +the `delete` applying as well. So performing a `delete` of `Pres(class)`, will only delete in your +`read` scope and will never disclose if you are denied access. -"Create" Application ------------------- +## "Create" Application Create seems like the easiest to apply. Ensure that only the attributes in `createattr` are in the `createevent`, ensure the classes only contain the set in `createclass`, then finally apply `filter_no_index` to the entry to entry. If all of this passes, the create is allowed. A key point is that there is no union of `create` ACI's - the WHOLE ACI must pass, not parts of -multiple. This means if a control say "allows creating group with member" and "allows creating -user with name", creating a group with `name` is not allowed - despite your ability to create -an entry with `name`, its classes don't match. This way, the administrator of the service can define -create controls with specific intent for how they will be used without the risk of two -controls causing unintended effects (`users` that are also `groups`, or allowing invalid values. +multiple. This means if a control say "allows creating group with member" and "allows creating user +with name", creating a group with `name` is not allowed - despite your ability to create an entry +with `name`, its classes don't match. 
This way, the administrator of the service can define create +controls with specific intent for how they will be used without the risk of two controls causing +unintended effects (`users` that are also `groups`, or allowing invalid values. -An important consideration is how to handle overlapping ACI. If two ACI *could* match the create +An important consideration is how to handle overlapping ACI. If two ACI _could_ match the create should we enforce both conditions are upheld? Or only a single upheld ACI allows the create? -In some cases it may not be possible to satisfy both, and that would block creates. The intent -of the access profile is that "something like this CAN" be created, so I believe that provided -only a single control passes, the create should be allowed. +In some cases it may not be possible to satisfy both, and that would block creates. The intent of +the access profile is that "something like this CAN" be created, so I believe that provided only a +single control passes, the create should be allowed. -"Modify" Application ------------------- +## "Modify" Application Modify is similar to Create, however we specifically filter on the `modlist` action of `present`, -`removed` or `purged` with the action. The rules of create still apply; provided all requirements -of the modify are permitted, then it is allowed once at least one profile allows the change. +`removed` or `purged` with the action. The rules of create still apply; provided all requirements of +the modify are permitted, then it is allowed once at least one profile allows the change. -A key difference is that if the modify ACP lists multiple `presentattr` types, the modify request -is valid if it is only modifying one attribute. IE we say `presentattr: name, email`, but we -only attempt to modify `email`. +A key difference is that if the modify ACP lists multiple `presentattr` types, the modify request is +valid if it is only modifying one attribute. IE we say `presentattr: name, email`, but we only +attempt to modify `email`. -Considerations --------------- +## Considerations -* When should access controls be applied? During an operation, we only validate schema after - pre* Plugin application, so likely it has to be "at that point", to ensure schema-based - validity of the entries that are allowed to be changed. -* Self filter keyword should compile to `eq("uuid", "....")`. When do we do this and how? -* `memberof` could take `name` or `uuid`, we need to be able to resolve this correctly, but this is +- When should access controls be applied? During an operation, we only validate schema after pre* + Plugin application, so likely it has to be "at that point", to ensure schema-based validity of the + entries that are allowed to be changed. +- Self filter keyword should compile to `eq("uuid", "....")`. When do we do this and how? +- `memberof` could take `name` or `uuid`, we need to be able to resolve this correctly, but this is likely an issue in `memberof` which needs to be addressed, ie `memberof uuid` vs `memberof attr`. -* Content controls in `create` and `modify` will be important to get right to avoid the security issues - of LDAP access controls. Given that `class` has special importance, it's only right to give it extra - consideration in these controls. -* In the future when `recyclebin` is added, a `re-animation` access profile should be created allowing - revival of entries given certain conditions of the entry we are attempting to revive. 
A service-desk user - should not be able to revive a deleted high-privilege user. +- Content controls in `create` and `modify` will be important to get right to avoid the security + issues of LDAP access controls. Given that `class` has special importance, it's only right to give + it extra consideration in these controls. +- In the future when `recyclebin` is added, a `re-animation` access profile should be created + allowing revival of entries given certain conditions of the entry we are attempting to revive. A + service-desk user should not be able to revive a deleted high-privilege user. diff --git a/kanidm_book/src/developers/designs/access_profiles_rework_2022.md b/kanidm_book/src/developers/designs/access_profiles_rework_2022.md index 5628c7411..4bf66b6ce 100644 --- a/kanidm_book/src/developers/designs/access_profiles_rework_2022.md +++ b/kanidm_book/src/developers/designs/access_profiles_rework_2022.md @@ -1,4 +1,3 @@ - # Access Profiles Rework 2022 Access controls are critical for a project like Kanidm to determine who can access what on other @@ -10,69 +9,71 @@ a complete and useful IDM. The original design of the access control system was intended to satisfy our need for flexibility, but we have begun to discover a number of limitations. The design incorporating filter queries makes them hard to administer as we have not often publicly talked about the filter language and how it -internally works. Because of their use of filters it is hard to see on an entry "what" access controls -will apply to entries, making it hard to audit without actually calling the ACP subsystem. Currently -the access control system has a large impact on performance, accounting for nearly 35% of the time taken -in a search operation. +internally works. Because of their use of filters it is hard to see on an entry "what" access +controls will apply to entries, making it hard to audit without actually calling the ACP subsystem. +Currently the access control system has a large impact on performance, accounting for nearly 35% of +the time taken in a search operation. -Additionally, the default access controls that we supply have started to run into limits and rough cases -due to changes as we have improved features. Some of this was due to limited design with user cases -in mind during development. +Additionally, the default access controls that we supply have started to run into limits and rough +cases due to changes as we have improved features. Some of this was due to limited design with user +cases in mind during development. -To resolve this a number of coordinating features need implementation to improve this situation. These -features will be documented *first*, and the use cases *second* with each use case linking to the -features that satisfy it. +To resolve this a number of coordinating features need implementation to improve this situation. +These features will be documented _first_, and the use cases _second_ with each use case linking to +the features that satisfy it. ## Required Features to Satisfy ### Refactor of default access controls -The current default privileges will need to be refactored to improve seperation of privilege -and improved delegation of finer access rights. +The current default privileges will need to be refactored to improve seperation of privilege and +improved delegation of finer access rights. 
### Access profiles target specifiers instead of filters -Access profiles should target a list of groups for who the access profile applies to, and who recieves -the access it is granting. +Access profiles should target a list of groups for who the access profile applies to, and who +recieves the access it is granting. Alternately an access profile could target "self" so that self-update rules can still be expressed. -An access profile could target an oauth2 definition for the purpose of allowing reads to members -of a set of scopes that can access the service. +An access profile could target an oauth2 definition for the purpose of allowing reads to members of +a set of scopes that can access the service. -The access profile receiver would be group based only. This allows specifying that "X group of members -can write self" meaning that any member of that group can write to themself and only themself. +The access profile receiver would be group based only. This allows specifying that "X group of +members can write self" meaning that any member of that group can write to themself and only +themself. -In the future we could also create different target/receiver specifiers to allow other extended management -and delegation scenarioes. This improves the situation making things more flexible from the current -filter system. It also may allow filters to be simplified to remove the SELF uuid resolve step in some cases. +In the future we could also create different target/receiver specifiers to allow other extended +management and delegation scenarioes. This improves the situation making things more flexible from +the current filter system. It also may allow filters to be simplified to remove the SELF uuid +resolve step in some cases. ### Filter based groups -These are groups who's members are dynamicly allocated based on a filter query. This allows a similar -level of dynamic group management as we have currently with access profiles, but with the additional -ability for them to be used outside of the access control context. This is the "bridge" allowing us to -move from filter based access controls to "group" targetted. +These are groups who's members are dynamicly allocated based on a filter query. This allows a +similar level of dynamic group management as we have currently with access profiles, but with the +additional ability for them to be used outside of the access control context. This is the "bridge" +allowing us to move from filter based access controls to "group" targetted. -A risk of filter based groups is "infinite churn" because of recursion. This can occur if you -had a rule such a "and not memberof = self" on a dynamic group. Because of this, filters on -dynamic groups may not use "memberof" unless they are internally provided by the kanidm project so -that we can vet these rules as correct and without creating infinite recursion scenarioes. +A risk of filter based groups is "infinite churn" because of recursion. This can occur if you had a +rule such a "and not memberof = self" on a dynamic group. Because of this, filters on dynamic groups +may not use "memberof" unless they are internally provided by the kanidm project so that we can vet +these rules as correct and without creating infinite recursion scenarioes. ### Access rules extracted to ACI entries on targets The access control profiles are an excellent way to administer access where you can specific whom has access to what, but it makes it harder for the reverse query which is "who has access to this -specific entity". 
Since this is needed for both search and auditing, by specifying our access profiles -in the current manner, but using them to generate ACE rules on the target entry will allow the search -and audit paths to answer the question of "who has access to this entity" much faster. +specific entity". Since this is needed for both search and auditing, by specifying our access +profiles in the current manner, but using them to generate ACE rules on the target entry will allow +the search and audit paths to answer the question of "who has access to this entity" much faster. ### Sudo Mode -A flag should exist on a session defining "sudo" mode which requires a special account policy membership -OR a re-authentication to enable. This sudo flag is a time window on a session token which can -allow/disallow certain behaviours. It would be necessary for all write paths to have access to this -value. +A flag should exist on a session defining "sudo" mode which requires a special account policy +membership OR a re-authentication to enable. This sudo flag is a time window on a session token +which can allow/disallow certain behaviours. It would be necessary for all write paths to have +access to this value. ### Account Policy @@ -84,13 +85,14 @@ mode and this enforces rules on session expiry. ### Default Roles / Seperation of Privilege -By default we attempt to seperate privileges so that "no single account" has complete authority -over the system. +By default we attempt to seperate privileges so that "no single account" has complete authority over +the system. Satisfied by: -* Refactor of default access controls -* Filter based groups -* Sudo Mode + +- Refactor of default access controls +- Filter based groups +- Sudo Mode #### System Admin @@ -99,39 +101,39 @@ users or accounts. The "admins" role is responsible to manage: -* The name of the domain -* Configuration of the servers and replication -* Management of external integrations (oauth2) +- The name of the domain +- Configuration of the servers and replication +- Management of external integrations (oauth2) #### Service Account Admin The role would be called "sa\_admins" and would be responsible for top level management of service accounts, and delegating authority for service account administration to managing users. -* Create service accounts -* Delegate service account management to owners groups -* Migrate service accounts to persons +- Create service accounts +- Delegate service account management to owners groups +- Migrate service accounts to persons The service account admin is capable of migrating service accounts to persons as it is "yielding" control of the entity, rather than an idm admin "taking" the entity which may have security impacts. #### Service Desk -This role manages a subset of persons. The helpdesk roles are precluded from modification of -"higher privilege" roles like service account, identity and system admins. This is due to potential +This role manages a subset of persons. The helpdesk roles are precluded from modification of "higher +privilege" roles like service account, identity and system admins. This is due to potential privilege escalation attacks. -* Can create credential reset links -* Can lock and unlock accounts and their expiry. +- Can create credential reset links +- Can lock and unlock accounts and their expiry. #### Idm Admin -This role manages identities, or more specifically person accounts. In addition in is a -"high privilege" service desk role and can manage high privilege users as well. 
+This role manages identities, or more specifically person accounts. In addition in is a "high +privilege" service desk role and can manage high privilege users as well. -* Create persons -* Modify and manage persons -* All roles of service desk for all persons +- Create persons +- Modify and manage persons +- All roles of service desk for all persons ### Self Write / Write Privilege @@ -146,19 +148,19 @@ authentication sessions as a result of this. Satisfied by: -* Access profiles target specifiers instead of filters -* Sudo Mode +- Access profiles target specifiers instead of filters +- Sudo Mode ### Oauth2 Service Read (Nice to Have) -For ux/ui integration, being able to list oauth2 applications that are accessible to the user -would be a good feature. To limit "who" can see the oauth2 applications that an account can access -a way to "allow read" but by proxy of the related users of the oauth2 service. This will require -access controls to be able to interept the oauth2 config and provide rights based on that. +For ux/ui integration, being able to list oauth2 applications that are accessible to the user would +be a good feature. To limit "who" can see the oauth2 applications that an account can access a way +to "allow read" but by proxy of the related users of the oauth2 service. This will require access +controls to be able to interept the oauth2 config and provide rights based on that. Satisfied by: -* Access profiles target specifiers instead of filters +- Access profiles target specifiers instead of filters ### Administration @@ -166,9 +168,9 @@ Access controls should be easier to manage and administer, and should be group b filter based. This will make it easier for administrators to create and define their own access rules. -* Refactor of default access controls -* Access profiles target specifiers instead of filters -* Filter based groups +- Refactor of default access controls +- Access profiles target specifiers instead of filters +- Filter based groups ### Service Account Access @@ -176,17 +178,15 @@ Service accounts should be able to be "delegated" administration, where a group a service account. This should not require administrators to create unique access controls for each service account, but a method to allow mapping of the service account to "who manages it". -* Sudo Mode -* Account Policy -* Access profiles target specifiers instead of filters -* Refactor of default access controls +- Sudo Mode +- Account Policy +- Access profiles target specifiers instead of filters +- Refactor of default access controls ### Auditing of Access It should be easier to audit whom has access to what by inspecting the entry to view what can access it. -* Access rules extracted to ACI entries on targets -* Access profiles target specifiers instead of filters - - +- Access rules extracted to ACI entries on targets +- Access profiles target specifiers instead of filters diff --git a/kanidm_book/src/developers/designs/oauth2_app_listing.md b/kanidm_book/src/developers/designs/oauth2_app_listing.md index 76dd1d4cc..3365b35a7 100644 --- a/kanidm_book/src/developers/designs/oauth2_app_listing.md +++ b/kanidm_book/src/developers/designs/oauth2_app_listing.md @@ -1,69 +1,60 @@ +# Oauth2 Application Listing -Oauth2 Application Listing -========================== +A feature of some other IDM systems is to also double as a portal to linked applications. This +allows a convinent access point for users to discover and access linked applications without having +to navigate to them manually. 
This naturally works quite well since it means that the user is +already authenticated, and the IDM becomes the single "gateway" to accessing other applications. -A feature of some other IDM systems is to also double as a portal to linked applications. This allows -a convinent access point for users to discover and access linked applications without having to -navigate to them manually. This naturally works quite well since it means that the user is already -authenticated, and the IDM becomes the single "gateway" to accessing other applications. +## How it should look -How it should look ------------------- +- The user should ONLY see a list of applications they _can_ access +- The user should see a list of applications with "friendly" display names +- The list of applications _may_ have an icon/logo +- Clicking the application should take them to the location -* The user should ONLY see a list of applications they *can* access -* The user should see a list of applications with "friendly" display names -* The list of applications *may* have an icon/logo -* Clicking the application should take them to the location +## Access Control +The current design of the oauth2 resource servers (oauth2rs) is modeled around what the oauth2 +protocol requires. This defines that in an oauth2 request, all of the requested scopes need be +granted else it can not proceed. The current design is: -Access Control --------------- - -The current design of the oauth2 resource servers (oauth2rs) is modeled around what -the oauth2 protocol requires. This defines that in an oauth2 request, all of the requested -scopes need be granted else it can not proceed. The current design is: - -* scope maps - a relation of groups to the set of scopes that they grant -* implicit scopes - a set of scopes granted to all persons +- scope maps - a relation of groups to the set of scopes that they grant +- implicit scopes - a set of scopes granted to all persons While this works well for the oauth2 authorisation design, it doesn't work well from the kanidm side -for managing *our* knowledge of who is granted access to the application. +for managing _our_ knowledge of who is granted access to the application. In order to limit who can see what applications we will need a new method to define who is allowed access to the resource server on the kanidm side, while also preserving ouath2 semantics. To fix this the current definition of scopes on oauth2 resource servers need to change. -* access scopes - a list of scopes (similar to implicit) that are used by the resource server for granting access to the resource. -* access members - a list of groups that are granted access -* supplementary scopes - definitions of scope maps that grant scopes which are not access related, but may provide extra details for the account using the resource +- access scopes - a list of scopes (similar to implicit) that are used by the resource server for + granting access to the resource. +- access members - a list of groups that are granted access +- supplementary scopes - definitions of scope maps that grant scopes which are not access related, + but may provide extra details for the account using the resource By changing to this method this removes the arbitrary implicit scope/scope map rules, and clearly -defines the set of scopes that grant access to the application, while also allow extended scopes -to be sent that can attenuate the application behaviour. 
This also allows the access members reference +defines the set of scopes that grant access to the application, while also allow extended scopes to +be sent that can attenuate the application behaviour. This also allows the access members reference to be used to generate knowledge on the kanidm side of "who can access this oauth2 resource". This can be used to limit the listed applications to these oauth2 applications. In addition we can then -use these access members to create access controls to strictly limit who can see what oauth2 applications -to the admins of oauth2 applications, and the users of them. +use these access members to create access controls to strictly limit who can see what oauth2 +applications to the admins of oauth2 applications, and the users of them. To support this, we should allow dynamic groups to be created so that the 'implicit scope' behaviour -which allow all persons to access an application can be emulated by making all persons a member of access members. +which allow all persons to access an application can be emulated by making all persons a member of +access members. Migration of the current scopes and implicit scopes is likely not possible with this change, so we -may have to delete these which will require admins to re-configure these permissions, but that is a better -option than allowing "too much" access. +may have to delete these which will require admins to re-configure these permissions, but that is a +better option than allowing "too much" access. -Display Names / Logos ---------------------- +## Display Names / Logos Display names already exist. Logos will require upload and storage. A binary type exists in the db that can be used for storing blobs, or we could store something like svg. I think it's too risky to "validate" images in these uploads, so we could just store the blob and display it? - - - - - - diff --git a/kanidm_book/src/developers/designs/rest_interface.md b/kanidm_book/src/developers/designs/rest_interface.md index 1df1b349b..f26d5c6d6 100644 --- a/kanidm_book/src/developers/designs/rest_interface.md +++ b/kanidm_book/src/developers/designs/rest_interface.md @@ -1,15 +1,13 @@ # REST Interface -{{#template - ../../templates/kani-warning.md - imagepath=../../images/ - title=Note! - text=Here begins some early notes on the REST interface - much better ones are in the repository's designs directory. -}} +{{#template\ +../../templates/kani-warning.md imagepath=../../images/ title=Note! text=Here begins some early +notes on the REST interface - much better ones are in the repository's designs directory. }} -There's an endpoint at `//routemap` (for example, https://localhost/v1/routemap) which is based on the API routes as they get instantiated. +There's an endpoint at `//routemap` (for example, https://localhost/v1/routemap) which +is based on the API routes as they get instantiated. -It's *very, very, very* early work, and should not be considered stable at all. +It's _very, very, very_ early work, and should not be considered stable at all. 
An example of some elements of the output is below: @@ -46,4 +44,4 @@ An example of some elements of the output is below: } ] } -``` \ No newline at end of file +``` diff --git a/kanidm_book/src/developers/designs/scim_migration_planning.md b/kanidm_book/src/developers/designs/scim_migration_planning.md index 724bf5562..02d29641a 100644 --- a/kanidm_book/src/developers/designs/scim_migration_planning.md +++ b/kanidm_book/src/developers/designs/scim_migration_planning.md @@ -1,24 +1,31 @@ - # Scim and Migration Tooling -We need to be able to synchronise content from other directory or identity management systems. -To do this, we need the capability to have "pluggable" synchronisation drivers. This is because -not all deployments will be able to use our generic versions, or may have customisations they -wish to perform that are unique to them. +We need to be able to synchronise content from other directory or identity management systems. To do +this, we need the capability to have "pluggable" synchronisation drivers. This is because not all +deployments will be able to use our generic versions, or may have customisations they wish to +perform that are unique to them. To achieve this we need a layer of seperation - This effectively becomes an "extract, transform, -load" process. In addition this process must be *stateful* where it can be run multiple times -or even continuously and it will bring kanidm into synchronisation. +load" process. In addition this process must be _stateful_ where it can be run multiple times or +even continuously and it will bring kanidm into synchronisation. We refer to a "synchronisation" as meaning a complete successful extract, transform and load cycle. There are three expected methods of using the synchronisation tools for Kanidm -* Kanidm as a "read only" portal allowing access to it's specific features and integrations. This is less of a migration, and more of a way to "feed" data into Kanidm without relying on it's internal administration features. -* "Big Bang" migration. This is where all the data from another IDM is synchronised in a single execution and applications are swapped to Kanidm. This is rare in larger deployments, but may be used in smaller sites. -* Gradual migration. This is where data is synchronised to Kanidm and then both the existing IDM and Kanidm co-exist. Applications gradually migrate to Kanidm. At some point a "final" synchronisation is performed where Kanidm 'gains authority' over all identity data and the existing IDM is disabled. +- Kanidm as a "read only" portal allowing access to it's specific features and integrations. This is + less of a migration, and more of a way to "feed" data into Kanidm without relying on it's internal + administration features. +- "Big Bang" migration. This is where all the data from another IDM is synchronised in a single + execution and applications are swapped to Kanidm. This is rare in larger deployments, but may be + used in smaller sites. +- Gradual migration. This is where data is synchronised to Kanidm and then both the existing IDM and + Kanidm co-exist. Applications gradually migrate to Kanidm. At some point a "final" synchronisation + is performed where Kanidm 'gains authority' over all identity data and the existing IDM is + disabled. -In these processes there may be a need to "reset" the synchronsied data. The diagram below shows the possible work flows which account for the above. +In these processes there may be a need to "reset" the synchronsied data. 
The diagram below shows the +possible work flows which account for the above. ┏━━━━━━━━━━━━━━━━━┓ ┃ ┃ @@ -45,9 +52,10 @@ In these processes there may be a need to "reset" the synchronsied data. The dia Kanidm starts in a "detached" state from the extern IDM source. -For Kanidm as a "read only" application source the Initial synchronisation is performed followed by periodic -active (partial) synchronisations. At anytime a full initial synchronisation can re-occur to reset the data of the -provider. The provider can be reset and removed by a purge which reset's Kanidm to a detached state. +For Kanidm as a "read only" application source the Initial synchronisation is performed followed by +periodic active (partial) synchronisations. At anytime a full initial synchronisation can re-occur +to reset the data of the provider. The provider can be reset and removed by a purge which reset's +Kanidm to a detached state. For a gradual migration, this process is the same as the read only application. However when ready to perform the final cut over a final synchronisation is performed, which retains the data of the @@ -61,43 +69,43 @@ step required, where all data is loaded and then immediately granted authority t ### Extract -First a user must be able to retrieve their data from their supplying IDM source. Initially -we will target LDAP and systems with LDAP interfaces, but in the future there is no barrier -to supporting other transports. +First a user must be able to retrieve their data from their supplying IDM source. Initially we will +target LDAP and systems with LDAP interfaces, but in the future there is no barrier to supporting +other transports. To achieve this, we initially provide synchronisation primitives in the [ldap3 crate](https://github.com/kanidm/ldap3). ### Transform -This process will be custom developed by the user, or may have a generic driver that we provide. -Our generic tools may provide attribute mapping abilitys so that we can allow some limited +This process will be custom developed by the user, or may have a generic driver that we provide. Our +generic tools may provide attribute mapping abilitys so that we can allow some limited customisation. ### Load -Finally to load the data into Kanidm, we will make a SCIM interface available. SCIM is a -"spiritual successor" to LDAP, and aligns with Kani's design. SCIM allows structured data -to be uploaded (unlike LDAP which is simply strings). Because of this SCIM will allow us to -expose more complex types that previously we have not been able to provide. +Finally to load the data into Kanidm, we will make a SCIM interface available. SCIM is a "spiritual +successor" to LDAP, and aligns with Kani's design. SCIM allows structured data to be uploaded +(unlike LDAP which is simply strings). Because of this SCIM will allow us to expose more complex +types that previously we have not been able to provide. -The largest benefit to SCIM's model is it's ability to perform "batched" operations, which work -with Kanidm's transactional model to ensure that during load events, that content is always valid -and correct. +The largest benefit to SCIM's model is it's ability to perform "batched" operations, which work with +Kanidm's transactional model to ensure that during load events, that content is always valid and +correct. ## Configuring a Synchronisation Provider in Kanidm Kanidm has a strict transactional model with full ACID compliance. 
Attempting to create an external model that needs to interoperate with Kanidm's model and ensure both are compliant is fraught with -danger. As a result, Kanidm sync providers *should* be stateless, acting only as an ETL bridge. +danger. As a result, Kanidm sync providers _should_ be stateless, acting only as an ETL bridge. Additionally syncproviders need permissions to access and write to content in Kanidm, so it also necessitates Kanidm being aware of the sync relationship. For this reason a syncprovider is a derivative of a service account, which also allows storage of -the *state* of the synchronisation operation. An example of this is that LDAP syncrepl provides a -cookie defining the "state" of what has been "consumed up to" by the ETL bridge. During the -load phase the modified entries *and* the cookie are persisted. This means that if the operation fails +the _state_ of the synchronisation operation. An example of this is that LDAP syncrepl provides a +cookie defining the "state" of what has been "consumed up to" by the ETL bridge. During the load +phase the modified entries _and_ the cookie are persisted. This means that if the operation fails the cookie also rolls back allowing a retry of the sync. If it suceeds the next sync knows that kanidm is in the correct state. Graphically: @@ -119,16 +127,16 @@ kanidm is in the correct state. Graphically: │ │ │ │◀─────Result───────│ │ └────────────┘ └────────────┘ └────────────┘ -At any point the operation *may* fail, so by locking the state with the upload of entries this +At any point the operation _may_ fail, so by locking the state with the upload of entries this guarantees correct upload has suceeded and persisted. A success really means it! ## SCIM ### Authentication to the endpoint -This will be based on Kanidm's existing authentication infrastructure, allowing service accounts -to use bearer tokens. These tokens will internally bind that changes from the account MUST contain -the associated state identifier (cookie). +This will be based on Kanidm's existing authentication infrastructure, allowing service accounts to +use bearer tokens. These tokens will internally bind that changes from the account MUST contain the +associated state identifier (cookie). ### Batch Operations @@ -153,26 +161,27 @@ source is the authority on the information. ## Internal Batch Update Operation Phases -We have to consider in our batch updates that there are multiple stages of the update. This is because -we need to consider that at any point the lifecycle of a presented entry may change within a single -batch. Because of this, we have to treat the operation differently within kanidm to ensure a consistent outcome. +We have to consider in our batch updates that there are multiple stages of the update. This is +because we need to consider that at any point the lifecycle of a presented entry may change within a +single batch. Because of this, we have to treat the operation differently within kanidm to ensure a +consistent outcome. -Additionally we have to "fail fast". This means that on any conflict the sync will abort and the administrator -must intervene. +Additionally we have to "fail fast". This means that on any conflict the sync will abort and the +administrator must intervene. To understand why we chose this, we have to look at what happens in a "soft fail" condition. In this example we have an account named X and a group named Y. The group contains X as a member. 
When we submit this for an initial sync, or after the account X is created, if we had a "soft" fail -during the import of the account, we would reject it from being added to Kanidm but would then continue -with the synchronisation. Then the group Y would be imported. Since the member pointing to X would -not be valid, it would be silently removed. +during the import of the account, we would reject it from being added to Kanidm but would then +continue with the synchronisation. Then the group Y would be imported. Since the member pointing to +X would not be valid, it would be silently removed. -At this point we would have group Y imported, but it has no members and the account X would not -have been imported. The administrator may intervene and fix the account X to allow sync to proceed. However -this would not repair the missing group membership. To repair the group membership a change to group Y -would need to be triggered to also sync the group status. +At this point we would have group Y imported, but it has no members and the account X would not have +been imported. The administrator may intervene and fix the account X to allow sync to proceed. +However this would not repair the missing group membership. To repair the group membership a change +to group Y would need to be triggered to also sync the group status. Since the admin may not be aware of this, it would silently mean the membership is missing. @@ -182,8 +191,8 @@ group Y would sync and the membership would be intact. ### Phase 1 - Validation of Update State -In this phase we need to assert that the batch operation can proceed and is consistent with the expectations -we have of the server's state. +In this phase we need to assert that the batch operation can proceed and is consistent with the +expectations we have of the server's state. Assert the token provided is valid, and contains the correct access requirements. @@ -199,31 +208,32 @@ Retrieve the sync\_authority value from the sync entry. ### Phase 2 - Entry Location, Creation and Authority In this phase we are ensuring that all the entries within the operation are within the control of -this sync domain. We also ensure that entries we intend to act upon exist with our authority -markers such that the subsequent operations are all "modifications" rather than mixed create/modify +this sync domain. We also ensure that entries we intend to act upon exist with our authority markers +such that the subsequent operations are all "modifications" rather than mixed create/modify For each entry in the sync request, if an entry with that uuid exists retrieve it. -* If an entry exists in the database, assert that it's sync\_parent\_uuid is the same as our agreements. - * If there is no sync\_parent\_uuid or the sync\_parent\_uuid does not match, reject the operation. +- If an entry exists in the database, assert that it's sync\_parent\_uuid is the same as our + agreements. + - If there is no sync\_parent\_uuid or the sync\_parent\_uuid does not match, reject the + operation. -* If no entry exists in the database, create a "stub" entry with our sync\_parent\_uuid - * Create the entry immediately, and then retrieve it. +- If no entry exists in the database, create a "stub" entry with our sync\_parent\_uuid + - Create the entry immediately, and then retrieve it. ### Phase 3 - Entry Assertion Remove all attributes in the sync that are overlapped with our sync\_authority value. -For all uuids in the entry present set - Assert their attributes match what was synced in. 
- Resolve types that need resolving (name2uuid, externalid2uuid) +For all uuids in the entry present set Assert their attributes match what was synced in. Resolve +types that need resolving (name2uuid, externalid2uuid) Write all ### Phase 4 - Entry Removal -For all uuids in the delete\_uuids set: - if their sync\_parent\_uuid matches ours, assert they are deleted (recycled). +For all uuids in the delete\_uuids set: if their sync\_parent\_uuid matches ours, assert they are +deleted (recycled). ### Phase 5 - Commit @@ -232,14 +242,3 @@ Write the updated "state" from the request to\_state to our current state of the Write an updated "authority" value to the agreement of what attributes we can change. Commit the txn. - - - - - - - - - - - diff --git a/kanidm_book/src/developers/python.md b/kanidm_book/src/developers/python.md index 0675c4d0a..2f5424e7f 100644 --- a/kanidm_book/src/developers/python.md +++ b/kanidm_book/src/developers/python.md @@ -15,11 +15,14 @@ TODO: a lot of things. Setting up a dev environment can be a little complex because of the mono-repo. -1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and allows you to set up virtual python environments easier. -2. Build the base environment. From within the `pykanidm` directory, run: `poetry install` This'll set up a virtual environment and install all the required packages (and development-related ones) +1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and + allows you to set up virtual python environments easier. +2. Build the base environment. From within the `pykanidm` directory, run: `poetry install` This'll + set up a virtual environment and install all the required packages (and development-related ones) 3. Start editing! -Most IDEs will be happier if you open the kanidm_rlm_python or pykanidm directories as the base you are working from, rather than the kanidm repository root, so they can auto-load integrations etc. +Most IDEs will be happier if you open the kanidm_rlm_python or pykanidm directories as the base you +are working from, rather than the kanidm repository root, so they can auto-load integrations etc. ## Building the documentation diff --git a/kanidm_book/src/developers/radius.md b/kanidm_book/src/developers/radius.md index a27db36a2..0dda34c44 100644 --- a/kanidm_book/src/developers/radius.md +++ b/kanidm_book/src/developers/radius.md @@ -2,40 +2,48 @@ Setting up a dev environment has some extra complexity due to the mono-repo design. -1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and allows you to set up virtual python environments easier. +1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and + allows you to set up virtual python environments easier. 2. Build the base environment. From within the kanidm_rlm_python directory, run: `poetry install` 3. Install the `kanidm` python library: `poetry run python -m pip install ../pykanidm` 4. Start editing! -Most IDEs will be happier if you open the `kanidm_rlm_python` or `pykanidm` directories as the base you are working from, rather than the `kanidm` repository root, so they can auto-load integrations etc. +Most IDEs will be happier if you open the `kanidm_rlm_python` or `pykanidm` directories as the base +you are working from, rather than the `kanidm` repository root, so they can auto-load integrations +etc. 
## Running a test RADIUS container From the root directory of the Kanidm repository: -1. Build the container - this'll give you a container image called `kanidm/radius` with the tag `devel`: +1. Build the container - this'll give you a container image called `kanidm/radius` with the tag + `devel`: -```shell +```bash make build/radiusd ``` 2. Once the process has completed, check the container exists in your docker environment: -```shell +```bash ➜ docker image ls kanidm/radius REPOSITORY TAG IMAGE ID CREATED SIZE kanidm/radius devel 5dabe894134c About a minute ago 622MB ``` -*Note:* If you're just looking to play with a pre-built container, images are also automatically built based on the development branch and available at `ghcr.io/kanidm/radius:devel` -3. Generate some self-signed certificates by running the script - just hit enter on all the prompts if you don't want to customise them. This'll put the files in `/tmp/kanidm`: +_Note:_ If you're just looking to play with a pre-built container, images are also automatically +built based on the development branch and available at `ghcr.io/kanidm/radius:devel` -```shell +3. Generate some self-signed certificates by running the script - just hit enter on all the prompts + if you don't want to customise them. This'll put the files in `/tmp/kanidm`: + +```bash ./insecure_generate_tls.sh ``` -4. Run the container: -```shell +4. Run the container: + +```bash cd kanidm_rlm_python && ./run_radius_container.sh ``` @@ -46,7 +54,7 @@ You can pass the following environment variables to `run_radius_container.sh` to For example: -```shell +```bash IMAGE=ghcr.io/kanidm/radius:devel \ CONFIG_FILE=~/.config/kanidm \ ./run_radius_container.sh @@ -54,13 +62,14 @@ IMAGE=ghcr.io/kanidm/radius:devel \ ## Testing authentication -Authentication can be tested through the client.localhost Network Access Server (NAS) configuration with: +Authentication can be tested through the client.localhost Network Access Server (NAS) configuration +with: -```shell +```bash docker exec -i -t radiusd radtest \ badpassword \ 127.0.0.1 10 testing123 - + docker exec -i -t radiusd radtest \ \ 127.0.0.1 10 testing123 diff --git a/kanidm_book/src/domain_rename.md b/kanidm_book/src/domain_rename.md index 79688f93a..faec9c55d 100644 --- a/kanidm_book/src/domain_rename.md +++ b/kanidm_book/src/domain_rename.md @@ -1,33 +1,39 @@ # Rename the domain -There are some cases where you may need to rename the domain. You should have configured -this initially in the setup, however you may have a situation where a business is changing -name, merging, or other needs which may prompt this needing to be changed. +There are some cases where you may need to rename the domain. You should have configured this +initially in the setup, however you may have a situation where a business is changing name, merging, +or other needs which may prompt this needing to be changed. > **WARNING:** This WILL break ALL u2f/webauthn tokens that have been enrolled, which MAY cause -> accounts to be locked out and unrecoverable until further action is taken. DO NOT CHANGE -> the domain name unless REQUIRED and have a plan on how to manage these issues. +> accounts to be locked out and unrecoverable until further action is taken. DO NOT CHANGE the +> domain name unless REQUIRED and have a plan on how to manage these issues. -> **WARNING:** This operation can take an extensive amount of time as ALL accounts and groups -> in the domain MUST have their Security Principal Names (SPNs) regenerated. 
This WILL also cause -> a large delay in replication once the system is restarted. +> **WARNING:** This operation can take an extensive amount of time as ALL accounts and groups in the +> domain MUST have their Security Principal Names (SPNs) regenerated. This WILL also cause a large +> delay in replication once the system is restarted. You should make a backup before proceeding with this operation. -When you have a created a migration plan and strategy on handling the invalidation of webauthn, -you can then rename the domain. +When you have a created a migration plan and strategy on handling the invalidation of webauthn, you +can then rename the domain. First, stop the instance. - docker stop +```bash +docker stop +``` Second, change `domain` and `origin` in `server.toml`. Third, trigger the database domain rename process. - docker run --rm -i -t -v kanidmd:/data \ - kanidm/server:latest /sbin/kanidmd domain rename -c /data/server.toml +```bash +docker run --rm -i -t -v kanidmd:/data \ + kanidm/server:latest /sbin/kanidmd domain rename -c /data/server.toml +``` Finally, you can now start your instance again. - docker start +```bash +docker start +``` diff --git a/kanidm_book/src/examples/k8s_ingress_example.md b/kanidm_book/src/examples/k8s_ingress_example.md index 8b9a92751..35bfe6dd1 100644 --- a/kanidm_book/src/examples/k8s_ingress_example.md +++ b/kanidm_book/src/examples/k8s_ingress_example.md @@ -2,247 +2,259 @@ Guard your Kubernetes ingress with Kanidm authentication and authorization. - ## Prerequisites We recommend you have the following before continuing: -- [Kanidm](../installing_the_server.html) +- [Kanidm](../installing_the_server.html) - [Kubernetes v1.23 or above](https://docs.k0sproject.io/v1.23.6+k0s.2/install/) - [Nginx Ingress](https://kubernetes.github.io/ingress-nginx/deploy/) - A fully qualified domain name with an A record pointing to your k8s ingress. - [CertManager with a Cluster Issuer installed.](https://cert-manager.io/docs/installation/) - ## Instructions 1. Create a Kanidm account and group: - 1. Create a Kanidm account. Please see the section [Creating Accounts](../accounts_and_groups.md). - 1. Give the account a password. Please see the section [Resetting Account Credentials](../accounts_and_groups.md). - 2. Make the account a person. Please see the section [People Accounts](../accounts_and_groups.md). - 3. Create a Kanidm group. Please see the section [Creating Accounts](../accounts_and_groups.md). - 4. Add the account you created to the group you create. Please see the section [Creating Accounts](../accounts_and_groups.md). + 1. Create a Kanidm account. Please see the section + [Creating Accounts](../accounts_and_groups.md). + 1. Give the account a password. Please see the section + [Resetting Account Credentials](../accounts_and_groups.md). + 1. Make the account a person. Please see the section + [People Accounts](../accounts_and_groups.md). + 1. Create a Kanidm group. Please see the section [Creating Accounts](../accounts_and_groups.md). + 1. Add the account you created to the group you create. Please see the section + [Creating Accounts](../accounts_and_groups.md). 2. Create a Kanidm OAuth2 resource: - 1. Create the OAuth2 resource for your domain. Please see the section [Create the Kanidm Configuration](../integrations/oauth2.md). - 2. Add a scope mapping from the resource you created to the group you create with the openid, profile, and email scopes. Please see the section [Create the Kanidm Configuration](../integrations/oauth2.md). + 1. 
Create the OAuth2 resource for your domain. Please see the section + [Create the Kanidm Configuration](../integrations/oauth2.md). + 2. Add a scope mapping from the resource you created to the group you create with the openid, + profile, and email scopes. Please see the section + [Create the Kanidm Configuration](../integrations/oauth2.md). 3. Create a `Cookie Secret` to for the placeholder `` in step 4: - ```shell - docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode("utf-8"));' - ``` -4. Create a file called `k8s.kanidm-nginx-auth-example.yaml` with the block below. Replace every `` (drop the `<>`) with appropriate values: + ```shell + docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode("utf-8"));' + ``` +4. Create a file called `k8s.kanidm-nginx-auth-example.yaml` with the block below. Replace every + `` (drop the `<>`) with appropriate values: 1. ``: The fully qualified domain name with an A record pointing to your k8s ingress. 2. ``: The fully qualified domain name of your Kanidm deployment. 3. ``: The output from step 3. - 4. ``: Please see the output from step 2.1 or [get](../integrations/oauth2.md) the OAuth2 resource you create from that step. - 5. ``: Please see the output from step 2.1 or [get](../integrations/oauth2.md) the OAuth2 resource you create from that step. + 4. ``: Please see the output from step 2.1 or [get](../integrations/oauth2.md) + the OAuth2 resource you create from that step. + 5. ``: Please see the output from step 2.1 or + [get](../integrations/oauth2.md) the OAuth2 resource you create from that step. - This will deploy the following to your cluster: - - [modem7/docker-starwars](https://github.com/modem7/docker-starwars) - An example web site. - - [OAuth2 Proxy](https://oauth2-proxy.github.io/oauth2-proxy/) - A OAuth2 proxy is used as an OAuth2 client with NGINX [Authentication Based on Subrequest Result](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/). + This will deploy the following to your cluster: + - [modem7/docker-starwars](https://github.com/modem7/docker-starwars) - An example web site. + - [OAuth2 Proxy](https://oauth2-proxy.github.io/oauth2-proxy/) - A OAuth2 proxy is used as an + OAuth2 client with NGINX + [Authentication Based on Subrequest Result](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/). 
- ```yaml - --- - apiVersion: v1 - kind: Namespace - metadata: - name: kanidm-example - labels: - pod-security.kubernetes.io/enforce: restricted + ```yaml + --- + apiVersion: v1 + kind: Namespace + metadata: + name: kanidm-example + labels: + pod-security.kubernetes.io/enforce: restricted - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - namespace: kanidm-example - name: website - labels: - app: website - spec: - revisionHistoryLimit: 1 - replicas: 1 - selector: - matchLabels: - app: website - template: - metadata: - labels: - app: website - spec: - containers: - - name: website - image: modem7/docker-starwars - imagePullPolicy: Always - ports: - - containerPort: 8080 - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: ["ALL"] - securityContext: - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault + --- + apiVersion: apps/v1 + kind: Deployment + metadata: + namespace: kanidm-example + name: website + labels: + app: website + spec: + revisionHistoryLimit: 1 + replicas: 1 + selector: + matchLabels: + app: website + template: + metadata: + labels: + app: website + spec: + containers: + - name: website + image: modem7/docker-starwars + imagePullPolicy: Always + ports: + - containerPort: 8080 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: ["ALL"] + securityContext: + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault - --- - apiVersion: v1 - kind: Service - metadata: - namespace: kanidm-example - name: website - spec: - selector: - app: website - ports: - - protocol: TCP - port: 8080 - targetPort: 8080 + --- + apiVersion: v1 + kind: Service + metadata: + namespace: kanidm-example + name: website + spec: + selector: + app: website + ports: + - protocol: TCP + port: 8080 + targetPort: 8080 - --- - apiVersion: networking.k8s.io/v1 - kind: Ingress - metadata: - annotations: - cert-manager.io/cluster-issuer: lets-encrypt-cluster-issuer - nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth" - nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri" - name: website - namespace: kanidm-example - spec: - ingressClassName: nginx - tls: - - hosts: - - - secretName: -ingress-tls # replace . with - in the hostname - rules: - - host: - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: website - port: - number: 8080 + --- + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + annotations: + cert-manager.io/cluster-issuer: lets-encrypt-cluster-issuer + nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth" + nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri" + name: website + namespace: kanidm-example + spec: + ingressClassName: nginx + tls: + - hosts: + - + secretName: -ingress-tls # replace . 
with - in the hostname + rules: + - host: + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: website + port: + number: 8080 - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - labels: - k8s-app: oauth2-proxy - name: oauth2-proxy - namespace: kanidm-example - spec: - replicas: 1 - selector: - matchLabels: - k8s-app: oauth2-proxy - template: - metadata: - labels: - k8s-app: oauth2-proxy - spec: - containers: - - args: - - --provider=oidc - - --email-domain=* - - --upstream=file:///dev/null - - --http-address=0.0.0.0:4182 - - --oidc-issuer-url=https:///oauth2/openid/ - - --code-challenge-method=S256 - env: - - name: OAUTH2_PROXY_CLIENT_ID - value: - - name: OAUTH2_PROXY_CLIENT_SECRET - value: - - name: OAUTH2_PROXY_COOKIE_SECRET - value: - image: quay.io/oauth2-proxy/oauth2-proxy:latest - imagePullPolicy: Always - name: oauth2-proxy - ports: - - containerPort: 4182 - protocol: TCP - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: ["ALL"] - securityContext: - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - --- - apiVersion: v1 - kind: Service - metadata: - labels: - k8s-app: oauth2-proxy - name: oauth2-proxy - namespace: kanidm-example - spec: - ports: - - name: http - port: 4182 - protocol: TCP - targetPort: 4182 - selector: - k8s-app: oauth2-proxy + --- + apiVersion: apps/v1 + kind: Deployment + metadata: + labels: + k8s-app: oauth2-proxy + name: oauth2-proxy + namespace: kanidm-example + spec: + replicas: 1 + selector: + matchLabels: + k8s-app: oauth2-proxy + template: + metadata: + labels: + k8s-app: oauth2-proxy + spec: + containers: + - args: + - --provider=oidc + - --email-domain=* + - --upstream=file:///dev/null + - --http-address=0.0.0.0:4182 + - --oidc-issuer-url=https:///oauth2/openid/ + - --code-challenge-method=S256 + env: + - name: OAUTH2_PROXY_CLIENT_ID + value: + - name: OAUTH2_PROXY_CLIENT_SECRET + value: + - name: OAUTH2_PROXY_COOKIE_SECRET + value: + image: quay.io/oauth2-proxy/oauth2-proxy:latest + imagePullPolicy: Always + name: oauth2-proxy + ports: + - containerPort: 4182 + protocol: TCP + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: ["ALL"] + securityContext: + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + --- + apiVersion: v1 + kind: Service + metadata: + labels: + k8s-app: oauth2-proxy + name: oauth2-proxy + namespace: kanidm-example + spec: + ports: + - name: http + port: 4182 + protocol: TCP + targetPort: 4182 + selector: + k8s-app: oauth2-proxy - --- - apiVersion: networking.k8s.io/v1 - kind: Ingress - metadata: - name: oauth2-proxy - namespace: kanidm-example - spec: - ingressClassName: nginx - rules: - - host: - http: - paths: - - path: /oauth2 - pathType: Prefix - backend: - service: - name: oauth2-proxy - port: - number: 4182 - tls: - - hosts: - - - secretName: -ingress-tls # replace . with - in the hostname - ``` + --- + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + name: oauth2-proxy + namespace: kanidm-example + spec: + ingressClassName: nginx + rules: + - host: + http: + paths: + - path: /oauth2 + pathType: Prefix + backend: + service: + name: oauth2-proxy + port: + number: 4182 + tls: + - hosts: + - + secretName: -ingress-tls # replace . with - in the hostname + ``` 5. Apply the configuration by running the following command: - - ```shell - kubectl apply -f k8s.kanidm-nginx-auth-example.yaml - ``` + ```bash + kubectl apply -f k8s.kanidm-nginx-auth-example.yaml + ``` 6. 
Check your deployment succeeded by running the following commands: - ```shell - kubectl -n kanidm-example get all - kubectl -n kanidm-example get ingress - kubectl -n kanidm-example get Certificate - ``` + ```bash + kubectl -n kanidm-example get all + kubectl -n kanidm-example get ingress + kubectl -n kanidm-example get Certificate + ``` - You may use kubectl's describe and log for troubleshooting. If there are ingress errors see the Ingress NGINX documentation's [troubleshooting page](https://kubernetes.github.io/ingress-nginx/troubleshooting/). If there are certificate errors see the CertManger documentation's [troubleshooting page](https://cert-manager.io/docs/faq/troubleshooting/). - - Once it has finished deploying, you will be able to access it at `https://` which will prompt you for authentication. + You may use kubectl's describe and logs for troubleshooting. If there are ingress errors, see the + Ingress NGINX documentation's + [troubleshooting page](https://kubernetes.github.io/ingress-nginx/troubleshooting/). If there are + certificate errors, see the cert-manager documentation's + [troubleshooting page](https://cert-manager.io/docs/faq/troubleshooting/). + Once it has finished deploying, you will be able to access it at `https://` which will + prompt you for authentication. ## Cleaning Up 1. Remove the resources created for this example from k8s: - ```shell - kubectl delete namespace kanidm-example - ``` + ```bash + kubectl delete namespace kanidm-example + ``` 2. Remove the objects created for this example from Kanidm: 1. Delete the account created in section Instructions step 1. 2. Delete the group created in section Instructions step 2. 3. Delete the OAuth2 resource created in section Instructions step 3. - ## References 1. [NGINX Ingress Controller: External OAUTH Authentication](https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/) diff --git a/kanidm_book/src/frequently_asked_questions.md b/kanidm_book/src/frequently_asked_questions.md index da3edbbf1..5c546797e 100644 --- a/kanidm_book/src/frequently_asked_questions.md +++ b/kanidm_book/src/frequently_asked_questions.md @@ -1,11 +1,11 @@ # Frequently Asked Questions -... or ones we think people *might* ask. +... or ones we think people _might_ ask. ## Why disallow HTTP (without TLS) between my load balancer and Kanidm? -Because Kanidm is one of the keys to a secure network, and insecure connections -to them are not best practice. +Because Kanidm is one of the keys to a secure network, and insecure connections to them are not best +practice. Please refer to [Why TLS?](why_tls.md) for a longer explanation. @@ -15,11 +15,13 @@ It's [a rust thing](https://rustacean.net). ## Will you implement -insert protocol here- -Probably, on an infinite time-scale! As long as it's not Kerberos. Or involves SSL or STARTTLS. Please log an issue and start the discussion! +Probably, on an infinite time-scale! As long as it's not Kerberos. Or involves SSL or STARTTLS. +Please log an issue and start the discussion! ## Why do the crabs have knives? -Don't [ask](https://www.youtube.com/watch?v=0QaAKi0NFkA). They just [do](https://www.youtube.com/shorts/WizH5ae9ozw). +Don't [ask](https://www.youtube.com/watch?v=0QaAKi0NFkA). They just +[do](https://www.youtube.com/shorts/WizH5ae9ozw). ## Why won't you take this FAQ thing seriously?
diff --git a/kanidm_book/src/glossary.md b/kanidm_book/src/glossary.md index 3bfdea4d6..8b7611953 100644 --- a/kanidm_book/src/glossary.md +++ b/kanidm_book/src/glossary.md @@ -1,33 +1,46 @@ # Glossary -This is a glossary of terms used through out this book. While we make every effort to -explains terms and acronyms when they are used, this may be a useful reference if something -feels unknown to you. +This is a glossary of terms used throughout this book. While we make every effort to explain terms +and acronyms when they are used, this may be a useful reference if something feels unknown to you. ## Domain Names -* domain - This is the domain you "own". It is the highest level entity. An example would be `example.com` (since you do not own `.com`). -* subdomain - A subdomain is a domain name space under the domain. A subdomains of `example.com` are `a.example.com` and `b.example.com`. Each subdomain can have further subdomains. -* domain name - This is any named entity within your domain or its subdomains. This is the umbrella term, referring to all entities in the domain. `example.com`, `a.example.com`, `host.example.com` are all valid domain names with the domain `example.com`. -* origin - An origin defines a URL with a protocol scheme, optional port number and domain name components. An example is `https://host.example.com` -* effective domain - This is the extracted domain name from an origin excluding port and scheme. +- domain - This is the domain you "own". It is the highest level entity. An example would be + `example.com` (since you do not own `.com`). +- subdomain - A subdomain is a domain name space under the domain. Subdomains of `example.com` are + `a.example.com` and `b.example.com`. Each subdomain can have further subdomains. +- domain name - This is any named entity within your domain or its subdomains. This is the umbrella + term, referring to all entities in the domain. `example.com`, `a.example.com`, `host.example.com` + are all valid domain names with the domain `example.com`. +- origin - An origin defines a URL with a protocol scheme, optional port number and domain name + components. An example is `https://host.example.com`. +- effective domain - This is the extracted domain name from an origin excluding port and scheme. ## Accounts -* trust - A trust is when two Kanidm domains have a relationship to each other where accounts can be used between the domains. The domains retain their administration boundaries, but allow cross authentication. -* replication - This is the process where two or more Kanidm servers in a domain can synchronise their database content. -* UAT - User Authentication Token. This is a token issue by Kanidm to an account after it has authenticated. -* SPN - Security Principal Name. This is a name of an account comprising it's name and domain name. This allows distinction between accounts with identical names over a trust boundary +- trust - A trust is when two Kanidm domains have a relationship to each other where accounts can be + used between the domains. The domains retain their administration boundaries, but allow cross + authentication. +- replication - This is the process where two or more Kanidm servers in a domain can synchronise + their database content. +- UAT - User Authentication Token. This is a token issued by Kanidm to an account after it has + authenticated. +- SPN - Security Principal Name. This is a name of an account comprising its name and domain name.
+ This allows distinction between accounts with identical names over a trust boundary ## Internals -* entity, object, entry - Any item in the database. Generally these terms are interchangeable, but internally they are referred to as Entry. -* account - An entry that may authenticate to the server, generally allowing extended permissions and actions to be undertaken. +- entity, object, entry - Any item in the database. Generally these terms are interchangeable, but + internally they are referred to as Entry. +- account - An entry that may authenticate to the server, generally allowing extended permissions + and actions to be undertaken. ### Access Control -* privilege - An expression of what actions an account may perform if granted -* target - The entries that will be affected by a privilege -* receiver - The entries that will be able to use a privilege -* acp - an Access Control Profile which defines a set of privileges that are granted to receivers to affect target entries. -* role - A term used to express a group that is the receiver of an access control profile allowing it's members to affect the target entries. +- privilege - An expression of what actions an account may perform if granted +- target - The entries that will be affected by a privilege +- receiver - The entries that will be able to use a privilege +- acp - an Access Control Profile which defines a set of privileges that are granted to receivers to + affect target entries. +- role - A term used to express a group that is the receiver of an access control profile allowing + it's members to affect the target entries. diff --git a/kanidm_book/src/installing_client_tools.md b/kanidm_book/src/installing_client_tools.md index 765752c73..cc1874bf7 100644 --- a/kanidm_book/src/installing_client_tools.md +++ b/kanidm_book/src/installing_client_tools.md @@ -1,50 +1,56 @@ # Installing Client Tools -> **NOTE** As this project is in a rapid development phase, running different -release versions will likely present incompatibilities. Ensure you're running -matching release versions of client and server binaries. If you have any issues, -check that you are running the latest software. +> **NOTE** As this project is in a rapid development phase, running different release versions will +> likely present incompatibilities. Ensure you're running matching release versions of client and +> server binaries. If you have any issues, check that you are running the latest software. ## From packages Kanidm currently is packaged for the following systems: - * OpenSUSE Tumbleweed - * OpenSUSE Leap 15.3/15.4 - * MacOS - * Arch Linux - * NixOS - * Fedora 36 - * CentOS Stream 9 +- OpenSUSE Tumbleweed +- OpenSUSE Leap 15.3/15.4 +- MacOS +- Arch Linux +- NixOS +- Fedora 36 +- CentOS Stream 9 The `kanidm` client has been built and tested from Windows, but is not (yet) packaged routinely. ### OpenSUSE Tumbleweed -Kanidm has been part of OpenSUSE Tumbleweed since October 2020. You can install -the clients with: +Kanidm has been part of OpenSUSE Tumbleweed since October 2020. You can install the clients with: - zypper ref - zypper in kanidm-clients +```bash +zypper ref +zypper in kanidm-clients +``` ### OpenSUSE Leap 15.3/15.4 Using zypper you can add the Kanidm leap repository with: - zypper ar -f obs://network:idm network_idm +```bash +zypper ar -f obs://network:idm network_idm +``` Then you need to refresh your metadata and install the clients. 
- zypper ref - zypper in kanidm-clients +```bash +zypper ref +zypper in kanidm-clients +``` ### MacOS - Brew [Homebrew](https://brew.sh/) allows addition of third party repositories for installing tools. On MacOS you can use this to install the Kanidm tools. - brew tap kanidm/kanidm - brew install kanidm +```bash +brew tap kanidm/kanidm +brew install kanidm +``` ### Arch Linux @@ -56,60 +62,69 @@ MacOS you can use this to install the Kanidm tools. ### Fedora / Centos Stream -{{#template - templates/kani-warning.md - imagepath=images - title=Take Note! - text=Kanidm frequently uses new Rust versions and features, however Fedora and Centos frequently are behind in Rust releases. As a result, they may not always have the latest Kanidm versions available. -}} +{{#template templates/kani-warning.md imagepath=images title=Take Note! text=Kanidm frequently uses +new Rust versions and features, however Fedora and Centos frequently are behind in Rust releases. As +a result, they may not always have the latest Kanidm versions available. }} Fedora has limited support through the development repository. You need to add the repository metadata into the correct directory: - # Fedora - wget https://download.opensuse.org/repositories/network:/idm/Fedora_36/network:idm.repo - # Centos Stream 9 - wget https://download.opensuse.org/repositories/network:/idm/CentOS_9_Stream/network:idm.repo +```bash +# Fedora +wget https://download.opensuse.org/repositories/network:/idm/Fedora_36/network:idm.repo +# Centos Stream 9 +wget https://download.opensuse.org/repositories/network:/idm/CentOS_9_Stream/network:idm.repo +``` You can then install with: - dnf install kanidm-clients +```bash +dnf install kanidm-clients +``` ## Cargo -The tools are available as a cargo download if you have a rust tool chain available. To install -rust you should follow the documentation for [rustup](https://rustup.rs/). These will be installed -into your home directory. To update these, re-run the install command with the new version. +The tools are available as a cargo download if you have a rust tool chain available. To install rust +you should follow the documentation for [rustup](https://rustup.rs/). These will be installed into +your home directory. To update these, re-run the install command with the new version. - cargo install --version 1.1.0-alpha.10 kanidm_tools +```bash +cargo install --version 1.1.0-alpha.10 kanidm_tools +``` ## Tools Container In some cases if your distribution does not have native kanidm-client support, and you can't access cargo for the install for some reason, you can use the cli tools from a docker container instead. - docker pull kanidm/tools:latest - docker run --rm -i -t \ - -v /etc/kanidm/config:/etc/kanidm/config:ro \ - -v ~/.config/kanidm:/home/kanidm/.config/kanidm:ro \ - -v ~/.cache/kanidm_tokens:/home/kanidm/.cache/kanidm_tokens \ - kanidm/tools:latest \ - /sbin/kanidm --help +```bash +docker pull kanidm/tools:latest +docker run --rm -i -t \ + -v /etc/kanidm/config:/etc/kanidm/config:ro \ + -v ~/.config/kanidm:/home/kanidm/.config/kanidm:ro \ + -v ~/.cache/kanidm_tokens:/home/kanidm/.cache/kanidm_tokens \ + kanidm/tools:latest \ + /sbin/kanidm --help +``` If you have a ca.pem you may need to bind mount this in as required. > **TIP** You can alias the docker run command to make the tools easier to access such as: - alias kanidm="docker run ..." +```bash +alias kanidm="docker run ..." +``` ## Checking that the tools work -Now you can check your instance is working. 
You may need to provide a CA certificate for verification -with the -C parameter: +Now you can check your instance is working. You may need to provide a CA certificate for +verification with the -C parameter: - kanidm login --name anonymous - kanidm self whoami -H https://localhost:8443 --name anonymous - kanidm self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous +```bash +kanidm login --name anonymous +kanidm self whoami -H https://localhost:8443 --name anonymous +kanidm self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous +``` Now you can take some time to look at what commands are available - please [ask for help at any time](https://github.com/kanidm/kanidm#getting-in-contact--questions). diff --git a/kanidm_book/src/installing_the_server.md b/kanidm_book/src/installing_the_server.md index 9a3f13b83..76050430e 100644 --- a/kanidm_book/src/installing_the_server.md +++ b/kanidm_book/src/installing_the_server.md @@ -1,6 +1,3 @@ - # Installing the Server This chapter will describe how to plan, configure, deploy and update your Kanidm instances. - - diff --git a/kanidm_book/src/integrations/ldap.md b/kanidm_book/src/integrations/ldap.md index 12a3ba456..d77577283 100644 --- a/kanidm_book/src/integrations/ldap.md +++ b/kanidm_book/src/integrations/ldap.md @@ -1,66 +1,58 @@ # LDAP -While many applications can support external authentication and identity services through -Oauth2, not all services can. -Lightweight Directory Access Protocol (LDAP) has been the "lingua franca" of -authentication for many years, with almost every application in the world being -able to search and bind to LDAP. As many organisations still rely on LDAP, Kanidm -can host a read-only LDAP interface for these legacy applications. - -{{#template - ../templates/kani-warning.md - imagepath=../images - title=Warning! - text=The LDAP server in Kanidm is not a fully RFC-compliant LDAP server. This is intentional, as Kanidm wants to cover the common use cases - simple bind and search. -}} +While many applications can support external authentication and identity services through Oauth2, +not all services can. Lightweight Directory Access Protocol (LDAP) has been the "lingua franca" of +authentication for many years, with almost every application in the world being able to search and +bind to LDAP. As many organisations still rely on LDAP, Kanidm can host a read-only LDAP interface +for these legacy applications. +{{#template\ +../templates/kani-warning.md imagepath=../images title=Warning! text=The LDAP server in Kanidm is +not a fully RFC-compliant LDAP server. This is intentional, as Kanidm wants to cover the common use +cases - simple bind and search. }} ## What is LDAP -LDAP is a protocol to read data from a directory of information. It is not -a server, but a way to communicate to a server. There are many famous LDAP -implementations such as Active Directory, 389 Directory Server, DSEE, -FreeIPA, and many others. Because it is a standard, applications can use -an LDAP client library to authenticate users to LDAP, given "one account" for -many applications - an IDM just like Kanidm! +LDAP is a protocol to read data from a directory of information. It is not a server, but a way to +communicate to a server. There are many famous LDAP implementations such as Active Directory, 389 +Directory Server, DSEE, FreeIPA, and many others. 
Because it is a standard, applications can use an +LDAP client library to authenticate users to LDAP, given "one account" for many applications - an +IDM just like Kanidm! ## Data Mapping -Kanidm cannot be mapped 100% to LDAP's objects. This is because LDAP -types are simple key-values on objects which are all UTF8 strings (or subsets -thereof) based on validation (matching) rules. Kanidm internally implements complex -data types such as tagging on SSH keys, or multi-value credentials. These can not -be represented in LDAP. +Kanidm cannot be mapped 100% to LDAP's objects. This is because LDAP types are simple key-values on +objects which are all UTF8 strings (or subsets thereof) based on validation (matching) rules. Kanidm +internally implements complex data types such as tagging on SSH keys, or multi-value credentials. +These can not be represented in LDAP. -Many of the structures in Kanidm do not correlate closely to LDAP. For example -Kanidm only has a GID number, where LDAP's schemas define both a UID number and a -GID number. +Many of the structures in Kanidm do not correlate closely to LDAP. For example Kanidm only has a GID +number, where LDAP's schemas define both a UID number and a GID number. -Entries in the database also have a specific name in LDAP, related to their path -in the directory tree. Kanidm is a flat model, so we have to emulate some tree-like -elements, and ignore others. +Entries in the database also have a specific name in LDAP, related to their path in the directory +tree. Kanidm is a flat model, so we have to emulate some tree-like elements, and ignore others. For this reason, when you search the LDAP interface, Kanidm will make some mapping decisions. -* The Kanidm domain name is used to generate the DN of the suffix. -* The domain\_info object becomes the suffix root. -* All other entries are direct subordinates of the domain\_info for DN purposes. -* Distinguished Names (DNs) are generated from the spn, name, or uuid attribute. -* Bind DNs can be remapped and rewritten, and may not even be a DN during bind. -* The '\*' and '+' operators can not be used in conjuction with attribute lists in searches. +- The Kanidm domain name is used to generate the DN of the suffix. +- The domain\_info object becomes the suffix root. +- All other entries are direct subordinates of the domain\_info for DN purposes. +- Distinguished Names (DNs) are generated from the spn, name, or uuid attribute. +- Bind DNs can be remapped and rewritten, and may not even be a DN during bind. +- The '\*' and '+' operators can not be used in conjuction with attribute lists in searches. -These decisions were made to make the path as simple and effective as possible, -relying more on the Kanidm query and filter system than attempting to generate a tree-like -representation of data. As almost all clients can use filters for entry selection -we don't believe this is a limitation for the consuming applications. +These decisions were made to make the path as simple and effective as possible, relying more on the +Kanidm query and filter system than attempting to generate a tree-like representation of data. As +almost all clients can use filters for entry selection we don't believe this is a limitation for the +consuming applications. ## Security ### TLS -StartTLS is not supported due to security risks. LDAPS is the only secure method -of communicating to any LDAP server. Kanidm, when configured with certificates, will -use them for LDAPS (and will not listen on a plaintext LDAP port). 
+StartTLS is not supported due to security risks. LDAPS is the only secure method of communicating to +any LDAP server. Kanidm, when configured with certificates, will use them for LDAPS (and will not +listen on a plaintext LDAP port). ### Writes @@ -69,60 +61,67 @@ contains. As a result, writes are rejected for all users via the LDAP interface. ### Access Controls -LDAP only supports password authentication. As LDAP is used heavily in POSIX environments -the LDAP bind for any DN will use its configured posix password. +LDAP only supports password authentication. As LDAP is used heavily in POSIX environments the LDAP +bind for any DN will use its configured posix password. -As the POSIX password is not equivalent in strength to the primary credentials of Kanidm -(which may be multi-factor authentication, MFA), the LDAP bind does not grant -rights to elevated read permissions. All binds have the permissions of "Anonymous" -even if the anonymous account is locked. +As the POSIX password is not equivalent in strength to the primary credentials of Kanidm (which may +be multi-factor authentication, MFA), the LDAP bind does not grant rights to elevated read +permissions. All binds have the permissions of "Anonymous" even if the anonymous account is locked. -The exception is service accounts which can use api-tokens during an LDAP bind for elevated -read permissions. +The exception is service accounts which can use api-tokens during an LDAP bind for elevated read +permissions. ## Server Configuration To configure Kanidm to provide LDAP, add the argument to the `server.toml` configuration: - ldapbindaddress = "127.0.0.1:3636" +```toml +ldapbindaddress = "127.0.0.1:3636" +``` -You should configure TLS certificates and keys as usual - LDAP will re-use the Web -server TLS material. +You should configure TLS certificates and keys as usual - LDAP will re-use the Web server TLS +material. ## Showing LDAP Entries and Attribute Maps -By default Kanidm is limited in what attributes are generated or remapped into -LDAP entries. However, the server internally contains a map of extended attribute -mappings for application specific requests that must be satisfied. +By default Kanidm is limited in what attributes are generated or remapped into LDAP entries. +However, the server internally contains a map of extended attribute mappings for application +specific requests that must be satisfied. An example is that some applications expect and require a 'CN' value, even though Kanidm does not -provide it. If the application is unable to be configured to accept "name" it may be necessary -to use Kanidm's mapping feature. Currently these are compiled into the server, so you may need to open +provide it. If the application is unable to be configured to accept "name" it may be necessary to +use Kanidm's mapping feature. Currently these are compiled into the server, so you may need to open an issue with your requirements for attribute maps. To show what attribute maps exists for an entry you can use the attribute search term '+'. - # To show Kanidm attributes - ldapsearch ... -x '(name=admin)' '*' - # To show all attribute maps - ldapsearch ... -x '(name=admin)' '+' +```bash +# To show Kanidm attributes +ldapsearch ... -x '(name=admin)' '*' +# To show all attribute maps +ldapsearch ... -x '(name=admin)' '+' +``` Attributes that are in the map can be requested explicitly, and this can be combined with requesting Kanidm native attributes. - ldapsearch ... 
-x '(name=admin)' cn objectClass displayname memberof +```bash +ldapsearch ... -x '(name=admin)' cn objectClass displayname memberof +``` ## Service Accounts -If you have [issued api tokens for a service account](../accounts_and_groups.html#using-api-tokens-with-service-accounts) +If you have +[issued api tokens for a service account](../accounts_and_groups.html#using-api-tokens-with-service-accounts) they can be used to gain extended read permissions for those service accounts. Api tokens can also be used to gain extended search permissions with LDAP. To do this you can bind with a dn of `dn=token` and provide the api token in the password. -> **NOTE** The `dn=token` keyword is guaranteed to not be used by any other entry, which is why it was chosen as the keyword to initiate api token binds. +> **NOTE** The `dn=token` keyword is guaranteed to not be used by any other entry, which is why it +> was chosen as the keyword to initiate api token binds. -```shell +```bash ldapwhoami -H ldaps://URL -x -D "dn=token" -w "TOKEN" ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..." # u: demo_service@idm.example.com @@ -130,62 +129,72 @@ ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..." ## Example -Given a default install with domain "example.com" the configured LDAP DN will be "dc=example,dc=com". +Given a default install with domain "example.com" the configured LDAP DN will be +"dc=example,dc=com". - # from server.toml - ldapbindaddress = "[::]:3636" +```toml +# from server.toml +ldapbindaddress = "[::]:3636" +``` This can be queried with: - LDAPTLS_CACERT=ca.pem ldapsearch \ - -H ldaps://127.0.0.1:3636 \ - -b 'dc=example,dc=com' \ - -x '(name=test1)' +```bash +LDAPTLS_CACERT=ca.pem ldapsearch \ + -H ldaps://127.0.0.1:3636 \ + -b 'dc=example,dc=com' \ + -x '(name=test1)' - # test1@example.com, example.com - dn: spn=test1@example.com,dc=example,dc=com - objectclass: account - objectclass: memberof - objectclass: object - objectclass: person - displayname: Test User - memberof: spn=group240@example.com,dc=example,dc=com - name: test1 - spn: test1@example.com - entryuuid: 22a65b6c-80c8-4e1a-9b76-3f3afdff8400 +# test1@example.com, example.com +dn: spn=test1@example.com,dc=example,dc=com +objectclass: account +objectclass: memberof +objectclass: object +objectclass: person +displayname: Test User +memberof: spn=group240@example.com,dc=example,dc=com +name: test1 +spn: test1@example.com +entryuuid: 22a65b6c-80c8-4e1a-9b76-3f3afdff8400 +``` -It is recommended that client applications filter accounts that can login with `(class=account)` -and groups with `(class=group)`. If possible, group membership is defined in RFC2307bis or -Active Directory style. This means groups are determined from the "memberof" attribute which -contains a DN to a group. +It is recommended that client applications filter accounts that can login with `(class=account)` and +groups with `(class=group)`. If possible, group membership is defined in RFC2307bis or Active +Directory style. This means groups are determined from the "memberof" attribute which contains a DN +to a group. LDAP binds can use any unique identifier of the account. The following are all valid bind DNs for the object listed above (if it was a POSIX account, that is). - ldapwhoami ... -x -D 'name=test1' - ldapwhoami ... -x -D 'spn=test1@example.com' - ldapwhoami ... -x -D 'test1@example.com' - ldapwhoami ... -x -D 'test1' - ldapwhoami ... -x -D '22a65b6c-80c8-4e1a-9b76-3f3afdff8400' - ldapwhoami ... 
-x -D 'spn=test1@example.com,dc=example,dc=com' - ldapwhoami ... -x -D 'name=test1,dc=example,dc=com' +```bash +ldapwhoami ... -x -D 'name=test1' +ldapwhoami ... -x -D 'spn=test1@example.com' +ldapwhoami ... -x -D 'test1@example.com' +ldapwhoami ... -x -D 'test1' +ldapwhoami ... -x -D '22a65b6c-80c8-4e1a-9b76-3f3afdff8400' +ldapwhoami ... -x -D 'spn=test1@example.com,dc=example,dc=com' +ldapwhoami ... -x -D 'name=test1,dc=example,dc=com' +``` -Most LDAP clients are very picky about TLS, and can be very hard to debug or display errors. -For example these commands: +Most LDAP clients are very picky about TLS, and can be very hard to debug or display errors. For +example these commands: - ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)' - ldapsearch -H ldap://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)' - ldapsearch -H ldap://127.0.0.1:3389 -b 'dc=example,dc=com' -x '(name=test1)' +```bash +ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)' +ldapsearch -H ldap://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)' +ldapsearch -H ldap://127.0.0.1:3389 -b 'dc=example,dc=com' -x '(name=test1)' +``` All give the same error: - ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) +``` +ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) +``` This is despite the fact: -* The first command is a certificate validation error. -* The second is a missing LDAPS on a TLS port. -* The third is an incorrect port. +- The first command is a certificate validation error. +- The second is a missing LDAPS on a TLS port. +- The third is an incorrect port. To diagnose errors like this, you may need to add "-d 1" to your LDAP commands or client. - diff --git a/kanidm_book/src/integrations/oauth2.md b/kanidm_book/src/integrations/oauth2.md index 02dcba875..5a1504f2c 100644 --- a/kanidm_book/src/integrations/oauth2.md +++ b/kanidm_book/src/integrations/oauth2.md @@ -1,107 +1,106 @@ # OAuth2 -OAuth is a web authorisation protocol that allows "single sign on". It's key to note -OAuth only provides authorisation, as the protocol in its default forms -do not provide identity or authentication information. All that Oauth2 provides is -information that an entity is authorised for the requested resources. +OAuth is a web authorisation protocol that allows "single sign on". It's key to note OAuth only +provides authorisation, as the protocol in its default forms do not provide identity or +authentication information. All that Oauth2 provides is information that an entity is authorised for +the requested resources. -OAuth can tie into extensions allowing an identity provider to reveal information -about authorised sessions. This extends OAuth from an authorisation only system -to a system capable of identity and authorisation. Two primary methods of this -exist today: RFC7662 token introspection, and OpenID connect. +OAuth can tie into extensions allowing an identity provider to reveal information about authorised +sessions. This extends OAuth from an authorisation only system to a system capable of identity and +authorisation. Two primary methods of this exist today: RFC7662 token introspection, and OpenID +connect. ## How Does OAuth2 Work? -A user wishes to access a service (resource, resource server). 
The resource -server does not have an active session for the client, so it redirects to the -authorisation server (Kanidm) to determine if the client should be allowed to proceed, and -has the appropriate permissions (scopes) for the requested resources. +A user wishes to access a service (resource, resource server). The resource server does not have an +active session for the client, so it redirects to the authorisation server (Kanidm) to determine if +the client should be allowed to proceed, and has the appropriate permissions (scopes) for the +requested resources. -The authorisation server checks the current session of the user and may present -a login flow if required. Given the identity of the user known to the authorisation -sever, and the requested scopes, the authorisation server makes a decision if it -allows the authorisation to proceed. The user is then prompted to consent to the -authorisation from the authorisation server to the resource server as some identity -information may be revealed by granting this consent. +The authorisation server checks the current session of the user and may present a login flow if +required. Given the identity of the user known to the authorisation server, and the requested scopes, +the authorisation server makes a decision if it allows the authorisation to proceed. The user is +then prompted to consent to the authorisation from the authorisation server to the resource server +as some identity information may be revealed by granting this consent. -If successful and consent given, the user is redirected back to the resource server with an +If successful and consent given, the user is redirected back to the resource server with an authorisation code. The resource server then contacts the authorisation server directly with this code and exchanges it for a valid token that may be provided to the user's browser. -The resource server may then optionally contact the token introspection endpoint of the -authorisation server about the provided OAuth token, which yields extra metadata about the identity -that holds the token from the authorisation. This metadata may include identity information, -but also may include extended metadata, sometimes refered to as "claims". Claims are -information bound to a token based on properties of the session that may allow -the resource server to make extended authorisation decisions without the need -to contact the authorisation server to arbitrate. +The resource server may then optionally contact the token introspection endpoint of the +authorisation server about the provided OAuth token, which yields extra metadata about the identity +that holds the token from the authorisation. This metadata may include identity information, but +also may include extended metadata, sometimes referred to as "claims". Claims are information bound +to a token based on properties of the session that may allow the resource server to make extended +authorisation decisions without the need to contact the authorisation server to arbitrate. It's important to note that OAuth2 at its core is an authorisation system which has layered identity-providing elements on top. ### Resource Server -This is the server that a user wants to access. Common examples could be Nextcloud, a wiki, -or something else. This is the system that "needs protecting" and wants to delegate authorisation +This is the server that a user wants to access. Common examples could be Nextcloud, a wiki, or +something else.
This is the system that "needs protecting" and wants to delegate authorisation decisions to Kanidm. -It's important for you to know *how* your resource server supports OAuth2. For example, does it +It's important for you to know _how_ your resource server supports OAuth2. For example, does it support RFC 7662 token introspection or does it rely on OpenID connect for identity information? Does the resource server support PKCE S256? In general Kanidm requires that your resource server supports: -* HTTP basic authentication to the authorisation server -* PKCE S256 code verification to prevent certain token attack classes -* OIDC only - JWT ES256 for token signatures +- HTTP basic authentication to the authorisation server +- PKCE S256 code verification to prevent certain token attack classes +- OIDC only - JWT ES256 for token signatures Kanidm will expose its OAuth2 APIs at the following URLs: -* user auth url: `https://idm.example.com/ui/oauth2` -* api auth url: `https://idm.example.com/oauth2/authorise` -* token url: `https://idm.example.com/oauth2/token` -* rfc7662 token introspection url: `https://idm.example.com/oauth2/token/introspect` -* rfc7009 token revoke url: `https://idm.example.com/oauth2/token/revoke` +- user auth url: `https://idm.example.com/ui/oauth2` +- api auth url: `https://idm.example.com/oauth2/authorise` +- token url: `https://idm.example.com/oauth2/token` +- rfc7662 token introspection url: `https://idm.example.com/oauth2/token/introspect` +- rfc7009 token revoke url: `https://idm.example.com/oauth2/token/revoke` -OpenID Connect discovery - you need to substitute your OAuth2 client id in the following -urls: +OpenID Connect discovery - you need to substitute your OAuth2 client id in the following urls: -* OpenID connect issuer uri: `https://idm.example.com/oauth2/openid/:client\_id:/` -* OpenID connect discovery: `https://idm.example.com/oauth2/openid/:client\_id:/.well-known/openid-configuration` +- OpenID connect issuer uri: `https://idm.example.com/oauth2/openid/:client\_id:/` +- OpenID connect discovery: + `https://idm.example.com/oauth2/openid/:client\_id:/.well-known/openid-configuration` For manual OpenID configuration: -* OpenID connect userinfo: `https://idm.example.com/oauth2/openid/:client\_id:/userinfo` -* token signing public key: `https://idm.example.com/oauth2/openid/:client\_id:/public\_key.jwk` +- OpenID connect userinfo: `https://idm.example.com/oauth2/openid/:client\_id:/userinfo` +- token signing public key: `https://idm.example.com/oauth2/openid/:client\_id:/public\_key.jwk` ### Scope Relationships -For an authorisation to proceed, the resource server will request a list of scopes, which are -unique to that resource server. For example, when a user wishes to login to the admin panel -of the resource server, it may request the "admin" scope from Kanidm for authorisation. But when -a user wants to login, it may only request "access" as a scope from Kanidm. +For an authorisation to proceed, the resource server will request a list of scopes, which are unique +to that resource server. For example, when a user wishes to login to the admin panel of the resource +server, it may request the "admin" scope from Kanidm for authorisation. But when a user wants to +login, it may only request "access" as a scope from Kanidm. -As each resource server may have its own scopes and understanding of these, Kanidm isolates -scopes to each resource server connected to Kanidm. Kanidm has two methods of granting scopes to accounts (users). 
+As each resource server may have its own scopes and understanding of these, Kanidm isolates scopes +to each resource server connected to Kanidm. Kanidm has two methods of granting scopes to accounts +(users). -The first is scope mappings. These provide a set of scopes if a user is a member of a specific -group within Kanidm. This allows you to create a relationship between the scopes of a resource -server, and the groups/roles in Kanidm which can be specific to that resource server. +The first is scope mappings. These provide a set of scopes if a user is a member of a specific group +within Kanidm. This allows you to create a relationship between the scopes of a resource server, and +the groups/roles in Kanidm which can be specific to that resource server. -For an authorisation to proceed, all scopes requested by the resource server must be available in the -final scope set that is granted to the account. +For an authorisation to proceed, all scopes requested by the resource server must be available in +the final scope set that is granted to the account. -The second is supplemental scope mappings. These function the same as scope maps where membership -of a group provides a set of scopes to the account. However these scopes are NOT consulted during +The second is supplemental scope mappings. These function the same as scope maps where membership of +a group provides a set of scopes to the account. However, these scopes are NOT consulted during authorisation decisions made by Kanidm. These scopes exist to allow optional properties to be -provided (such as personal information about a subset of accounts to be revealed) or so that the resource server -may make it's own authorisation decisions based on the provided scopes. +provided (such as personal information about a subset of accounts to be revealed) or so that the +resource server may make its own authorisation decisions based on the provided scopes. -This use of scopes is the primary means to control who can access what resources. These access decisions -can take place either on Kanidm or the resource server. +This use of scopes is the primary means to control who can access what resources. These access +decisions can take place either on Kanidm or the resource server. -For example, if you have a resource server that always requests a scope of "read", then users -with scope maps that supply the read scope will be allowed by Kanidm to proceed to the resource server. +For example, if you have a resource server that always requests a scope of "read", then users with +scope maps that supply the read scope will be allowed by Kanidm to proceed to the resource server. Kanidm can then provide the supplementary scopes into provided tokens, so that the resource server can use these to choose if it wishes to display UI elements. If a user has a supplemental "admin" scope, then that user may be able to access an administration panel of the resource server. In this @@ -112,201 +111,232 @@ the resource server. ### Create the Kanidm Configuration -After you have understood your resource server requirements you first need to configure Kanidm. -By default members of "system\_admins" or "idm\_hp\_oauth2\_manage\_priv" are able to create or -manage OAuth2 resource server integrations. +After you have understood your resource server requirements, you first need to configure Kanidm. By +default, members of "system\_admins" or "idm\_hp\_oauth2\_manage\_priv" are able to create or manage +OAuth2 resource server integrations.
You can create a new resource server with: - kanidm system oauth2 create - kanidm system oauth2 create nextcloud "Nextcloud Production" https://nextcloud.example.com +```bash +kanidm system oauth2 create +kanidm system oauth2 create nextcloud "Nextcloud Production" https://nextcloud.example.com +``` You can create a scope map with: - kanidm system oauth2 update_scope_map [scopes]... - kanidm system oauth2 update_scope_map nextcloud nextcloud_admins admin +```bash +kanidm system oauth2 update_scope_map [scopes]... +kanidm system oauth2 update_scope_map nextcloud nextcloud_admins admin +``` -{{#template - ../templates/kani-warning.md - imagepath=../images - title=WARNING - text=If you are creating an OpenID Connect (OIDC) resource server you MUST provide a scope map named openid. Without this, OpenID clients WILL NOT WORK -}} +{{#template ../templates/kani-warning.md imagepath=../images title=WARNING text=If you are creating +an OpenID Connect (OIDC) resource server you +MUST provide a scope map named openid. Without this, OpenID clients WILL NOT +WORK }} -> **HINT** -> OpenID connect allows a number of scopes that affect the content of the resulting -> authorisation token. If one of the following scopes are requested by the OpenID client, -> then the associated claims may be added to the authorisation token. It is not guaranteed -> that all of the associated claims will be added. -> -> * profile - (name, family\_name, given\_name, middle\_name, nickname, preferred\_username, profile, picture, website, gender, birthdate, zoneinfo, locale, and updated\_at) -> * email - (email, email\_verified) -> * address - (address) -> * phone - (phone\_number, phone\_number\_verified) +> **HINT** OpenID connect allows a number of scopes that affect the content of the resulting +> authorisation token. If one of the following scopes are requested by the OpenID client, then the +> associated claims may be added to the authorisation token. It is not guaranteed that all of the +> associated claims will be added. > +> - profile - (name, family\_name, given\_name, middle\_name, nickname, preferred\_username, + > profile, picture, website, gender, birthdate, zoneinfo, locale, and updated\_at) +> - email - (email, email\_verified) +> - address - (address) +> - phone - (phone\_number, phone\_number\_verified) You can create a supplemental scope map with: - kanidm system oauth2 update_sup_scope_map [scopes]... - kanidm system oauth2 update_sup_scope_map nextcloud nextcloud_admins admin +```bash +kanidm system oauth2 update_sup_scope_map [scopes]... +kanidm system oauth2 update_sup_scope_map nextcloud nextcloud_admins admin +``` Once created you can view the details of the resource server. - kanidm system oauth2 get nextcloud - --- - class: oauth2_resource_server - class: oauth2_resource_server_basic - class: object - displayname: Nextcloud Production - oauth2_rs_basic_secret: - oauth2_rs_name: nextcloud - oauth2_rs_origin: https://nextcloud.example.com - oauth2_rs_token_key: hidden +```bash +kanidm system oauth2 get nextcloud +--- +class: oauth2_resource_server +class: oauth2_resource_server_basic +class: object +displayname: Nextcloud Production +oauth2_rs_basic_secret: +oauth2_rs_name: nextcloud +oauth2_rs_origin: https://nextcloud.example.com +oauth2_rs_token_key: hidden +``` ### Configure the Resource Server -On your resource server, you should configure the client ID as the "oauth2\_rs\_name" from -Kanidm, and the password to be the value shown in "oauth2\_rs\_basic\_secret". 
Ensure that -the code challenge/verification method is set to S256. +On your resource server, you should configure the client ID as the "oauth2\_rs\_name" from Kanidm, +and the password to be the value shown in "oauth2\_rs\_basic\_secret". Ensure that the code +challenge/verification method is set to S256. You should now be able to test authorisation. ## Resetting Resource Server Security Material -In the case of disclosure of the basic secret, or some other security event where you may wish -to invalidate a resource servers active sessions/tokens, you can reset the secret material of -the server with: +In the case of disclosure of the basic secret, or some other security event where you may wish to +invalidate a resource servers active sessions/tokens, you can reset the secret material of the +server with: - kanidm system oauth2 reset_secrets +```bash +kanidm system oauth2 reset_secrets +``` -Each resource server has unique signing keys and access secrets, so this is limited to each -resource server. +Each resource server has unique signing keys and access secrets, so this is limited to each resource +server. ## Extended Options for Legacy Clients -Not all resource servers support modern standards like PKCE or ECDSA. In these situations -it may be necessary to disable these on a per-resource server basis. Disabling these on -one resource server will not affect others. +Not all resource servers support modern standards like PKCE or ECDSA. In these situations it may be +necessary to disable these on a per-resource server basis. Disabling these on one resource server +will not affect others. -{{#template - ../templates/kani-warning.md - imagepath=../images - title=WARNING - text=Changing these settings MAY have serious consequences on the security of your resource server. You should avoid changing these if at all possible! -}} +{{#template ../templates/kani-warning.md imagepath=../images title=WARNING text=Changing these +settings MAY have serious consequences on the security of your resource server. You should avoid +changing these if at all possible! }} To disable PKCE for a resource server: - kanidm system oauth2 warning_insecure_client_disable_pkce +```bash +kanidm system oauth2 warning_insecure_client_disable_pkce +``` To enable legacy cryptograhy (RSA PKCS1-5 SHA256): - kanidm system oauth2 warning_enable_legacy_crypto +```bash +kanidm system oauth2 warning_enable_legacy_crypto +``` ## Example Integrations ### Apache mod\_auth\_openidc -Add the following to a `mod_auth_openidc.conf`. It should be included in a `mods_enabled` folder -or with an appropriate include. +Add the following to a `mod_auth_openidc.conf`. It should be included in a `mods_enabled` folder or +with an appropriate include. - OIDCRedirectURI /protected/redirect_uri - OIDCCryptoPassphrase - OIDCProviderMetadataURL https://kanidm.example.com/oauth2/openid//.well-known/openid-configuration - OIDCScope "openid" - OIDCUserInfoTokenMethod authz_header - OIDCClientID - OIDCClientSecret - OIDCPKCEMethod S256 - OIDCCookieSameSite On - # Set the `REMOTE_USER` field to the `preferred_username` instead of the UUID. - # Remember that the username can change, but this can help with systems like Nagios which use this as a display name. 
- # OIDCRemoteUserClaim preferred_username +``` +OIDCRedirectURI /protected/redirect_uri +OIDCCryptoPassphrase +OIDCProviderMetadataURL https://kanidm.example.com/oauth2/openid//.well-known/openid-configuration +OIDCScope "openid" +OIDCUserInfoTokenMethod authz_header +OIDCClientID +OIDCClientSecret +OIDCPKCEMethod S256 +OIDCCookieSameSite On +# Set the `REMOTE_USER` field to the `preferred_username` instead of the UUID. +# Remember that the username can change, but this can help with systems like Nagios which use this as a display name. +# OIDCRemoteUserClaim preferred_username +``` -Other scopes can be added as required to the `OIDCScope` line, eg: `OIDCScope "openid scope2 scope3"` +Other scopes can be added as required to the `OIDCScope` line, eg: +`OIDCScope "openid scope2 scope3"` In the virtual host, to protect a location: - - AuthType openid-connect - Require valid-user - +```apache + + AuthType openid-connect + Require valid-user + +``` ### Nextcloud -Install the module [from the nextcloud market place](https://apps.nextcloud.com/apps/user_oidc) - -it can also be found in the Apps section of your deployment as "OpenID Connect user backend". +Install the module [from the nextcloud market place](https://apps.nextcloud.com/apps/user_oidc) - it +can also be found in the Apps section of your deployment as "OpenID Connect user backend". In Nextcloud's config.php you need to allow connection to remote servers: - 'allow_local_remote_servers' => true, +```php +'allow_local_remote_servers' => true, +``` You may optionally choose to add: - 'allow_user_to_change_display_name' => false, - 'lost_password_link' => 'disabled', +```php +'allow_user_to_change_display_name' => false, +'lost_password_link' => 'disabled', +``` If you forget this, you may see the following error in logs: - Host 172.24.11.129 was not connected to because it violates local access rules +``` +Host 172.24.11.129 was not connected to because it violates local access rules +``` This module does not support PKCE or ES256. You will need to run: - kanidm system oauth2 warning_insecure_client_disable_pkce - kanidm system oauth2 warning_enable_legacy_crypto +```bash +kanidm system oauth2 warning_insecure_client_disable_pkce +kanidm system oauth2 warning_enable_legacy_crypto +``` In the settings menu, configure the discovery URL and client ID and secret. You can choose to disable other login methods with: - php occ config:app:set --value=0 user_oidc allow_multiple_user_backends +```bash +php occ config:app:set --value=0 user_oidc allow_multiple_user_backends +``` -You can login directly by appending `?direct=1` to your login page. You can re-enable -other backends by setting the value to `1` +You can login directly by appending `?direct=1` to your login page. You can re-enable other backends +by setting the value to `1` ### Velociraptor -Velociraptor supports OIDC. To configure it select "Authenticate with SSO" then "OIDC" during -the interactive configuration generator. Alternately, you can set the following keys in server.config.yaml: +Velociraptor supports OIDC. To configure it select "Authenticate with SSO" then "OIDC" during the +interactive configuration generator. 
Alternately, you can set the following keys in +server.config.yaml: - GUI: - authenticator: - type: OIDC - oidc_issuer: https://idm.example.com/oauth2/openid/:client\_id:/ - oauth_client_id: - oauth_client_secret: +``` +GUI: + authenticator: + type: OIDC + oidc_issuer: https://idm.example.com/oauth2/openid/:client\_id:/ + oauth_client_id: + oauth_client_secret: +``` Velociraptor does not support PKCE. You will need to run the following: - kanidm system oauth2 warning_insecure_client_disable_pkce +```bash +kanidm system oauth2 warning_insecure_client_disable_pkce +``` Initial users are mapped via their email in the Velociraptor server.config.yaml config: - GUI: - initial_users: - - name: +``` +GUI: + initial_users: + - name: +``` Accounts require the `openid` and `email` scopes to be authenticated. It is recommended you limit these to a group with a scope map due to Velociraptors high impact. - # kanidm group create velociraptor_users - # kanidm group add_members velociraptor_users ... - kanidm system oauth2 create_scope_map velociraptor_users openid email +```bash +# kanidm group create velociraptor_users +# kanidm group add_members velociraptor_users ... +kanidm system oauth2 create_scope_map velociraptor_users openid email +``` ### Vouch Proxy -> **WARNING** -> Vouch proxy requires a unique identifier but does not use the proper scope, "sub". It uses the fields -> "username" or "email" as primary identifiers instead. As a result, this can cause user or deployment issues, at -> worst security bypasses. You should avoid Vouch Proxy if possible due to these issues. +> **WARNING** Vouch proxy requires a unique identifier but does not use the proper scope, "sub". It +> uses the fields "username" or "email" as primary identifiers instead. As a result, this can cause +> user or deployment issues, at worst security bypasses. You should avoid Vouch Proxy if possible +> due to these issues. > -> * -> * +> - +> - Note: **You need to run at least the version 0.37.0** -Vouch Proxy supports multiple OAuth and OIDC login providers. -To configure it you need to pass: +Vouch Proxy supports multiple OAuth and OIDC login providers. To configure it you need to pass: ```yaml oauth: @@ -324,4 +354,6 @@ oauth: The `email` scope needs to be passed and thus the mail attribute needs to exist on the account: - kanidm person update --mail "YYYY@somedomain.com" --name idm_admin +```bash +kanidm person update --mail "YYYY@somedomain.com" --name idm_admin +``` diff --git a/kanidm_book/src/integrations/pam_and_nsswitch.md b/kanidm_book/src/integrations/pam_and_nsswitch.md index 29ce1ae16..454b0d0f9 100644 --- a/kanidm_book/src/integrations/pam_and_nsswitch.md +++ b/kanidm_book/src/integrations/pam_and_nsswitch.md @@ -1,322 +1,357 @@ # PAM and nsswitch -[PAM](http://linux-pam.org) and [nsswitch](https://en.wikipedia.org/wiki/Name_Service_Switch) -are the core mechanisms used by Linux and BSD clients to resolve identities from an IDM service -like Kanidm into accounts that can be used on the machine for various interactive tasks. +[PAM](http://linux-pam.org) and [nsswitch](https://en.wikipedia.org/wiki/Name_Service_Switch) are +the core mechanisms used by Linux and BSD clients to resolve identities from an IDM service like +Kanidm into accounts that can be used on the machine for various interactive tasks. ## The UNIX Daemon -Kanidm provides a UNIX daemon that runs on any client that wants to use PAM and nsswitch integration. 
-The daemon can cache the accounts for users who have unreliable networks, or who leave -the site where Kanidm is hosted. The daemon is also able to cache missing-entry responses to reduce network -traffic and main server load. +Kanidm provides a UNIX daemon that runs on any client that wants to use PAM and nsswitch +integration. The daemon can cache the accounts for users who have unreliable networks, or who leave +the site where Kanidm is hosted. The daemon is also able to cache missing-entry responses to reduce +network traffic and main server load. -Additionally, running the daemon means that the PAM and nsswitch integration libraries can be small, -helping to reduce the attack surface of the machine. Similarly, a tasks daemon is available that can -create home directories on first login and supports several features related to aliases and links to +Additionally, running the daemon means that the PAM and nsswitch integration libraries can be small, +helping to reduce the attack surface of the machine. Similarly, a tasks daemon is available that can +create home directories on first login and supports several features related to aliases and links to these home directories. We recommend you install the client daemon from your system package manager: - # OpenSUSE - zypper in kanidm-unixd-clients - # Fedora - dnf install kanidm-unixd-clients +```bash +# OpenSUSE +zypper in kanidm-unixd-clients +# Fedora +dnf install kanidm-unixd-clients +``` You can check the daemon is running on your Linux system with: - systemctl status kanidm-unixd +```bash +systemctl status kanidm-unixd +``` You can check the privileged tasks daemon is running with: - systemctl status kanidm-unixd-tasks +```bash +systemctl status kanidm-unixd-tasks +``` -> **NOTE** The `kanidm_unixd_tasks` daemon is not required for PAM and nsswitch functionality. -> If disabled, your system will function as usual. It is, however, recommended due to the features -> it provides supporting Kanidm's capabilities. +> **NOTE** The `kanidm_unixd_tasks` daemon is not required for PAM and nsswitch functionality. If +> disabled, your system will function as usual. It is, however, recommended due to the features it +> provides supporting Kanidm's capabilities. Both unixd daemons use the connection configuration from /etc/kanidm/config. This is the covered in [client_tools](./client_tools.md#kanidm-configuration). You can also configure some unixd-specific options with the file /etc/kanidm/unixd: - pam_allowed_login_groups = ["posix_group"] - default_shell = "/bin/sh" - home_prefix = "/home/" - home_attr = "uuid" - home_alias = "spn" - use_etc_skel = false - uid_attr_map = "spn" - gid_attr_map = "spn" +```toml +pam_allowed_login_groups = ["posix_group"] +default_shell = "/bin/sh" +home_prefix = "/home/" +home_attr = "uuid" +home_alias = "spn" +use_etc_skel = false +uid_attr_map = "spn" +gid_attr_map = "spn" +``` -`pam_allowed_login_groups` defines a set of POSIX groups where membership of any of these -groups will be allowed to login via PAM. All POSIX users and groups can be resolved by nss -regardless of PAM login status. This may be a group name, spn, or uuid. +`pam_allowed_login_groups` defines a set of POSIX groups where membership of any of these groups +will be allowed to login via PAM. All POSIX users and groups can be resolved by nss regardless of +PAM login status. This may be a group name, spn, or uuid. `default_shell` is the default shell for users. Defaults to `/bin/sh`. 
-`home_prefix` is the prepended path to where home directories are stored. Must end with -a trailing `/`. Defaults to `/home/`. +`home_prefix` is the prepended path to where home directories are stored. Must end with a trailing +`/`. Defaults to `/home/`. -`home_attr` is the default token attribute used for the home directory path. Valid -choices are `uuid`, `name`, `spn`. Defaults to `uuid`. +`home_attr` is the default token attribute used for the home directory path. Valid choices are +`uuid`, `name`, `spn`. Defaults to `uuid`. -`home_alias` is the default token attribute used for generating symlinks -pointing to the user's -home directory. If set, this will become the value of the home path -to nss calls. It is recommended you choose a "human friendly" attribute here. -Valid choices are `none`, `uuid`, `name`, `spn`. Defaults to `spn`. +`home_alias` is the default token attribute used for generating symlinks pointing to the user's home +directory. If set, this will become the value of the home path to nss calls. It is recommended you +choose a "human friendly" attribute here. Valid choices are `none`, `uuid`, `name`, `spn`. Defaults +to `spn`. -> **NOTICE:** -> All users in Kanidm can change their name (and their spn) at any time. If you change -> `home_attr` from `uuid` you *must* have a plan on how to manage these directory renames -> in your system. We recommend that you have a stable ID (like the UUID), and symlinks -> from the name to the UUID folder. Automatic support is provided for this via the unixd -> tasks daemon, as documented here. +> **NOTICE:** All users in Kanidm can change their name (and their spn) at any time. If you change +> `home_attr` from `uuid` you _must_ have a plan on how to manage these directory renames in your +> system. We recommend that you have a stable ID (like the UUID), and symlinks from the name to the +> UUID folder. Automatic support is provided for this via the unixd tasks daemon, as documented +> here. -`use_etc_skel` controls if home directories should be prepopulated with the contents of `/etc/skel` +`use_etc_skel` controls if home directories should be prepopulated with the contents of `/etc/skel` when first created. Defaults to false. -`uid_attr_map` chooses which attribute is used for domain local users in presentation. Defaults -to `spn`. Users from a trust will always use spn. +`uid_attr_map` chooses which attribute is used for domain local users in presentation. Defaults to +`spn`. Users from a trust will always use spn. -`gid_attr_map` chooses which attribute is used for domain local groups in presentation. Defaults -to `spn`. Groups from a trust will always use spn. +`gid_attr_map` chooses which attribute is used for domain local groups in presentation. Defaults to +`spn`. Groups from a trust will always use spn. You can then check the communication status of the daemon: - kanidm_unixd_status +```bash +kanidm_unixd_status +``` If the daemon is working, you should see: - [2020-02-14T05:58:37Z INFO kanidm_unixd_status] working! +``` +[2020-02-14T05:58:37Z INFO kanidm_unixd_status] working! +``` If it is not working, you will see an error message: - [2020-02-14T05:58:10Z ERROR kanidm_unixd_status] Error -> - Os { code: 111, kind: ConnectionRefused, message: "Connection refused" } +``` +[2020-02-14T05:58:10Z ERROR kanidm_unixd_status] Error -> + Os { code: 111, kind: ConnectionRefused, message: "Connection refused" } +``` -For more information, see the -[Troubleshooting](./pam_and_nsswitch.md#troubleshooting) section. 
+For more information, see the [Troubleshooting](./pam_and_nsswitch.md#troubleshooting) section. ## nsswitch When the daemon is running you can add the nsswitch libraries to /etc/nsswitch.conf - passwd: compat kanidm - group: compat kanidm +``` +passwd: compat kanidm +group: compat kanidm +``` -You can [create a user](./accounts_and_groups.md#creating-accounts) then +You can [create a user](./accounts_and_groups.md#creating-accounts) then [enable POSIX feature on the user](./posix_accounts.md#enabling-posix-attributes-on-accounts). You can then test that the POSIX extended user is able to be resolved with: - getent passwd - getent passwd testunix - testunix:x:3524161420:3524161420:testunix:/home/testunix:/bin/sh +```bash +getent passwd +getent passwd testunix +testunix:x:3524161420:3524161420:testunix:/home/testunix:/bin/sh +``` You can also do the same for groups. - getent group - getent group testgroup - testgroup:x:2439676479:testunix +```bash +getent group +getent group testgroup +testgroup:x:2439676479:testunix +``` + +> **HINT** Remember to also create a UNIX password with something like +> `kanidm account posix set_password --name idm_admin demo_user`. Otherwise there will be no +> credential for the account to authenticate. -> **HINT** Remember to also create a UNIX password with something like -> `kanidm account posix set_password --name idm_admin demo_user`. -> Otherwise there will be no credential for the account to authenticate. - ## PAM -> **WARNING:** Modifications to PAM configuration *may* leave your system in a state -> where you are unable to login or authenticate. You should always have a recovery -> shell open while making changes (for example, root), or have access to single-user mode -> at the machine's console. +> **WARNING:** Modifications to PAM configuration _may_ leave your system in a state where you are +> unable to login or authenticate. You should always have a recovery shell open while making changes +> (for example, root), or have access to single-user mode at the machine's console. -Pluggable Authentication Modules (PAM) is the mechanism a UNIX-like system -that authenticates users, and to control access to some resources. This is -configured through a stack of modules -that are executed in order to evaluate the request, and then each module may -request or reuse authentication token information. +Pluggable Authentication Modules (PAM) is the mechanism a UNIX-like system that authenticates users, +and to control access to some resources. This is configured through a stack of modules that are +executed in order to evaluate the request, and then each module may request or reuse authentication +token information. ### Before You Start -You *should* backup your /etc/pam.d directory from its original state as you -*may* change the PAM configuration in a way that will not allow you -to authenticate to your machine. +You _should_ backup your /etc/pam.d directory from its original state as you _may_ change the PAM +configuration in a way that will not allow you to authenticate to your machine. 
- cp -a /etc/pam.d /root/pam.d.backup +```bash +cp -a /etc/pam.d /root/pam.d.backup +``` ### SUSE / OpenSUSE -To configure PAM on suse you must modify four files, which control the -various stages of authentication: +To configure PAM on suse you must modify four files, which control the various stages of +authentication: - /etc/pam.d/common-account - /etc/pam.d/common-auth - /etc/pam.d/common-password - /etc/pam.d/common-session +```bash +/etc/pam.d/common-account +/etc/pam.d/common-auth +/etc/pam.d/common-password +/etc/pam.d/common-session +``` > **IMPORTANT** By default these files are symlinks to their corresponding `-pc` file, for example > `common-account -> common-account-pc`. If you directly edit these you are updating the inner -> content of the `-pc` file and it WILL be reset on a future upgrade. To prevent this you must -> first copy the `-pc` files. You can then edit the files safely. +> content of the `-pc` file and it WILL be reset on a future upgrade. To prevent this you must first +> copy the `-pc` files. You can then edit the files safely. - cp /etc/pam.d/common-account-pc /etc/pam.d/common-account - cp /etc/pam.d/common-auth-pc /etc/pam.d/common-auth - cp /etc/pam.d/common-password-pc /etc/pam.d/common-password - cp /etc/pam.d/common-session-pc /etc/pam.d/common-session +```bash +cp /etc/pam.d/common-account-pc /etc/pam.d/common-account +cp /etc/pam.d/common-auth-pc /etc/pam.d/common-auth +cp /etc/pam.d/common-password-pc /etc/pam.d/common-password +cp /etc/pam.d/common-session-pc /etc/pam.d/common-session +``` The content should look like: - # /etc/pam.d/common-auth-pc - # Controls authentication to this system (verification of credentials) - auth required pam_env.so - auth [default=1 ignore=ignore success=ok] pam_localuser.so - auth sufficient pam_unix.so nullok try_first_pass - auth requisite pam_succeed_if.so uid >= 1000 quiet_success - auth sufficient pam_kanidm.so ignore_unknown_user - auth required pam_deny.so +``` +# /etc/pam.d/common-auth-pc +# Controls authentication to this system (verification of credentials) +auth required pam_env.so +auth [default=1 ignore=ignore success=ok] pam_localuser.so +auth sufficient pam_unix.so nullok try_first_pass +auth requisite pam_succeed_if.so uid >= 1000 quiet_success +auth sufficient pam_kanidm.so ignore_unknown_user +auth required pam_deny.so - # /etc/pam.d/common-account-pc - # Controls authorisation to this system (who may login) - account [default=1 ignore=ignore success=ok] pam_localuser.so - account sufficient pam_unix.so - account [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail - account sufficient pam_kanidm.so ignore_unknown_user - account required pam_deny.so +# /etc/pam.d/common-account-pc +# Controls authorisation to this system (who may login) +account [default=1 ignore=ignore success=ok] pam_localuser.so +account sufficient pam_unix.so +account [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail +account sufficient pam_kanidm.so ignore_unknown_user +account required pam_deny.so - # /etc/pam.d/common-password-pc - # Controls flow of what happens when a user invokes the passwd command. Currently does NOT - # interact with kanidm. 
- password [default=1 ignore=ignore success=ok] pam_localuser.so - password required pam_unix.so use_authtok nullok shadow try_first_pass - password [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail - password required pam_kanidm.so +# /etc/pam.d/common-password-pc +# Controls flow of what happens when a user invokes the passwd command. Currently does NOT +# interact with kanidm. +password [default=1 ignore=ignore success=ok] pam_localuser.so +password required pam_unix.so use_authtok nullok shadow try_first_pass +password [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail +password required pam_kanidm.so - # /etc/pam.d/common-session-pc - # Controls setup of the user session once a successful authentication and authorisation has - # occured. - session optional pam_systemd.so - session required pam_limits.so - session optional pam_unix.so try_first_pass - session optional pam_umask.so - session [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail - session optional pam_kanidm.so +# /etc/pam.d/common-session-pc +# Controls setup of the user session once a successful authentication and authorisation has +# occured. +session optional pam_systemd.so +session required pam_limits.so +session optional pam_unix.so try_first_pass +session optional pam_umask.so +session [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail +session optional pam_kanidm.so session optional pam_env.so +``` -> **WARNING:** Ensure that `pam_mkhomedir` or `pam_oddjobd` are *not* present in any stage of your -> PAM configuration, as they interfere with the correct operation of the -> Kanidm tasks daemon. +> **WARNING:** Ensure that `pam_mkhomedir` or `pam_oddjobd` are _not_ present in any stage of your +> PAM configuration, as they interfere with the correct operation of the Kanidm tasks daemon. ### Fedora / CentOS -> **WARNING:** Kanidm currently has no support for SELinux policy - this may mean you need to -> run the daemon with permissive mode for the unconfined_service_t daemon type. To do this run: -> `semanage permissive -a unconfined_service_t`. To undo this run `semanage permissive -d unconfined_service_t`. +> **WARNING:** Kanidm currently has no support for SELinux policy - this may mean you need to run +> the daemon with permissive mode for the unconfined_service_t daemon type. To do this run: +> `semanage permissive -a unconfined_service_t`. To undo this run +> `semanage permissive -d unconfined_service_t`. > -> You may also need to run `audit2allow` for sshd and other types to be able to access the UNIX daemon sockets. +> You may also need to run `audit2allow` for sshd and other types to be able to access the UNIX +> daemon sockets. -These files are managed by authselect as symlinks. You can either work with -authselect, or remove the symlinks first. +These files are managed by authselect as symlinks. You can either work with authselect, or remove +the symlinks first. #### Without authselect + If you just remove the symlinks: Edit the content. 
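+
+One way to replace the authselect symlinks with plain files is sketched below. It assumes the
+default layout, where these files are symlinks into `/etc/authselect/`; adjust the paths if your
+system differs.
+
+```bash
+# Replace each symlink with a regular copy of its target so it can be edited in place.
+cp --remove-destination "$(readlink -f /etc/pam.d/password-auth)" /etc/pam.d/password-auth
+cp --remove-destination "$(readlink -f /etc/pam.d/system-auth)" /etc/pam.d/system-auth
+```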
- # /etc/pam.d/password-auth - auth required pam_env.so - auth required pam_faildelay.so delay=2000000 - auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular - auth [default=1 ignore=ignore success=ok] pam_localuser.so - auth sufficient pam_unix.so nullok try_first_pass - auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular - auth sufficient pam_kanidm.so ignore_unknown_user - auth required pam_deny.so - - account sufficient pam_unix.so - account sufficient pam_localuser.so - account sufficient pam_usertype.so issystem - account sufficient pam_kanidm.so ignore_unknown_user - account required pam_permit.so - - password requisite pam_pwquality.so try_first_pass local_users_only - password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok - password sufficient pam_kanidm.so - password required pam_deny.so - - session optional pam_keyinit.so revoke - session required pam_limits.so - -session optional pam_systemd.so - session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid - session required pam_unix.so - session optional pam_kanidm.so +``` +# /etc/pam.d/password-auth +auth required pam_env.so +auth required pam_faildelay.so delay=2000000 +auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular +auth [default=1 ignore=ignore success=ok] pam_localuser.so +auth sufficient pam_unix.so nullok try_first_pass +auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular +auth sufficient pam_kanidm.so ignore_unknown_user +auth required pam_deny.so + +account sufficient pam_unix.so +account sufficient pam_localuser.so +account sufficient pam_usertype.so issystem +account sufficient pam_kanidm.so ignore_unknown_user +account required pam_permit.so + +password requisite pam_pwquality.so try_first_pass local_users_only +password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok +password sufficient pam_kanidm.so +password required pam_deny.so + +session optional pam_keyinit.so revoke +session required pam_limits.so +-session optional pam_systemd.so +session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid +session required pam_unix.so +session optional pam_kanidm.so - - # /etc/pam.d/system-auth - auth required pam_env.so - auth required pam_faildelay.so delay=2000000 - auth sufficient pam_fprintd.so - auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular - auth [default=1 ignore=ignore success=ok] pam_localuser.so - auth sufficient pam_unix.so nullok try_first_pass - auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular - auth sufficient pam_kanidm.so ignore_unknown_user - auth required pam_deny.so +# /etc/pam.d/system-auth +auth required pam_env.so +auth required pam_faildelay.so delay=2000000 +auth sufficient pam_fprintd.so +auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular +auth [default=1 ignore=ignore success=ok] pam_localuser.so +auth sufficient pam_unix.so nullok try_first_pass +auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular +auth sufficient pam_kanidm.so ignore_unknown_user +auth required pam_deny.so - account sufficient pam_unix.so - account sufficient pam_localuser.so - account sufficient pam_usertype.so issystem - account sufficient pam_kanidm.so ignore_unknown_user - account required pam_permit.so +account sufficient pam_unix.so +account sufficient pam_localuser.so +account sufficient pam_usertype.so issystem +account sufficient pam_kanidm.so ignore_unknown_user +account required 
pam_permit.so - password requisite pam_pwquality.so try_first_pass local_users_only - password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok - password sufficient pam_kanidm.so - password required pam_deny.so +password requisite pam_pwquality.so try_first_pass local_users_only +password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok +password sufficient pam_kanidm.so +password required pam_deny.so - session optional pam_keyinit.so revoke - session required pam_limits.so - -session optional pam_systemd.so - session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid - session required pam_unix.so - session optional pam_kanidm.so +session optional pam_keyinit.so revoke +session required pam_limits.so +-session optional pam_systemd.so +session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid +session required pam_unix.so +session optional pam_kanidm.so +``` #### With authselect + To work with authselect: -You will need to +You will need to [create a new profile](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel#creating-and-deploying-your-own-authselect-profile_configuring-user-authentication-using-authselect). + + First run the following command: - authselect create-profile kanidm -b sssd +```bash +authselect create-profile kanidm -b sssd +``` -A new folder, /etc/authselect/custom/kanidm, should be created. Inside that folder, create or -overwrite the following three files: nsswitch.conf, password-auth, system-auth. -password-auth and system-auth should be the same as above. nsswitch should be -modified for your use case. A working example looks like this: +A new folder, /etc/authselect/custom/kanidm, should be created. Inside that folder, create or +overwrite the following three files: nsswitch.conf, password-auth, system-auth. password-auth and +system-auth should be the same as above. nsswitch should be modified for your use case. A working +example looks like this: - passwd: compat kanidm sss files systemd - group: compat kanidm sss files systemd - shadow: files - hosts: files dns myhostname - services: sss files - netgroup: sss files - automount: sss files - - aliases: files - ethers: files - gshadow: files - networks: files dns - protocols: files - publickey: files - rpc: files +``` +passwd: compat kanidm sss files systemd +group: compat kanidm sss files systemd +shadow: files +hosts: files dns myhostname +services: sss files +netgroup: sss files +automount: sss files + +aliases: files +ethers: files +gshadow: files +networks: files dns +protocols: files +publickey: files +rpc: files +``` Then run: - authselect select custom/kanidm +```bash +authselect select custom/kanidm +``` to update your profile. @@ -324,33 +359,34 @@ to update your profile. ### Check POSIX-status of Group and Configuration -If authentication is failing via PAM, make sure that a list of groups is configured in `/etc/kanidm/unixd`: +If authentication is failing via PAM, make sure that a list of groups is configured in +`/etc/kanidm/unixd`: -``` +```toml pam_allowed_login_groups = ["example_group"] ``` -Check the status of the group with `kanidm group posix show example_group`. -If you get something similar to the following example: +Check the status of the group with `kanidm group posix show example_group`. 
If you get something +similar to the following example: -```shell +```bash > kanidm group posix show example_group Using cached token for name idm_admin Error -> Http(500, Some(InvalidAccountState("Missing class: account && posixaccount OR group && posixgroup")), "b71f137e-39f3-4368-9e58-21d26671ae24") ``` -POSIX-enable the group with `kanidm group posix set example_group`. You should get a result similar +POSIX-enable the group with `kanidm group posix set example_group`. You should get a result similar to this when you search for your group name: -```shell +```bash > kanidm group posix show example_group [ spn: example_group@kanidm.example.com, gidnumber: 3443347205 name: example_group, uuid: b71f137e-39f3-4368-9e58-21d26671ae24 ] ``` Also, ensure the target user is in the group by running: -``` +```bash > kanidm group list_members example_group ``` @@ -358,12 +394,16 @@ Also, ensure the target user is in the group by running: For the unixd daemon, you can increase the logging with: - systemctl edit kanidm-unixd.service +```bash +systemctl edit kanidm-unixd.service +``` And add the lines: - [Service] - Environment="RUST_LOG=kanidm=debug" +``` +[Service] +Environment="RUST_LOG=kanidm=debug" +``` Then restart the kanidm-unixd.service. @@ -371,33 +411,39 @@ The same pattern is true for the kanidm-unixd-tasks.service daemon. To debug the pam module interactions add `debug` to the module arguments such as: - auth sufficient pam_kanidm.so debug +``` +auth sufficient pam_kanidm.so debug +``` ### Check the Socket Permissions -Check that the `/var/run/kanidm-unixd/sock` has permissions mode 777, and that non-root readers can see it with -ls or other tools. +Check that the `/var/run/kanidm-unixd/sock` has permissions mode 777, and that non-root readers can +see it with ls or other tools. -Ensure that `/var/run/kanidm-unixd/task_sock` has permissions mode 700, and -that it is owned by the kanidm unixd process user. +Ensure that `/var/run/kanidm-unixd/task_sock` has permissions mode 700, and that it is owned by the +kanidm unixd process user. ### Verify that You Can Access the Kanidm Server You can check this with the client tools: - kanidm self whoami --name anonymous +```bash +kanidm self whoami --name anonymous +``` ### Ensure the Libraries are Correct You should have: - /usr/lib64/libnss_kanidm.so.2 - /usr/lib64/security/pam_kanidm.so - -The exact path *may* change depending on your distribution, `pam_unixd.so` should be co-located -with pam_kanidm.so. Look for it with the find command: - +```bash +/usr/lib64/libnss_kanidm.so.2 +/usr/lib64/security/pam_kanidm.so ``` + +The exact path _may_ change depending on your distribution, `pam_unixd.so` should be co-located with +pam_kanidm.so. Look for it with the find command: + +```bash find /usr/ -name 'pam_unix.so' ``` @@ -405,36 +451,41 @@ For example, on a Debian machine, it's located in `/usr/lib/x86_64-linux-gnu/sec ### Increase Connection Timeout -In some high-latency environments, you may need to increase the connection timeout. We set -this low to improve response on LANs, but over the internet this may need to be increased. -By increasing the conn_timeout, you will be able to operate on higher latency -links, but some operations may take longer to complete causing a degree of -latency. +In some high-latency environments, you may need to increase the connection timeout. We set this low +to improve response on LANs, but over the internet this may need to be increased. 
By increasing the +conn_timeout, you will be able to operate on higher latency links, but some operations may take +longer to complete causing a degree of latency. -By increasing the cache_timeout, you will need to refresh less often, but it may result in an -account lockout or group change until cache_timeout takes effect. Note that this has security +By increasing the cache_timeout, you will need to refresh less often, but it may result in an +account lockout or group change until cache_timeout takes effect. Note that this has security implications: - # /etc/kanidm/unixd - # Seconds - conn_timeout = 8 - # Cache timeout - cache_timeout = 60 +```toml +# /etc/kanidm/unixd +# Seconds +conn_timeout = 8 +# Cache timeout +cache_timeout = 60 +``` ### Invalidate or Clear the Cache You can invalidate the kanidm_unixd cache with: - kanidm_cache_invalidate +```bash +kanidm_cache_invalidate +``` You can clear (wipe) the cache with: - kanidm_cache_clear +```bash +kanidm_cache_clear +``` -There is an important distinction between these two - invalidated cache items may still -be yielded to a client request if the communication to the main Kanidm server is not -possible. For example, you may have your laptop in a park without wifi. +There is an important distinction between these two - invalidated cache items may still be yielded +to a client request if the communication to the main Kanidm server is not possible. For example, you +may have your laptop in a park without wifi. -Clearing the cache, however, completely wipes all local data about all accounts and groups. -If you are relying on this cached (but invalid) data, you may lose access to your accounts until -other communication issues have been resolved. \ No newline at end of file +Clearing the cache, however, completely wipes all local data about all accounts and groups. If you +are relying on this cached (but invalid) data, you may lose access to your accounts until other +communication issues have been resolved. diff --git a/kanidm_book/src/integrations/radius.md b/kanidm_book/src/integrations/radius.md index 65b12e85e..bd23bb14a 100644 --- a/kanidm_book/src/integrations/radius.md +++ b/kanidm_book/src/integrations/radius.md @@ -1,14 +1,13 @@ # RADIUS -Remote Authentication Dial In User Service (RADIUS) is a network protocol -that is commonly used to authenticate Wi-Fi devices or Virtual Private -Networks (VPNs). While it should not be a sole point of trust/authentication -to an identity, it's still an important control for protecting network resources. +Remote Authentication Dial In User Service (RADIUS) is a network protocol that is commonly used to +authenticate Wi-Fi devices or Virtual Private Networks (VPNs). While it should not be a sole point +of trust/authentication to an identity, it's still an important control for protecting network +resources. -Kanidm has a philosophy that each account can have multiple credentials which -are related to their devices, and limited to specific resources. RADIUS is -no exception and has a separate credential for each account to use for -RADIUS access. +Kanidm has a philosophy that each account can have multiple credentials which are related to their +devices, and limited to specific resources. RADIUS is no exception and has a separate credential for +each account to use for RADIUS access. ## Disclaimer @@ -16,106 +15,103 @@ It's worth noting some disclaimers about Kanidm's RADIUS integration. 
### One Credential - One Account -Kanidm normally attempts to have credentials for each *device* and -*application* rather than the legacy model of one to one. +Kanidm normally attempts to have credentials for each _device_ and _application_ rather than the +legacy model of one to one. -The RADIUS protocol is only able to attest a *single* password based credential in an -authentication attempt, which limits us to storing a single RADIUS password credential -per account. However, despite this limitation, it still greatly improves the -situation by isolating the RADIUS credential from the primary or application -credentials of the account. This solves many common security concerns around -credential loss or disclosure, and prevents rogue devices from locking out -accounts as they attempt to authenticate to Wi-Fi with expired credentials. +The RADIUS protocol is only able to attest a _single_ password based credential in an authentication +attempt, which limits us to storing a single RADIUS password credential per account. However, +despite this limitation, it still greatly improves the situation by isolating the RADIUS credential +from the primary or application credentials of the account. This solves many common security +concerns around credential loss or disclosure, and prevents rogue devices from locking out accounts +as they attempt to authenticate to Wi-Fi with expired credentials. -Alternatelly, Kanidm supports mapping users with special configuration of certificates -allowing some systems to use EAP-TLS for RADIUS authentication. This returns to the -"per device" credential model. +Alternatelly, Kanidm supports mapping users with special configuration of certificates allowing some +systems to use EAP-TLS for RADIUS authentication. This returns to the "per device" credential model. ### Cleartext Credential Storage -RADIUS offers many different types of tunnels and authentication mechanisms. -However, most client devices "out of the box" only attempt a single type when -a WPA2-Enterprise network is selected: MSCHAPv2 with PEAP. This is a -challenge-response protocol that requires clear text or Windows NT LAN +RADIUS offers many different types of tunnels and authentication mechanisms. However, most client +devices "out of the box" only attempt a single type when a WPA2-Enterprise network is selected: +MSCHAPv2 with PEAP. This is a challenge-response protocol that requires clear text or Windows NT LAN Manager (NTLM) credentials. -As MSCHAPv2 with PEAP is the only practical, universal RADIUS-type supported -on all devices with minimal configuration, we consider it imperative -that it MUST be supported as the default. Esoteric RADIUS types can be used -as well, but this is up to administrators to test and configure. +As MSCHAPv2 with PEAP is the only practical, universal RADIUS-type supported on all devices with +minimal configuration, we consider it imperative that it MUST be supported as the default. Esoteric +RADIUS types can be used as well, but this is up to administrators to test and configure. -Due to this requirement, we must store the RADIUS material as clear text or -NTLM hashes. It would be silly to think that NTLM is secure as it relies on -the obsolete and deprecated MD4 cryptographic hash, providing only an -illusion of security. +Due to this requirement, we must store the RADIUS material as clear text or NTLM hashes. It would be +silly to think that NTLM is secure as it relies on the obsolete and deprecated MD4 cryptographic +hash, providing only an illusion of security. 
This means Kanidm stores RADIUS credentials in the database as clear text. We believe this is a reasonable decision and is a low risk to security because: -* The access controls around RADIUS secrets by default are strong, limited - to only self-account read and RADIUS-server read. -* As RADIUS credentials are separate from the primary account credentials and - have no other rights, their disclosure is not going to lead to a full - account compromise. -* Having the credentials in clear text allows a better user experience as - clients can view the credentials at any time to enroll further devices. +- The access controls around RADIUS secrets by default are strong, limited to only self-account read + and RADIUS-server read. +- As RADIUS credentials are separate from the primary account credentials and have no other rights, + their disclosure is not going to lead to a full account compromise. +- Having the credentials in clear text allows a better user experience as clients can view the + credentials at any time to enroll further devices. ### Service Accounts Do Not Have Radius Access Due to the design of service accounts, they do not have access to radius for credential assignemnt. -If you require RADIUS usage with a service account you *may* need to use EAP-TLS or some other +If you require RADIUS usage with a service account you _may_ need to use EAP-TLS or some other authentication method. ## Account Credential Configuration -For an account to use RADIUS they must first generate a RADIUS secret unique -to that account. By default, all accounts can self-create this secret. +For an account to use RADIUS they must first generate a RADIUS secret unique to that account. By +default, all accounts can self-create this secret. - kanidm person radius generate_secret --name william william - kanidm person radius show_secret --name william william +```bash +kanidm person radius generate_secret --name william william +kanidm person radius show_secret --name william william +``` ## Account Group Configuration -In Kanidm, accounts which can authenticate to RADIUS must be a member -of an allowed group. This allows you to define which users or groups may use -a Wi-Fi or VPN infrastructure, and provides a path for revoking access to the -resources through group management. The key point of this is that service -accounts should not be part of this group: +In Kanidm, accounts which can authenticate to RADIUS must be a member of an allowed group. This +allows you to define which users or groups may use a Wi-Fi or VPN infrastructure, and provides a +path for revoking access to the resources through group management. The key point of this is that +service accounts should not be part of this group: - kanidm group create --name idm_admin radius_access_allowed - kanidm group add_members --name idm_admin radius_access_allowed william +```bash +kanidm group create --name idm_admin radius_access_allowed +kanidm group add_members --name idm_admin radius_access_allowed william +``` ## RADIUS Server Service Account -To read these secrets, the RADIUS server requires an account with the -correct privileges. This can be created and assigned through the group -"idm_radius_servers", which is provided by default. +To read these secrets, the RADIUS server requires an account with the correct privileges. This can +be created and assigned through the group "idm_radius_servers", which is provided by default. 
First, create the service account and add it to the group: -```shell +```bash kanidm service-account create --name admin radius_service_account "Radius Service Account" kanidm group add_members --name admin idm_radius_servers radius_service_account ``` Now reset the account password, using the `admin` account: -```shell +```bash kanidm service-account credential generate-pw --name admin radius_service_account ``` ## Deploying a RADIUS Container -We provide a RADIUS container that has all the needed integrations. -This container requires some cryptographic material, with the following files being in `/etc/raddb/certs`. (Modifiable in the configuration) - -| filename | description | -| --- | --- | -| ca.pem | The signing CA of the RADIUS certificate | -| dh.pem | The output of `openssl dhparam -in ca.pem -out ./dh.pem 2048` | -| cert.pem | The certificate for the RADIUS server | -| key.pem | The signing key for the RADIUS certificate | +We provide a RADIUS container that has all the needed integrations. This container requires some +cryptographic material, with the following files being in `/etc/raddb/certs`. (Modifiable in the +configuration) + +| filename | description | +| -------- | ------------------------------------------------------------- | +| ca.pem | The signing CA of the RADIUS certificate | +| dh.pem | The output of `openssl dhparam -in ca.pem -out ./dh.pem 2048` | +| cert.pem | The certificate for the RADIUS server | +| key.pem | The signing key for the RADIUS certificate | The configuration file (`/data/kanidm`) has the following template: @@ -156,12 +152,10 @@ radius_clients = [ # radius_dh_path = "/etc/raddb/certs/dh.pem" # the CA certificate # radius_ca_path = "/etc/raddb/certs/ca.pem" - ``` ## A fully configured example - ```toml url = "https://example.com" @@ -186,6 +180,7 @@ radius_clients = [ { name = "docker" , ipaddr = "172.17.0.0/16", secret = "testing123" }, ] ``` + ## Moving to Production To expose this to a Wi-Fi infrastructure, add your NAS in the configuration: @@ -199,11 +194,10 @@ radius_clients = [ Then re-create/run your docker instance and expose the ports by adding `-p 1812:1812 -p 1812:1812/udp` to the command. -If you have any issues, check the logs from the RADIUS output, as they tend -to indicate the cause of the problem. To increase the logging level you can -re-run your environment with debug enabled: +If you have any issues, check the logs from the RADIUS output, as they tend to indicate the cause of +the problem. To increase the logging level you can re-run your environment with debug enabled: -```shell +```bash docker rm radiusd docker run --name radiusd \ -e DEBUG=True \ @@ -214,8 +208,7 @@ docker run --name radiusd \ kanidm/radius:latest ``` -Note: the RADIUS container *is* configured to provide -[Tunnel-Private-Group-ID](https://freeradius.org/rfc/rfc2868.html#Tunnel-Private-Group-ID), -so if you wish to use Wi-Fi-assigned VLANs on your infrastructure, you can -assign these by groups in the configuration file as shown in the above examples. - +Note: the RADIUS container _is_ configured to provide +[Tunnel-Private-Group-ID](https://freeradius.org/rfc/rfc2868.html#Tunnel-Private-Group-ID), so if +you wish to use Wi-Fi-assigned VLANs on your infrastructure, you can assign these by groups in the +configuration file as shown in the above examples. 
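+
+Before moving real clients over, it can help to confirm authentication end to end. A minimal
+check is sketched below; it assumes the FreeRADIUS `radtest` utility is available, that a
+`radius_clients` entry matches the address you test from (for example the `docker` entry with
+the `testing123` secret above), and that the account has already generated a RADIUS secret.
+
+```bash
+# radtest <user> <radius-secret> <server> <nas-port-number> <client-secret>
+radtest william <radius_secret_for_william> 127.0.0.1 0 testing123
+```
+
+An `Access-Accept` reply shows the container can read the account's RADIUS credential, while an
+`Access-Reject` usually points to group membership or an incorrect secret.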
diff --git a/kanidm_book/src/integrations/traefik.md b/kanidm_book/src/integrations/traefik.md index c4d0cafc2..816834474 100644 --- a/kanidm_book/src/integrations/traefik.md +++ b/kanidm_book/src/integrations/traefik.md @@ -1,26 +1,32 @@ -# Traefik +# Traefik -Traefik is a flexible HTTP reverse proxy webserver that can be integrated with Docker to allow dynamic configuration -and to automatically use LetsEncrypt to provide valid TLS certificates. -We can leverage this in the setup of Kanidm by specifying the configuration of Kanidm and Traefik in the same [Docker Compose configuration](https://docs.docker.com/compose/). +Traefik is a flexible HTTP reverse proxy webserver that can be integrated with Docker to allow +dynamic configuration and to automatically use LetsEncrypt to provide valid TLS certificates. We can +leverage this in the setup of Kanidm by specifying the configuration of Kanidm and Traefik in the +same [Docker Compose configuration](https://docs.docker.com/compose/). ## Example setup -Create a new directory and copy the following YAML file into it as `docker-compose.yml`. -Edit the YAML to update the LetsEncrypt account email for your domain and the FQDN where Kanidm will be made available. -Ensure you adjust this file or Kanidm's configuration to have a matching HTTPS port; the line `traefik.http.services.kanidm.loadbalancer.server.port=8443` sets this on the Traefik side. -> **NOTE** You will need to generate self-signed certificates for Kanidm, and copy the configuration into the `kanidm_data` volume. Some instructions are available in the "Installing the Server" section of this book. +Create a new directory and copy the following YAML file into it as `docker-compose.yml`. Edit the +YAML to update the LetsEncrypt account email for your domain and the FQDN where Kanidm will be made +available. Ensure you adjust this file or Kanidm's configuration to have a matching HTTPS port; the +line `traefik.http.services.kanidm.loadbalancer.server.port=8443` sets this on the Traefik side. + +> **NOTE** You will need to generate self-signed certificates for Kanidm, and copy the configuration +> into the `kanidm_data` volume. Some instructions are available in the "Installing the Server" +> section of this book. `docker-compose.yml` + ```yaml version: "3.4" - + services: traefik: image: traefik:v2.6 container_name: traefik command: - - "--certificatesresolvers.http.acme.email=admin@example.com" + - "--certificatesresolvers.http.acme.email=admin@example.com" - "--certificatesresolvers.http.acme.storage=/letsencrypt/acme.json" - "--certificatesresolvers.http.acme.tlschallenge=true" - "--entrypoints.websecure.address=:443" @@ -37,7 +43,7 @@ services: - "443:443" kanidm: container_name: kanidm - image: kanidm/server:devel + image: kanidm/server:devel restart: unless-stopped volumes: - kanidm_data:/data @@ -53,4 +59,4 @@ volumes: kanidm_data: {} ``` -Finally you may run `docker-compose up` to start up both Kanidm and Traefik. +Finally you may run `docker-compose up` to start up both Kanidm and Traefik. diff --git a/kanidm_book/src/intro.md b/kanidm_book/src/intro.md index bc1e283ca..c2a82e8d8 100644 --- a/kanidm_book/src/intro.md +++ b/kanidm_book/src/intro.md @@ -1,37 +1,33 @@ # Introduction to Kanidm -Kanidm is an identity management server, acting as an authority on account information, authentication -and authorisation within a technical environment. 
+Kanidm is an identity management server, acting as an authority on account information, +authentication and authorisation within a technical environment. The intent of the Kanidm project is to: -* Provide a single truth source for accounts, groups and privileges. -* Enable integrations to systems and services so they can authenticate accounts. -* Make system, network, application and web authentication easy and accessible. -* Secure and reliable by default, aiming for the highest levels of quality. +- Provide a single truth source for accounts, groups and privileges. +- Enable integrations to systems and services so they can authenticate accounts. +- Make system, network, application and web authentication easy and accessible. +- Secure and reliable by default, aiming for the highest levels of quality. -{{#template - templates/kani-warning.md - imagepath=images - title=NOTICE - text=Kanidm is still a work in progress. Many features will evolve and change over time which may not be suitable for all users. -}} +{{#template templates/kani-warning.md imagepath=images title=NOTICE text=Kanidm is still a work in +progress. Many features will evolve and change over time which may not be suitable for all users. }} ## Why do I want Kanidm? -Whether you work in a business, a volunteer organisation, or are an enthusiast who manages -their personal services, you need methods of authenticating and identifying -to your systems, and subsequently, ways to determine what authorisation and privileges you have -while accessing these systems. +Whether you work in a business, a volunteer organisation, or are an enthusiast who manages their +personal services, you need methods of authenticating and identifying to your systems, and +subsequently, ways to determine what authorisation and privileges you have while accessing these +systems. -We've probably all been in workplaces where you end up with multiple accounts on various -systems - one for a workstation, different SSH keys for different tasks, maybe some shared -account passwords. Not only is it difficult for people to manage all these different credentials -and what they have access to, but it also means that sometimes these credentials have more -access or privilege than they require. +We've probably all been in workplaces where you end up with multiple accounts on various systems - +one for a workstation, different SSH keys for different tasks, maybe some shared account passwords. +Not only is it difficult for people to manage all these different credentials and what they have +access to, but it also means that sometimes these credentials have more access or privilege than +they require. -Kanidm acts as a central authority of accounts in your organisation and allows each account to associate -many devices and credentials with different privileges. An example of how this looks: +Kanidm acts as a central authority of accounts in your organisation and allows each account to +associate many devices and credentials with different privileges. An example of how this looks: ┌──────────────────┐ ┌┴─────────────────┐│ @@ -78,19 +74,20 @@ many devices and credentials with different privileges. An example of how this l │ You │ └──────────┘ -A key design goal is that you authenticate with your device in some manner, and then your device will -continue to authenticate you in the future. Each of these different types of credentials, from SSH keys, -application passwords, to RADIUS passwords and others, are "things your device knows". 
Each password -has limited capability, and can only access that exact service or resource. +A key design goal is that you authenticate with your device in some manner, and then your device +will continue to authenticate you in the future. Each of these different types of credentials, from +SSH keys, application passwords, to RADIUS passwords and others, are "things your device knows". +Each password has limited capability, and can only access that exact service or resource. -This helps improve security; a compromise of the service or the network transmission does not -grant you unlimited access to your account and all its privileges. As the credentials are specific -to a device, if a device is compromised you can revoke its associated credentials. If a -specific service is compromised, only the credentials for that service need to be revoked. +This helps improve security; a compromise of the service or the network transmission does not grant +you unlimited access to your account and all its privileges. As the credentials are specific to a +device, if a device is compromised you can revoke its associated credentials. If a specific service +is compromised, only the credentials for that service need to be revoked. -Due to this model, and the design of Kanidm to centre the device and to have more per-service credentials, -workflows and automation are added or designed to reduce human handling. +Due to this model, and the design of Kanidm to centre the device and to have more per-service +credentials, workflows and automation are added or designed to reduce human handling. ## Library documentation -Looking for the `rustdoc` documentation for the libraries themselves? [Click here!](https://kanidm.com/documentation/) +Looking for the `rustdoc` documentation for the libraries themselves? +[Click here!](https://kanidm.com/documentation/) diff --git a/kanidm_book/src/monitoring.md b/kanidm_book/src/monitoring.md index 9e0be8a57..c05d422db 100644 --- a/kanidm_book/src/monitoring.md +++ b/kanidm_book/src/monitoring.md @@ -1,17 +1,17 @@ # Monitoring the platform -The monitoring design of Kanidm is still very much in its infancy - +The monitoring design of Kanidm is still very much in its infancy - [take part in the dicussion at github.com/kanidm/kanidm/issues/216](https://github.com/kanidm/kanidm/issues/216). ## kanidmd -kanidmd currently responds to HTTP GET requests at the `/status` endpoint with a JSON object of +kanidmd currently responds to HTTP GET requests at the `/status` endpoint with a JSON object of either "true" or "false". `true` indicates that the platform is responding to requests. 
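+
+A monitoring probe can therefore be a plain HTTP GET. For example, against the hypothetical
+`example.com` deployment used in the table below:
+
+```bash
+# A healthy server responds with: true
+curl https://example.com/status
+```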
-| URL | `/status` | -| --- | --- | -| Example URL | `https://example.com/status` | -| Expected response | One of either `true` or `false` (without quotes) | -| Additional Headers | x-kanidm-opid -| Content Type | application/json | -| Cookies | kanidm-session | +| URL | `/status` | +| ------------------ | ------------------------------------------------ | +| Example URL | `https://example.com/status` | +| Expected response | One of either `true` or `false` (without quotes) | +| Additional Headers | x-kanidm-opid | +| Content Type | application/json | +| Cookies | kanidm-session | diff --git a/kanidm_book/src/packaging.md b/kanidm_book/src/packaging.md index b5a7c0346..6acd76904 100644 --- a/kanidm_book/src/packaging.md +++ b/kanidm_book/src/packaging.md @@ -2,14 +2,14 @@ Packages are known to exist for the following distributions: - - [Arch Linux](https://aur.archlinux.org/packages?O=0&K=kanidm) - - [OpenSUSE](https://software.opensuse.org/search?baseproject=ALL&q=kanidm) - - [NixOS](https://search.nixos.org/packages?sort=relevance&type=packages&query=kanidm) +- [Arch Linux](https://aur.archlinux.org/packages?O=0&K=kanidm) +- [OpenSUSE](https://software.opensuse.org/search?baseproject=ALL&q=kanidm) +- [NixOS](https://search.nixos.org/packages?sort=relevance&type=packages&query=kanidm) To ease packaging for your distribution, the `Makefile` has targets for sets of binary outputs. - -| Target | Description | -| --- | --- | + +| Target | Description | +| ---------------------- | --------------------------- | | `release/kanidm` | Kanidm's CLI | | `release/kanidmd` | The server daemon | | `release/kanidm-ssh` | SSH-related utilities | diff --git a/kanidm_book/src/packaging_debs.md b/kanidm_book/src/packaging_debs.md index 0e810c781..3ef50e46c 100644 --- a/kanidm_book/src/packaging_debs.md +++ b/kanidm_book/src/packaging_debs.md @@ -5,7 +5,8 @@ This happens in Docker currently, and here's some instructions for doing it for Ubuntu: 1. Start in the root directory of the repository. -2. Run `./platform/debian/ubuntu_docker_builder.sh` This'll start a container, mounting the repository in `~/kanidm/`. +2. Run `./platform/debian/ubuntu_docker_builder.sh` This'll start a container, mounting the + repository in `~/kanidm/`. 3. Install the required dependencies by running `./platform/debian/install_deps.sh`. 4. Building packages uses make, get a list by running `make -f ./platform/debian/Makefile help` @@ -23,12 +24,16 @@ debs/all: build all the debs ``` -5. So if you wanted to build the package for the Kanidm CLI, run `make -f ./platform/debian/Makefile debs/kanidm`. -6. The package will be copied into the `target` directory of the repository on the docker host - not just in the container. +5. So if you wanted to build the package for the Kanidm CLI, run + `make -f ./platform/debian/Makefile debs/kanidm`. +6. The package will be copied into the `target` directory of the repository on the docker host - not + just in the container. ## Adding a package -There's a set of default configuration files in `packaging/`; if you want to add a package definition, add a folder with the package name and then files in there will be copied over the top of the ones from `packaging/` on build. +There's a set of default configuration files in `packaging/`; if you want to add a package +definition, add a folder with the package name and then files in there will be copied over the top +of the ones from `packaging/` on build. 
You'll need two custom files at minimum: @@ -38,14 +43,14 @@ You'll need two custom files at minimum: There's a lot of other files that can go into a .deb, some handy ones are: | Filename | What it does | -| --- | --- | +| -------- | ------------------------------------------------------------------------ | | preinst | Runs before installation occurs | | postrm | Runs after removal happens | | prerm | Runs before removal happens - handy to shut down services. | | postinst | Runs after installation occurs - we're using that to show notes to users | - ## Some Debian packaging links -* [DH reference](https://www.debian.org/doc/manuals/maint-guide/dreq.en.html) - Explains what needs to be done for packaging (mostly). -* [Reference for what goes in control files](https://www.debian.org/doc/debian-policy/ch-controlfields) \ No newline at end of file +- [DH reference](https://www.debian.org/doc/manuals/maint-guide/dreq.en.html) - Explains what needs + to be done for packaging (mostly). +- [Reference for what goes in control files](https://www.debian.org/doc/debian-policy/ch-controlfields) diff --git a/kanidm_book/src/password_quality.md b/kanidm_book/src/password_quality.md index 86b4391e4..17f37d524 100644 --- a/kanidm_book/src/password_quality.md +++ b/kanidm_book/src/password_quality.md @@ -1,32 +1,33 @@ # Password Quality and Badlisting -Kanidm embeds a set of tools to help your users use and create strong passwords. -This is important as not all user types will require multi-factor authentication (MFA) -for their roles, but compromised accounts still pose a risk. There may also be deployment -or other barriers to a site rolling out sitewide MFA. +Kanidm embeds a set of tools to help your users use and create strong passwords. This is important +as not all user types will require multi-factor authentication (MFA) for their roles, but +compromised accounts still pose a risk. There may also be deployment or other barriers to a site +rolling out sitewide MFA. ## Quality Checking -Kanidm enforces that all passwords are checked by the library "[zxcvbn](https://github.com/dropbox/zxcvbn)". -This has a large number of checks for password quality. It also provides constructive feedback to users on how -to improve their passwords if they are rejected. +Kanidm enforces that all passwords are checked by the library +"[zxcvbn](https://github.com/dropbox/zxcvbn)". This has a large number of checks for password +quality. It also provides constructive feedback to users on how to improve their passwords if they +are rejected. -Some things that zxcvbn looks for is use of the account name or email in the password, common passwords, -low entropy passwords, dates, reverse words and more. +Some things that zxcvbn looks for is use of the account name or email in the password, common +passwords, low entropy passwords, dates, reverse words and more. This library can not be disabled - all passwords in Kanidm must pass this check. ## Password Badlisting -This is the process of configuring a list of passwords to exclude from being able to be used. -This is especially useful if a specific business has been notified of compromised accounts, allowing -you to maintain a list of customised excluded passwords. +This is the process of configuring a list of passwords to exclude from being able to be used. This +is especially useful if a specific business has been notified of compromised accounts, allowing you +to maintain a list of customised excluded passwords. 
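+
+Such a customised list is uploaded with the command covered later in this chapter. A hypothetical
+example, assuming a plain text file with one password per line:
+
+```bash
+# One banned or breached password per line.
+printf '%s\n' 'ExampleCorp2023!' 'Welcome-to-ExampleCorp' > /tmp/site-badlist.txt
+kanidm system pw-badlist upload "/tmp/site-badlist.txt"
+```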
-The other value to this feature is being able to badlist common passwords that zxcvbn does not detect, or
-from other large scale password compromises.
+The other value to this feature is being able to badlist common passwords that zxcvbn does not
+detect, or from other large scale password compromises.
 
-By default we ship with a preconfigured badlist that is updated over time as new password breach lists are
-made available.
+By default we ship with a preconfigured badlist that is updated over time as new password breach
+lists are made available.
 
 The password badlist by default is append only, meaning it can only grow, but will never remove
 passwords previously considered breached.
 
@@ -35,18 +36,21 @@ passwords previously considered breached.
 
 You can display the current badlist with:
 
-    kanidm system pw-badlist show
+```bash
+kanidm system pw-badlist show
+```
 
 You can update your own badlist with:
 
-    kanidm system pw-badlist upload "path/to/badlist" [...]
+```bash
+kanidm system pw-badlist upload "path/to/badlist" [...]
+```
 
 Multiple bad lists can be listed and uploaded at once. These are preprocessed to identify and remove
-passwords that zxcvbn and our password rules would already have eliminated. That helps to make the bad
-list more efficent to operate over at run time.
+passwords that zxcvbn and our password rules would already have eliminated. That helps to make the
+bad list more efficient to operate over at run time.
 
 ## Password Rotation
 
-Kanidm will never support this "anti-feature". Password rotation encourages poor password hygiene and
-is not shown to prevent any attacks.
-
+Kanidm will never support this "anti-feature". Password rotation encourages poor password hygiene
+and is not shown to prevent any attacks.
diff --git a/kanidm_book/src/posix_accounts.md b/kanidm_book/src/posix_accounts.md
index 052d11c98..7ef6853cf 100644
--- a/kanidm_book/src/posix_accounts.md
+++ b/kanidm_book/src/posix_accounts.md
@@ -1,73 +1,69 @@
 # POSIX Accounts and Groups
 
-Kanidm has features that enable its accounts and groups to be consumed on
-POSIX-like machines, such as Linux, FreeBSD, or others. Both service accounts
-and person accounts can be used on POSIX systems.
+Kanidm has features that enable its accounts and groups to be consumed on POSIX-like machines, such
+as Linux, FreeBSD, or others. Both service accounts and person accounts can be used on POSIX
+systems.
 
 ## Notes on POSIX Features
 
-Many design decisions have been made in the POSIX features
-of Kanidm that are intended to make distributed systems easier to manage and
-client systems more secure.
+Many design decisions have been made in the POSIX features of Kanidm that are intended to make
+distributed systems easier to manage and client systems more secure.
 
 ### UID and GID Numbers
 
-In Kanidm there is no difference between a UID and a GID number. On most UNIX systems
-a user will create all files with a primary user and group. The primary group is
-effectively equivalent to the permissions of the user. It is very easy to see scenarios
-where someone may change the account to have a shared primary group (ie `allusers`),
-but without changing the umask on all client systems. This can cause users' data to be
-compromised by any member of the same shared group.
+In Kanidm there is no difference between a UID and a GID number. On most UNIX systems a user will
+create all files with a primary user and group. The primary group is effectively equivalent to the
+permissions of the user. 
It is very easy to see scenarios where someone may change the account to
+have a shared primary group (ie `allusers`), but without changing the umask on all client systems.
+This can cause users' data to be compromised by any member of the same shared group.
 
-To prevent this, many systems create a "user private group", or UPG. This group has the
-GID number matching the UID of the user, and the user sets their primary
-group ID to the GID number of the UPG.
+To prevent this, many systems create a "user private group", or UPG. This group has the GID number
+matching the UID of the user, and the user sets their primary group ID to the GID number of the UPG.
 
-As there is now an equivalence between the UID and GID number of the user and the UPG,
-there is no benefit in separating these values. As a result Kanidm accounts *only*
-have a GID number, which is also considered to be its UID number as well. This has the benefit
-of preventing the accidental creation of a separate group that has an overlapping GID number
-(the `uniqueness` attribute of the schema will block the creation).
+As there is now an equivalence between the UID and GID number of the user and the UPG, there is no
+benefit in separating these values. As a result Kanidm accounts _only_ have a GID number, which is
+also considered to be its UID number. This has the benefit of preventing the accidental
+creation of a separate group that has an overlapping GID number (the `uniqueness` attribute of the
+schema will block the creation).
 
 ### UPG Generation
 
-Due to the requirement that a user have a UPG for security, many systems create these as
-two independent items. For example in /etc/passwd and /etc/group:
+Due to the requirement that a user have a UPG for security, many systems create these as two
+independent items. For example in /etc/passwd and /etc/group:
 
-    # passwd
-    william:x:654401105:654401105::/home/william:/bin/zsh
-    # group
-    william:x:654401105:
+```
+# passwd
+william:x:654401105:654401105::/home/william:/bin/zsh
+# group
+william:x:654401105:
+```
 
-Other systems like FreeIPA use a plugin that generates a UPG as a seperate group entry on
-creation of the account. This means there are two entries for an account, and they must
-be kept in lock-step.
+Other systems like FreeIPA use a plugin that generates a UPG as a separate group entry on creation
+of the account. This means there are two entries for an account, and they must be kept in lock-step.
 
-Kanidm does neither of these. As the GID number of the user must be unique, and a user
-implies the UPG must exist, we can generate UPG's on-demand from the account.
-This has a single side effect - that you are unable to add any members to a
-UPG - given the nature of a user private group, this is the point.
+Kanidm does neither of these. As the GID number of the user must be unique, and a user implies the
+UPG must exist, we can generate UPGs on-demand from the account. This has a single side effect -
+that you are unable to add any members to a UPG - given the nature of a user private group, this is
+the point.
 
 ### GID Number Generation
 
-Kanidm will have asynchronous replication as a feature between writable
-database servers. In this case, we need to be able to allocate stable and reliable
-GID numbers to accounts on replicas that may not be in continual communication.
+Kanidm will have asynchronous replication as a feature between writable database servers. 
In this +case, we need to be able to allocate stable and reliable GID numbers to accounts on replicas that +may not be in continual communication. To do this, we use the last 32 bits of the account or group's UUID to generate the GID number. -A valid concern is the possibility of duplication in the lower 32 bits. Given the -birthday problem, if you have 77,000 groups and accounts, you have a 50% chance -of duplication. With 50,000 you have a 20% chance, 9,300 you have a 1% chance and -with 2900 you have a 0.1% chance. +A valid concern is the possibility of duplication in the lower 32 bits. Given the birthday problem, +if you have 77,000 groups and accounts, you have a 50% chance of duplication. With 50,000 you have a +20% chance, 9,300 you have a 1% chance and with 2900 you have a 0.1% chance. -We advise that if you have a site with >10,000 users you should use an external system -to allocate GID numbers serially or consistently to avoid potential duplication events. +We advise that if you have a site with >10,000 users you should use an external system to allocate +GID numbers serially or consistently to avoid potential duplication events. -This design decision is made as most small sites will benefit greatly from the -auto-allocation policy and the simplicity of its design, while larger enterprises -will already have IDM or business process applications for HR/People that are -capable of supplying this kind of data in batch jobs. +This design decision is made as most small sites will benefit greatly from the auto-allocation +policy and the simplicity of its design, while larger enterprises will already have IDM or business +process applications for HR/People that are capable of supplying this kind of data in batch jobs. ## Enabling POSIX Attributes @@ -78,48 +74,58 @@ To enable POSIX account features and IDs on an account, you require the permissi You can then use the following command to enable POSIX extensions on a person or service account. - kanidm [person OR service-account] posix set --name idm_admin [--shell SHELL --gidnumber GID] +```bash +kanidm [person OR service-account] posix set --name idm_admin [--shell SHELL --gidnumber GID] - kanidm person posix set --name idm_admin demo_user - kanidm person posix set --name idm_admin demo_user --shell /bin/zsh - kanidm person posix set --name idm_admin demo_user --gidnumber 2001 +kanidm person posix set --name idm_admin demo_user +kanidm person posix set --name idm_admin demo_user --shell /bin/zsh +kanidm person posix set --name idm_admin demo_user --gidnumber 2001 - kanidm service-account posix set --name idm_admin demo_account - kanidm service-account posix set --name idm_admin demo_account --shell /bin/zsh - kanidm service-account posix set --name idm_admin demo_account --gidnumber 2001 +kanidm service-account posix set --name idm_admin demo_account +kanidm service-account posix set --name idm_admin demo_account --shell /bin/zsh +kanidm service-account posix set --name idm_admin demo_account --gidnumber 2001 +``` You can view the accounts POSIX token details with: - kanidm person posix show --name anonymous demo_user - kanidm service-account posix show --name anonymous demo_account +```bash +kanidm person posix show --name anonymous demo_user +kanidm service-account posix show --name anonymous demo_account +``` ### Enabling POSIX Attributes on Groups -To enable POSIX group features and IDs on an account, you require the permission `idm_group_unix_extend_priv`. -This is provided to `idm_admins` in the default database. 
+To enable POSIX group features and IDs on an account, you require the permission +`idm_group_unix_extend_priv`. This is provided to `idm_admins` in the default database. You can then use the following command to enable POSIX extensions: - kanidm group posix set --name idm_admin [--gidnumber GID] - kanidm group posix set --name idm_admin demo_group - kanidm group posix set --name idm_admin demo_group --gidnumber 2001 +```bash +kanidm group posix set --name idm_admin [--gidnumber GID] +kanidm group posix set --name idm_admin demo_group +kanidm group posix set --name idm_admin demo_group --gidnumber 2001 +``` You can view the accounts POSIX token details with: - kanidm group posix show --name anonymous demo_group +```bash +kanidm group posix show --name anonymous demo_group +``` -POSIX-enabled groups will supply their members as POSIX members to clients. There is no -special or separate type of membership for POSIX members required. +POSIX-enabled groups will supply their members as POSIX members to clients. There is no special or +separate type of membership for POSIX members required. ## Troubleshooting Common Issues ### subuid conflicts with Podman -Due to the way that Podman operates, in some cases using the Kanidm client inside non-root containers -with Kanidm accounts may fail with an error such as: +Due to the way that Podman operates, in some cases using the Kanidm client inside non-root +containers with Kanidm accounts may fail with an error such as: - ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid +``` +ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid +``` -This is a fault in Podman and how it attempts to provide non-root containers, when UID/GIDs -are greater than 65535. In this case you may manually allocate your users GID number to be -between 1000 - 65535, which may not trigger the fault. +This is a fault in Podman and how it attempts to provide non-root containers, when UID/GIDs are +greater than 65535. In this case you may manually allocate your users GID number to be between +1000 - 65535, which may not trigger the fault. diff --git a/kanidm_book/src/prepare_the_server.md b/kanidm_book/src/prepare_the_server.md index a48627408..0ab7a7b17 100644 --- a/kanidm_book/src/prepare_the_server.md +++ b/kanidm_book/src/prepare_the_server.md @@ -2,23 +2,30 @@ ## Software Installation Method -> **NOTE** Our preferred deployment method is in containers, and this documentation assumes you're running in docker. Kanidm will alternately run as a daemon/service, and server builds are available for multiple platforms if you prefer this option. +> **NOTE** Our preferred deployment method is in containers, and this documentation assumes you're +> running in docker. Kanidm will alternately run as a daemon/service, and server builds are +> available for multiple platforms if you prefer this option. We provide docker images for the server components. They can be found at: - - - - +- +- You can fetch these by running the commands: - docker pull kanidm/server:x86_64_latest - docker pull kanidm/radius:latest +```bash +docker pull kanidm/server:x86_64_latest +docker pull kanidm/radius:latest +``` If you do not meet the [system requirements](#system-requirements) for your CPU you should use: - docker pull kanidm/server:latest +```bash +docker pull kanidm/server:latest +``` -You may need to adjust your example commands throughout this document to suit your desired server type. 
+You may need to adjust your example commands throughout this document to suit your desired server +type. ## Development Version @@ -34,8 +41,10 @@ report issues, we will make every effort to help resolve them. If you are using the x86\_64 cpu-optimised version, you must have a CPU that is from 2013 or newer (Haswell, Ryzen). The following instruction flags are used. - cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2, - bmi, bmi2, f16c, fma, lzcnt, movbe, xsave +``` +cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2, +bmi, bmi2, f16c, fma, lzcnt, movbe, xsave +``` Older or unsupported CPUs may raise a SIGIL (Illegal Instruction) on hardware that is not supported by the project. @@ -45,96 +54,111 @@ In this case, you should use the standard server:latest image. In the future we may apply a baseline of flags as a requirement for x86\_64 for the server:latest image. These flags will be: - cmov, cx8, fxsr, mmx, sse, sse2 +``` +cmov, cx8, fxsr, mmx, sse, sse2 +``` -{{#template - templates/kani-alert.md - imagepath=images - title=Tip - text=You can check your cpu flags on Linux with the command `lscpu` -}} +{{#template templates/kani-alert.md imagepath=images title=Tip text=You can check your cpu flags on +Linux with the command `lscpu` }} #### Memory Kanidm extensively uses memory caching, trading memory consumption to improve parallel throughput. -You should expect to see 64KB of ram per entry in your database, depending on cache tuning and settings. +You should expect to see 64KB of ram per entry in your database, depending on cache tuning and +settings. #### Disk You should expect to use up to 8KB of disk per entry you plan to store. At an estimate 10,000 entry databases will consume 40MB, 100,000 entry will consume 400MB. -For best performance, you should use non-volatile memory express (NVME), or other Flash storage media. +For best performance, you should use non-volatile memory express (NVME), or other Flash storage +media. ## TLS You'll need a volume where you can place configuration, certificates, and the database: - docker volume create kanidmd +```bash +docker volume create kanidmd +``` -You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring -Transport Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained in [why tls](./why_tls.md). In summary, TLS is our root of trust between the -server and clients, and a critical element of ensuring a secure system. +You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring Transport +Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained in +[why tls](./why_tls.md). In summary, TLS is our root of trust between the server and clients, and a +critical element of ensuring a secure system. The key.pem should be a single PEM private key, with no encryption. The file content should be similar to: - -----BEGIN RSA PRIVATE KEY----- - MII... - -----END RSA PRIVATE KEY----- +``` +-----BEGIN RSA PRIVATE KEY----- +MII... +-----END RSA PRIVATE KEY----- +``` The chain.pem is a series of PEM formatted certificates. The leaf certificate, or the certificate -that matches the private key should be the first certificate in the file. This should be followed -by the series of intermediates, and the final certificate should be the CA root. For example: +that matches the private key should be the first certificate in the file. 
This should be followed by +the series of intermediates, and the final certificate should be the CA root. For example: - -----BEGIN CERTIFICATE----- - - -----END CERTIFICATE----- - -----BEGIN CERTIFICATE----- - - -----END CERTIFICATE----- - [ more intermediates if needed ] - -----BEGIN CERTIFICATE----- - - -----END CERTIFICATE----- +``` +-----BEGIN CERTIFICATE----- + +-----END CERTIFICATE----- +-----BEGIN CERTIFICATE----- + +-----END CERTIFICATE----- +[ more intermediates if needed ] +-----BEGIN CERTIFICATE----- + +-----END CERTIFICATE----- +``` -> **HINT** -> If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are already -> correctly formatted as required for Kanidm. +> **HINT** If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are +> already correctly formatted as required for Kanidm. You can validate that the leaf certificate matches the key with the command: - # ECDSA - openssl ec -in key.pem -pubout | openssl sha1 - 1c7e7bf6ef8f83841daeedf16093bda585fc5bb0 - openssl x509 -in chain.pem -noout -pubkey | openssl sha1 - 1c7e7bf6ef8f83841daeedf16093bda585fc5bb0 +```bash +# ECDSA +openssl ec -in key.pem -pubout | openssl sha1 +1c7e7bf6ef8f83841daeedf16093bda585fc5bb0 +openssl x509 -in chain.pem -noout -pubkey | openssl sha1 +1c7e7bf6ef8f83841daeedf16093bda585fc5bb0 - # RSA - # openssl rsa -noout -modulus -in key.pem | openssl sha1 - d2188932f520e45f2e76153fbbaf13f81ea6c1ef - # openssl x509 -noout -modulus -in chain.pem | openssl sha1 - d2188932f520e45f2e76153fbbaf13f81ea6c1ef +# RSA +# openssl rsa -noout -modulus -in key.pem | openssl sha1 +d2188932f520e45f2e76153fbbaf13f81ea6c1ef +# openssl x509 -noout -modulus -in chain.pem | openssl sha1 +d2188932f520e45f2e76153fbbaf13f81ea6c1ef +``` If your chain.pem contains the CA certificate, you can validate this file with the command: - openssl verify -CAfile chain.pem chain.pem +```bash +openssl verify -CAfile chain.pem chain.pem +``` If your chain.pem does not contain the CA certificate (Let's Encrypt chains do not contain the CA for example) then you can validate with this command. - openssl verify -untrusted fullchain.pem fullchain.pem +```bash +openssl verify -untrusted fullchain.pem fullchain.pem +``` -> **NOTE** Here "-untrusted" flag means a list of further certificates in the chain to build up -> to the root is provided, but that the system CA root should be consulted. Verification is NOT bypassed -> or allowed to be invalid. +> **NOTE** Here "-untrusted" flag means a list of further certificates in the chain to build up to +> the root is provided, but that the system CA root should be consulted. Verification is NOT +> bypassed or allowed to be invalid. If these verifications pass you can now use these certificates with Kanidm. 
To put the certificates in place you can use a shell container that mounts the volume such as:
 
-    docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c "cp /work/* /data/"
+```bash
+docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c "cp /work/* /data/"
+```
 
 OR for a shell into the volume:
 
-    docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
-
+```bash
+docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
+```
diff --git a/kanidm_book/src/recycle_bin.md b/kanidm_book/src/recycle_bin.md
index 726784e5f..1ec917bce 100644
--- a/kanidm_book/src/recycle_bin.md
+++ b/kanidm_book/src/recycle_bin.md
@@ -1,25 +1,22 @@
 # Recycle Bin
 
-The recycle bin is a storage of deleted entries from the server. This allows
-recovery from mistakes for a period of time.
+The recycle bin is a storage of deleted entries from the server. This allows recovery from mistakes
+for a period of time.
 
-{{#template
-  templates/kani-warning.md
-  imagepath=images
-  title=Warning!
-  text=The recycle bin is a best effort - when recovering in some cases not everything can be "put back" the way it was. Be sure to check your entries are valid once they have been revived.
-}}
+{{#template\
+templates/kani-warning.md imagepath=images title=Warning! text=The recycle bin is a best effort -
+when recovering in some cases not everything can be "put back" the way it was. Be sure to check your
+entries are valid once they have been revived. }}
 
 ## Where is the Recycle Bin?
 
-The recycle bin is stored as part of your main database - it is included in all
-backups and restores, just like any other data. It is also replicated between
-all servers.
+The recycle bin is stored as part of your main database - it is included in all backups and
+restores, just like any other data. It is also replicated between all servers.
 
 ## How do Things Get Into the Recycle Bin?
 
-Any delete operation of an entry will cause it to be sent to the recycle bin. No
-configuration or specification is required.
+Any delete operation of an entry will cause it to be sent to the recycle bin. No configuration or
+specification is required.
 
 ## How Long Do Items Stay in the Recycle Bin?
 
@@ -29,46 +26,56 @@ Currently they stay up to 1 week before they are removed.
 
 You can display all items in the Recycle Bin with:
 
-    kanidm recycle-bin list --name admin
+```bash
+kanidm recycle-bin list --name admin
+```
 
 You can show a single item with:
 
-    kanidm recycle-bin get --name admin
+```bash
+kanidm recycle-bin get --name admin
+```
 
 An entry can be revived with:
 
-    kanidm recycle-bin revive --name admin
+```bash
+kanidm recycle-bin revive --name admin
+```
 
 ## Edge Cases
 
-The recycle bin is a best effort to restore your data - there are some cases where
-the revived entries may not be the same as their were when they were deleted. This
-generally revolves around reference types such as group membership, or when the reference
-type includes supplemental map data such as the oauth2 scope map type.
+The recycle bin is a best effort to restore your data - there are some cases where the revived
+entries may not be the same as they were when they were deleted. This generally revolves around
+reference types such as group membership, or when the reference type includes supplemental map data
+such as the oauth2 scope map type. 
An example of this data loss is the following steps:
 
-    add user1
-    add group1
-    add user1 as member of group1
-    delete user1
-    delete group1
-    revive user1
-    revive group1
+```
+add user1
+add group1
+add user1 as member of group1
+delete user1
+delete group1
+revive user1
+revive group1
+```
 
-In this series of steps, due to the way that referential integrity is implemented, the
-membership of user1 in group1 would be lost in this process. To explain why:
+In this series of steps, due to the way that referential integrity is implemented, the membership of
+user1 in group1 would be lost in this process. To explain why:
 
-    add user1
-    add group1
-    add user1 as member of group1 // refint between the two established, and memberof added
-    delete user1 // group1 removes member user1 from refint
-    delete group1 // user1 now removes memberof group1 from refint
-    revive user1 // re-add groups based on directmemberof (empty set)
-    revive group1 // no members
+```
+add user1
+add group1
+add user1 as member of group1 // refint between the two established, and memberof added
+delete user1 // group1 removes member user1 from refint
+delete group1 // user1 now removes memberof group1 from refint
+revive user1 // re-add groups based on directmemberof (empty set)
+revive group1 // no members
+```
 
-These issues could be looked at again in the future, but for now we think that deletes of
-groups is rare - we expect recycle bin to save you in "opps" moments, and in a majority
-of cases you may delete a group or a user and then restore them. To handle this series
-of steps requires extra code complexity in how we flag operations. For more,
-see [This issue on github](https://github.com/kanidm/kanidm/issues/177).
+These issues could be looked at again in the future, but for now we think that deletes of groups are
+rare - we expect recycle bin to save you in "oops" moments, and in a majority of cases you may
+delete a group or a user and then restore them. To handle this series of steps requires extra code
+complexity in how we flag operations. For more, see
+[this issue on GitHub](https://github.com/kanidm/kanidm/issues/177).
diff --git a/kanidm_book/src/security_hardening.md b/kanidm_book/src/security_hardening.md
index 3fc90f690..a86de5fd3 100644
--- a/kanidm_book/src/security_hardening.md
+++ b/kanidm_book/src/security_hardening.md
@@ -1,50 +1,51 @@
 # Security Hardening
 
-Kanidm ships with a secure-by-default configuration, however that is only as strong
-as the environment that Kanidm operates in. This could be your container environment
-or your Unix-like system.
+Kanidm ships with a secure-by-default configuration, however that is only as strong as the
+environment that Kanidm operates in. This could be your container environment or your Unix-like
+system.
 
-This chapter will detail a number of warnings and security practices you should
-follow to ensure that Kanidm operates in a secure environment.
+This chapter will detail a number of warnings and security practices you should follow to ensure
+that Kanidm operates in a secure environment.
 
-The main server is a high-value target for a potential attack, as Kanidm serves as
-the authority on identity and authorisation in a network. Compromise of the Kanidm
-server is equivalent to a full-network take over, also known as "game over".
+The main server is a high-value target for a potential attack, as Kanidm serves as the authority on
+identity and authorisation in a network. 
Compromise of the Kanidm server is equivalent to a +full-network take over, also known as "game over". -The unixd resolver is also a high value target as it can be accessed to allow unauthorised -access to a server, to intercept communications to the server, or more. This also must be protected -carefully. +The unixd resolver is also a high value target as it can be accessed to allow unauthorised access to +a server, to intercept communications to the server, or more. This also must be protected carefully. -For this reason, Kanidm's components must be protected carefully. Kanidm avoids many classic -attacks by being developed in a memory safe language, but risks still exist. +For this reason, Kanidm's components must be protected carefully. Kanidm avoids many classic attacks +by being developed in a memory safe language, but risks still exist. ## Startup Warnings -At startup Kanidm will warn you if the environment it is running in is suspicious or -has risks. For example: +At startup Kanidm will warn you if the environment it is running in is suspicious or has risks. For +example: - kanidmd server -c /tmp/server.toml - WARNING: permissions on /tmp/server.toml may not be secure. Should be readonly to running uid. This could be a security risk ... - WARNING: /tmp/server.toml has 'everyone' permission bits in the mode. This could be a security risk ... - WARNING: /tmp/server.toml owned by the current uid, which may allow file permission changes. This could be a security risk ... - WARNING: permissions on ../insecure/ca.pem may not be secure. Should be readonly to running uid. This could be a security risk ... - WARNING: permissions on ../insecure/cert.pem may not be secure. Should be readonly to running uid. This could be a security risk ... - WARNING: permissions on ../insecure/key.pem may not be secure. Should be readonly to running uid. This could be a security risk ... - WARNING: ../insecure/key.pem has 'everyone' permission bits in the mode. This could be a security risk ... - WARNING: DB folder /tmp has 'everyone' permission bits in the mode. This could be a security risk ... +```bash +kanidmd server -c /tmp/server.toml +WARNING: permissions on /tmp/server.toml may not be secure. Should be readonly to running uid. This could be a security risk ... +WARNING: /tmp/server.toml has 'everyone' permission bits in the mode. This could be a security risk ... +WARNING: /tmp/server.toml owned by the current uid, which may allow file permission changes. This could be a security risk ... +WARNING: permissions on ../insecure/ca.pem may not be secure. Should be readonly to running uid. This could be a security risk ... +WARNING: permissions on ../insecure/cert.pem may not be secure. Should be readonly to running uid. This could be a security risk ... +WARNING: permissions on ../insecure/key.pem may not be secure. Should be readonly to running uid. This could be a security risk ... +WARNING: ../insecure/key.pem has 'everyone' permission bits in the mode. This could be a security risk ... +WARNING: DB folder /tmp has 'everyone' permission bits in the mode. This could be a security risk ... +``` Each warning highlights an issue that may exist in your environment. It is not possible for us to -prescribe an exact configuration that may secure your system. This is why we only present -possible risks. +prescribe an exact configuration that may secure your system. This is why we only present possible +risks. 
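+
+If you want to review the points these warnings refer to on your own host, a quick way (assuming a
+Linux system with GNU coreutils, and reusing the example paths from the output above) is to print
+the mode bits and ownership of each path:
+
+```bash
+# Show permissions, owner:group and path for the files named in the warnings.
+stat -c '%a %U:%G %n' /tmp/server.toml ../insecure/key.pem
+# Also check the folder holding the database.
+stat -c '%a %U:%G %n' /tmp
+```
+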
### Should be Read-only to Running UID -Files, such as configuration files, should be read-only to the UID of the Kanidm daemon. If an attacker is -able to gain code execution, they are then unable to modify the configuration to write, or to over-write -files in other locations, or to tamper with the systems configuration. +Files, such as configuration files, should be read-only to the UID of the Kanidm daemon. If an +attacker is able to gain code execution, they are then unable to modify the configuration to write, +or to over-write files in other locations, or to tamper with the systems configuration. -This can be prevented by changing the files ownership to another user, or removing "write" bits -from the group. +This can be prevented by changing the files ownership to another user, or removing "write" bits from +the group. ### 'everyone' Permission Bits in the Mode @@ -57,93 +58,103 @@ configuration, and removing "everyone" bits from the files in question. ### Owned by the Current UID, Which May Allow File Permission Changes -File permissions in UNIX systems are a discretionary access control system, which means the -named UID owner is able to further modify the access of a file regardless of the current -settings. For example: +File permissions in UNIX systems are a discretionary access control system, which means the named +UID owner is able to further modify the access of a file regardless of the current settings. For +example: - [william@amethyst 12:25] /tmp > touch test - [william@amethyst 12:25] /tmp > ls -al test - -rw-r--r-- 1 william wheel 0 29 Jul 12:25 test - [william@amethyst 12:25] /tmp > chmod 400 test - [william@amethyst 12:25] /tmp > ls -al test - -r-------- 1 william wheel 0 29 Jul 12:25 test - [william@amethyst 12:25] /tmp > chmod 644 test - [william@amethyst 12:26] /tmp > ls -al test - -rw-r--r-- 1 william wheel 0 29 Jul 12:25 test +```bash +[william@amethyst 12:25] /tmp > touch test +[william@amethyst 12:25] /tmp > ls -al test +-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test +[william@amethyst 12:25] /tmp > chmod 400 test +[william@amethyst 12:25] /tmp > ls -al test +-r-------- 1 william wheel 0 29 Jul 12:25 test +[william@amethyst 12:25] /tmp > chmod 644 test +[william@amethyst 12:26] /tmp > ls -al test +-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test +``` -Notice that even though the file was set to "read only" to william, and no permission to any -other users, user "william" can change the bits to add write permissions back or permissions -for other users. +Notice that even though the file was set to "read only" to william, and no permission to any other +users, user "william" can change the bits to add write permissions back or permissions for other +users. This can be prevent by making the file owner a different UID than the running process for kanidm. ### A Secure Example -Between these three issues it can be hard to see a possible strategy to secure files, however -one way exists - group read permissions. The most effective method to secure resources for Kanidm -is to set configurations to: +Between these three issues it can be hard to see a possible strategy to secure files, however one +way exists - group read permissions. 
The most effective method to secure resources for Kanidm is to
+set configurations to:
 
-    [william@amethyst 12:26] /etc/kanidm > ls -al server.toml
-    -r--r----- 1 root kanidm 212 28 Jul 16:53 server.toml
+```bash
+[william@amethyst 12:26] /etc/kanidm > ls -al server.toml
+-r--r----- 1 root kanidm 212 28 Jul 16:53 server.toml
+```
 
-The Kanidm server should be run as "kanidm:kanidm" with the appropriate user and user private
-group created on your system. This applies to unixd configuration as well.
+The Kanidm server should be run as "kanidm:kanidm" with the appropriate user and user private group
+created on your system. This applies to unixd configuration as well.
 
 For the database your data folder should be:
 
-    [root@amethyst 12:38] /data/kanidm > ls -al .
-    total 1064
-    drwxrwx--- 3 root kanidm 96 29 Jul 12:38 .
-    -rw-r----- 1 kanidm kanidm 544768 29 Jul 12:38 kanidm.db
+```bash
+[root@amethyst 12:38] /data/kanidm > ls -al .
+total 1064
+drwxrwx--- 3 root kanidm 96 29 Jul 12:38 .
+-rw-r----- 1 kanidm kanidm 544768 29 Jul 12:38 kanidm.db
+```
 
 This means 770 root:kanidm. This allows Kanidm to create new files in the folder, but prevents
 Kanidm from being able to change the permissions of the folder. Because the folder does not have
-"everyone" mode bits, the content of the database is secure because users can now cd/read
-from the directory.
+"everyone" mode bits, the content of the database is secure because users can not cd/read from the
+directory.
 
 Configurations for clients, such as /etc/kanidm/config, should be secured with read-only
 permissions and owned by root:
 
-    [william@amethyst 12:26] /etc/kanidm > ls -al config
-    -r--r--r-- 1 root root 38 10 Jul 10:10 config
-
+```bash
+[william@amethyst 12:26] /etc/kanidm > ls -al config
+-r--r--r-- 1 root root 38 10 Jul 10:10 config
+```
+
 This file should be "everyone"-readable, which is why the bits are defined as such.
 
 > NOTE: Why do you use 440 or 444 modes?
 >
-> A bug exists in the implementation of readonly() in rust that checks this as "does a write
-> bit exist for any user" vs "can the current UID write the file?". This distinction is subtle
-> but it affects the check. We don't believe this is a significant issue though, because
-> setting these to 440 and 444 helps to prevent accidental changes by an administrator anyway
+> A bug exists in the implementation of readonly() in Rust that checks this as "does a write bit
+> exist for any user" vs "can the current UID write the file?". This distinction is subtle but it
+> affects the check. We don't believe this is a significant issue though, because setting these to
+> 440 and 444 helps to prevent accidental changes by an administrator anyway.
 
 ## Running as Non-root in docker
 
-The commands provided in this book will run kanidmd as "root" in the container to make the onboarding
-smoother. However, this is not recommended in production for security reasons.
+The commands provided in this book will run kanidmd as "root" in the container to make the
+onboarding smoother. However, this is not recommended in production for security reasons.
 
-You should allocate unique UID and GID numbers for the service to run as on your host
-system. In this example we use `1000:1000`
+You should allocate unique UID and GID numbers for the service to run as on your host system. In
+this example we use `1000:1000`.
 
-You will need to adjust the permissions on the `/data` volume to ensure that the process
-can manage the files. 
Kanidm requires the ability to write to the `/data` directory to create -the sqlite files. This UID/GID number should match the above. You could consider the following -changes to help isolate these changes: +You will need to adjust the permissions on the `/data` volume to ensure that the process can manage +the files. Kanidm requires the ability to write to the `/data` directory to create the sqlite files. +This UID/GID number should match the above. You could consider the following changes to help isolate +these changes: - docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh - mkdir /data/db/ - chown 1000:1000 /data/db/ - chmod 750 /data/db/ - sed -i -e "s/db_path.*/db_path = \"\/data\/db\/kanidm.db\"/g" /data/server.toml - chown root:root /data/server.toml - chmod 644 /data/server.toml +```bash +docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh +mkdir /data/db/ +chown 1000:1000 /data/db/ +chmod 750 /data/db/ +sed -i -e "s/db_path.*/db_path = \"\/data\/db\/kanidm.db\"/g" /data/server.toml +chown root:root /data/server.toml +chmod 644 /data/server.toml +``` Note that the example commands all run inside the docker container. You can then use this to run the Kanidm server in docker with a user: - docker run --rm -i -t -u 1000:1000 -v kanidmd:/data kanidm/server:latest /sbin/kanidmd ... - -> **HINT** -> You need to use the UID or GID number with the `-u` argument, as the container can't resolve -> usernames from the host system. +```bash +docker run --rm -i -t -u 1000:1000 -v kanidmd:/data kanidm/server:latest /sbin/kanidmd ... +``` +> **HINT** You need to use the UID or GID number with the `-u` argument, as the container can't +> resolve usernames from the host system. diff --git a/kanidm_book/src/server_configuration.md b/kanidm_book/src/server_configuration.md index cb36d26de..2ba0007bc 100644 --- a/kanidm_book/src/server_configuration.md +++ b/kanidm_book/src/server_configuration.md @@ -2,49 +2,56 @@ ### Configuring server.toml -You need a configuration file in the volume named `server.toml`. (Within the container it should be `/data/server.toml`) Its contents should be as follows: +You need a configuration file in the volume named `server.toml`. (Within the container it should be +`/data/server.toml`) Its contents should be as follows: ``` {{#rustdoc_include ../../examples/server_container.toml}} ``` -This example is located in [examples/server_container.toml](https://github.com/kanidm/kanidm/blob/master/examples/server_container.toml). +This example is located in +[examples/server_container.toml](https://github.com/kanidm/kanidm/blob/master/examples/server_container.toml). -{{#template - templates/kani-warning.md - imagepath=images - title=Warning! - text=You MUST set the `domain` name correctly, aligned with your `origin`, else the server may refuse to start or some features (e.g. webauthn, oauth) may not work correctly! -}} +{{#template templates/kani-warning.md imagepath=images title=Warning! text=You MUST set the `domain` +name correctly, aligned with your `origin`, else the server may refuse to start or some features +(e.g. webauthn, oauth) may not work correctly! }} ### Check the configuration is valid. You should test your configuration is valid before you proceed. 
- docker run --rm -i -t -v kanidmd:/data \ - kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml +```bash +docker run --rm -i -t -v kanidmd:/data \ + kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml +``` ### Default Admin Account -Then you can setup the initial admin account and initialise the database into your volume. This command -will generate a new random password for the admin account. +Then you can setup the initial admin account and initialise the database into your volume. This +command will generate a new random password for the admin account. - docker run --rm -i -t -v kanidmd:/data \ - kanidm/server:latest /sbin/kanidmd recover_account -c /data/server.toml admin - # success - recover_account password for user admin: vv... +```bash +docker run --rm -i -t -v kanidmd:/data \ + kanidm/server:latest /sbin/kanidmd recover_account -c /data/server.toml admin +# success - recover_account password for user admin: vv... +``` ### Run the Server -Now we can run the server so that it can accept connections. This defaults to using `-c /data/server.toml` +Now we can run the server so that it can accept connections. This defaults to using +`-c /data/server.toml` - docker run -p 443:8443 -v kanidmd:/data kanidm/server:latest +```bash +docker run -p 443:8443 -v kanidmd:/data kanidm/server:latest +``` ### Using the NET\_BIND\_SERVICE capability -If you plan to run without using docker port mapping or some other reverse proxy, and your bindaddress -or ldapbindaddress port is less than `1024` you will need the `NET_BIND_SERVICE` in docker to allow -these port binds. You can add this with `--cap-add` in your docker run command. - - docker run --cap-add NET_BIND_SERVICE --network [host OR macvlan OR ipvlan] \ - -v kanidmd:/data kanidm/server:latest +If you plan to run without using docker port mapping or some other reverse proxy, and your +bindaddress or ldapbindaddress port is less than `1024` you will need the `NET_BIND_SERVICE` in +docker to allow these port binds. You can add this with `--cap-add` in your docker run command. +```bash +docker run --cap-add NET_BIND_SERVICE --network [host OR macvlan OR ipvlan] \ + -v kanidmd:/data kanidm/server:latest +``` diff --git a/kanidm_book/src/server_update.md b/kanidm_book/src/server_update.md index 0e5a06ebe..bca21ce7e 100644 --- a/kanidm_book/src/server_update.md +++ b/kanidm_book/src/server_update.md @@ -2,18 +2,22 @@ ### Preserving the Previous Image -You may wish to preserve the previous image before updating. This is useful if an issue is encountered -in upgrades. +You may wish to preserve the previous image before updating. This is useful if an issue is +encountered in upgrades. - docker tag kanidm/server:latest kanidm/server: - docker tag kanidm/server:latest kanidm/server:2022-10-24 +```bash +docker tag kanidm/server:latest kanidm/server: +docker tag kanidm/server:latest kanidm/server:2022-10-24 +``` ### Update your Image Pull the latest version of Kanidm that matches your CPU profile - docker pull kanidm/server:latest - docker pull kanidm/server:x86_64_latest +```bash +docker pull kanidm/server:latest +docker pull kanidm/server:x86_64_latest +``` ### Perform a backup @@ -21,42 +25,50 @@ See [backup and restore](backup_restore.md) ### Update your Instance -{{#template - templates/kani-warning.md - imagepath=images - title=WARNING - text=It is not always guaranteed that downgrades are possible. It is critical you know how to backup and restore before you proceed with this step. 
-}} +{{#template templates/kani-warning.md imagepath=images title=WARNING text=It is not always +guaranteed that downgrades are possible. It is critical you know how to backup and restore before +you proceed with this step. }} Docker updates by deleting and recreating the instance. All that needs to be preserved in your storage volume. - docker stop +```bash +docker stop +``` You can test that your configuration is correct, and the server should correctly start. - docker run --rm -i -t -v kanidmd:/data \ - kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml +```bash +docker run --rm -i -t -v kanidmd:/data \ + kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml +``` You can then follow through with the upgrade - docker run -p PORTS -v kanidmd:/data \ - OTHER_CUSTOM_OPTIONS \ - kanidm/server:latest +```bash +docker run -p PORTS -v kanidmd:/data \ + OTHER_CUSTOM_OPTIONS \ + kanidm/server:latest +``` Once you confirm the upgrade is successful you can delete the previous instance - docker rm +```bash +docker rm +``` If you encounter an issue you can revert to the previous version. - docker stop - docker start +```bash +docker stop +docker start +``` If you deleted the previous instance, you can recreate it from your preserved tag instead. - docker run -p ports -v volumes kanidm/server: +```bash +docker run -p ports -v volumes kanidm/server: +``` In some cases the downgrade to the previous instance may not work. If the server from your previous version fails to start, you may need to restore from backup. - diff --git a/kanidm_book/src/ssh_key_dist.md b/kanidm_book/src/ssh_key_dist.md index 54b4929d3..49ebfca76 100644 --- a/kanidm_book/src/ssh_key_dist.md +++ b/kanidm_book/src/ssh_key_dist.md @@ -1,102 +1,119 @@ # SSH Key Distribution -To support SSH authentication securely to a large set of hosts running SSH, we support -distribution of SSH public keys via the Kanidm server. Both persons and service accounts -support SSH public keys on their accounts. +To support SSH authentication securely to a large set of hosts running SSH, we support distribution +of SSH public keys via the Kanidm server. Both persons and service accounts support SSH public keys +on their accounts. ## Configuring Accounts To view the current SSH public keys on accounts, you can use: - kanidm person|service-account ssh list_publickeys --name - kanidm person|service-account ssh list_publickeys --name idm_admin william +```bash +kanidm person|service-account ssh list_publickeys --name +kanidm person|service-account ssh list_publickeys --name idm_admin william +``` -All users by default can self-manage their SSH public keys. To upload a key, a command like this -is the best way to do so: +All users by default can self-manage their SSH public keys. To upload a key, a command like this is +the best way to do so: - kanidm person|service-account ssh add_publickey --name william william 'test-key' "`cat ~/.ssh/id_rsa.pub`" +```bash +kanidm person|service-account ssh add_publickey --name william william 'test-key' "`cat ~/.ssh/id_rsa.pub`" +``` To remove (revoke) an SSH public key, delete them by the tag name: - kanidm person|service-account ssh delete_publickey --name william william 'test-key' +```bash +kanidm person|service-account ssh delete_publickey --name william william 'test-key' +``` ## Security Notes -As a security feature, Kanidm validates *all* public keys to ensure they are valid SSH public keys. 
+As a security feature, Kanidm validates _all_ public keys to ensure they are valid SSH public keys. Uploading a private key or other data will be rejected. For example: - kanidm person|service-account ssh add_publickey --name william william 'test-key' "invalid" - Enter password: - ... Some(SchemaViolation(InvalidAttributeSyntax)))' ... +```bash +kanidm person|service-account ssh add_publickey --name william william 'test-key' "invalid" +Enter password: + ... Some(SchemaViolation(InvalidAttributeSyntax)))' ... +``` ## Server Configuration ### Public Key Caching Configuration If you have kanidm_unixd running, you can use it to locally cache SSH public keys. This means you -can still SSH into your machines, even if your network is down, you move away from Kanidm, or -some other interruption occurs. +can still SSH into your machines, even if your network is down, you move away from Kanidm, or some +other interruption occurs. -The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so should be installed -on the servers. It communicates to kanidm_unixd, so you should have a configured PAM/nsswitch -setup as well. +The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so should be +installed on the servers. It communicates to kanidm_unixd, so you should have a configured +PAM/nsswitch setup as well. You can test this is configured correctly by running: - kanidm_ssh_authorizedkeys +```bash +kanidm_ssh_authorizedkeys +``` If the account has SSH public keys you should see them listed, one per line. -To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to -contain the lines: +To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the +lines: - PubkeyAuthentication yes - UsePAM yes - AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys %u - AuthorizedKeysCommandUser nobody +``` +PubkeyAuthentication yes +UsePAM yes +AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys %u +AuthorizedKeysCommandUser nobody +``` Restart sshd, and then attempt to authenticate with the keys. It's highly recommended you keep your client configuration and sshd_configuration in a configuration management tool such as salt or ansible. -> **NOTICE:** -> With a working SSH key setup, you should also consider adding the following +> **NOTICE:** With a working SSH key setup, you should also consider adding the following > sshd_config options as hardening. - PermitRootLogin no - PasswordAuthentication no - PermitEmptyPasswords no - GSSAPIAuthentication no - KerberosAuthentication no +``` +PermitRootLogin no +PasswordAuthentication no +PermitEmptyPasswords no +GSSAPIAuthentication no +KerberosAuthentication no +``` ### Direct Communication Configuration In this mode, the authorised keys commands will contact Kanidm directly. -> **NOTICE:** -> As Kanidm is contacted directly there is no SSH public key cache. Any network -> outage or communication loss may prevent you accessing your systems. You should -> only use this version if you have a requirement for it. +> **NOTICE:** As Kanidm is contacted directly there is no SSH public key cache. Any network outage +> or communication loss may prevent you accessing your systems. You should only use this version if +> you have a requirement for it. -The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so should be installed -on the servers. 
+The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so should be +installed on the servers. -To configure the tool, you should edit /etc/kanidm/config, as documented in [clients](./client_tools.md) +To configure the tool, you should edit /etc/kanidm/config, as documented in +[clients](./client_tools.md) You can test this is configured correctly by running: - kanidm_ssh_authorizedkeys_direct -D anonymous +```bash +kanidm_ssh_authorizedkeys_direct -D anonymous +``` If the account has SSH public keys you should see them listed, one per line. -To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to -contain the lines: +To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the +lines: - PubkeyAuthentication yes - UsePAM yes - AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u - AuthorizedKeysCommandUser nobody +``` +PubkeyAuthentication yes +UsePAM yes +AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u +AuthorizedKeysCommandUser nobody +``` Restart sshd, and then attempt to authenticate with the keys. diff --git a/kanidm_book/src/sync/concepts.md b/kanidm_book/src/sync/concepts.md index 22eec2c36..c67ad4ed8 100644 --- a/kanidm_book/src/sync/concepts.md +++ b/kanidm_book/src/sync/concepts.md @@ -9,13 +9,13 @@ Kanidm to work with these, it is possible to synchronised data between these IDM Currently Kanidm can consume (import) data from another IDM system. There are two major use cases for this: -* Running Kanidm in parallel with another IDM system -* Migrating from an existing IDM to Kanidm +- Running Kanidm in parallel with another IDM system +- Migrating from an existing IDM to Kanidm -An incoming IDM data source is bound to Kanidm by a sync account. All synchronised entries will -have a reference to the sync account that they came from defined by their "sync parent uuid". -While an entry is owned by a sync account we refer to the sync account as having authority over -the content of that entry. +An incoming IDM data source is bound to Kanidm by a sync account. All synchronised entries will have +a reference to the sync account that they came from defined by their "sync parent uuid". While an +entry is owned by a sync account we refer to the sync account as having authority over the content +of that entry. The sync process is driven by a sync tool. This tool extracts the current state of the sync from Kanidm, requests the set of changes (differences) from the IDM source, and then submits these @@ -23,45 +23,50 @@ changes to Kanidm. Kanidm will update and apply these changes and commit the new success. In the event of a conflict or data import error, Kanidm will halt and rollback the synchronisation -to the last good state. The sync tool should be reconfigured to exclude the conflicting entry or -to remap it's properties to resolve the conflict. The operation can then be retried. +to the last good state. The sync tool should be reconfigured to exclude the conflicting entry or to +remap it's properties to resolve the conflict. The operation can then be retried. This process can continue long term to allow Kanidm to operate in parallel to another IDM system. If this is for a migration however, the sync account can be finalised. This terminates the sync account and removes the sync parent uuid from all synchronised entries, moving authority of the entry into Kanidm. 
-Alternatelly, the sync account can be terminated which removes all synchronised content that was submitted.
+Alternatively, the sync account can be terminated, which removes all synchronised content that was
+submitted.
 
 ## Creating a Sync Account
 
-Creating a sync account requires administration permissions. By default this is available to
-members of the "system\_admins" group which "admin" is a memberof by default.
+Creating a sync account requires administration permissions. By default this is available to members
+of the "system\_admins" group, of which "admin" is a member by default.
 
-    kanidm system sync create
-    kanidm system sync create ipasync
+```bash
+kanidm system sync create
+kanidm system sync create ipasync
+```
 
-Once the sync account is created you can then generate the sync token which identifies the
-sync tool.
+Once the sync account is created you can then generate the sync token which identifies the sync
+tool.
 
-    kanidm system sync generate-token
-    kanidm system sync generate-token ipasync mylabel
-    token: eyJhbGci...
+```bash
+kanidm system sync generate-token
+kanidm system sync generate-token ipasync mylabel
+token: eyJhbGci...
+```
 
-{{#template
-  ../templates/kani-warning.md
-  imagepath=../images
-  title=Warning!
-  text=The sync account token has a high level of privilege, able to create new accounts and groups. It should be treated carefully as a result!
-}}
+{{#template\
+../templates/kani-warning.md imagepath=../images title=Warning! text=The sync account token has a
+high level of privilege, able to create new accounts and groups. It should be treated carefully as a
+result! }}
 
 If you need to revoke the token, you can do so with:
 
-    kanidm system sync destroy-token
-    kanidm system sync destroy-token ipasync
+```bash
+kanidm system sync destroy-token
+kanidm system sync destroy-token ipasync
+```
 
-Destroying the token does NOT affect the state of the sync account and it's synchronised entries. Creating
-a new token and providing that to the sync tool will continue the sync process.
+Destroying the token does NOT affect the state of the sync account and its synchronised entries.
+Creating a new token and providing that to the sync tool will continue the sync process.
 
 ## Operating the Sync Tool
 
@@ -84,16 +89,15 @@ If you are performing a migration from an external IDM to Kanidm, when that migr
 you can nominate that Kanidm now owns all of the imported data. This is achieved by finalising the
 sync account.
 
-{{#template
-  ../templates/kani-warning.md
-  imagepath=../images
-  title=Warning!
-  text=You can not undo this operation. Once you have finalised an agreement, Kanidm owns all of the synchronised data, and you can not resume synchronisation.
-}}
+{{#template ../templates/kani-warning.md imagepath=../images title=Warning! text=You can not undo
+this operation. Once you have finalised an agreement, Kanidm owns all of the synchronised data, and
+you can not resume synchronisation. }}
 
-    kanidm system sync finalise
-    kanidm system sync finalise ipasync
-    # Do you want to continue? This operation can NOT be undone. [y/N]
+```bash
+kanidm system sync finalise
+kanidm system sync finalise ipasync
+# Do you want to continue? This operation can NOT be undone. [y/N]
+```
 
 Once finalised, imported accounts can now be fully managed by Kanidm.
 
@@ -102,16 +106,14 @@ Once finalised, imported accounts can now be fully managed by Kanidm.
If you decide to cease importing accounts or need to remove all imported accounts from a sync account, you can choose to terminate the agreement removing all data that was imported. -{{#template - ../templates/kani-warning.md - imagepath=../images - title=Warning! - text=You can not undo this operation. Once you have terminated an agreement, Kanidm deletes all of the synchronised data, and you can not resume synchronisation. -}} +{{#template ../templates/kani-warning.md imagepath=../images title=Warning! text=You can not undo +this operation. Once you have terminated an agreement, Kanidm deletes all of the synchronised data, +and you can not resume synchronisation. }} - kanidm system sync terminate - kanidm system sync terminate ipasync - # Do you want to continue? This operation can NOT be undone. [y/N] +```bash +kanidm system sync terminate +kanidm system sync terminate ipasync +# Do you want to continue? This operation can NOT be undone. [y/N] +``` Once terminated all imported data will be deleted by Kanidm. - diff --git a/kanidm_book/src/sync/freeipa.md b/kanidm_book/src/sync/freeipa.md index 887c9c6e7..eb9138db8 100644 --- a/kanidm_book/src/sync/freeipa.md +++ b/kanidm_book/src/sync/freeipa.md @@ -19,62 +19,75 @@ to understand how to connect to Kanidm. The sync tool specific components are configured in it's own configuration file. -``` +```rust {{#rustdoc_include ../../../examples/kanidm-ipa-sync}} ``` -This example is located in [examples/kanidm-ipa-sync](https://github.com/kanidm/kanidm/blob/master/examples/kanidm-ipa-sync). +This example is located in +[examples/kanidm-ipa-sync](https://github.com/kanidm/kanidm/blob/master/examples/kanidm-ipa-sync). In addition to this, you must make some configuration changes to FreeIPA to enable synchronisation. You can find the name of your 389 Directory Server instance with: - dsconf --list +```bash +dsconf --list +``` -Using this you can show the current status of the retro changelog plugin to see if you need -to change it's configuration. +Using this you can show the current status of the retro changelog plugin to see if you need to +change it's configuration. - dsconf plugin retro-changelog show - dsconf slapd-DEV-KANIDM-COM plugin retro-changelog show +```bash +dsconf plugin retro-changelog show +dsconf slapd-DEV-KANIDM-COM plugin retro-changelog show +``` You must modify the retro changelog plugin to include the full scope of the database suffix so that -the sync tool can view the changes to the database. Currently dsconf can not modify the include-suffix -so you must do this manually. +the sync tool can view the changes to the database. Currently dsconf can not modify the +include-suffix so you must do this manually. -You need to change the `nsslapd-include-suffix` to match your FreeIPA baseDN here. You can -access the basedn with: - - ldapsearch -H ldaps:// -x -b '' -s base namingContexts - # namingContexts: dc=ipa,dc=dev,dc=kanidm,dc=com - -You should ignore `cn=changelog` and `o=ipaca` as these are system internal namingContexts. You -can then create an ldapmodify like the following. +You need to change the `nsslapd-include-suffix` to match your FreeIPA baseDN here. You can access +the basedn with: +```bash +ldapsearch -H ldaps:// -x -b '' -s base namingContexts +# namingContexts: dc=ipa,dc=dev,dc=kanidm,dc=com ``` + +You should ignore `cn=changelog` and `o=ipaca` as these are system internal namingContexts. You can +then create an ldapmodify like the following. 
+
+```rust
 {{#rustdoc_include ../../../iam_migrations/freeipa/00config-mod.ldif}}
 ```

 And apply it with:

-    ldapmodify -f change.ldif -H ldaps:// -x -D 'cn=Directory Manager' -W
-    # Enter LDAP Password:
+```bash
+ldapmodify -f change.ldif -H ldaps:// -x -D 'cn=Directory Manager' -W
+# Enter LDAP Password:
+```

 You must then reboot your FreeIPA server.

 ## Running the Sync Tool Manually

-You can perform a dry run with the sync tool manually to check your configurations are
-correct and that the tool can synchronise from FreeIPA.
+You can perform a dry run with the sync tool manually to check your configurations are correct and
+that the tool can synchronise from FreeIPA.

-    kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync -n
-    kanidm-ipa-sync -i /etc/kanidm/ipa-sync -n
+```bash
+kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync -n
+kanidm-ipa-sync -i /etc/kanidm/ipa-sync -n
+```

 ## Running the Sync Tool Automatically

-The sync tool can be run on a schedule if you configure the `schedule` parameter, and provide
-the option "--schedule" on the cli
+The sync tool can be run on a schedule if you configure the `schedule` parameter, and provide the
+option "--schedule" on the CLI.

-    kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync --schedule
+```bash
+kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync --schedule
+```

 ## Monitoring the Sync Tool

@@ -85,10 +98,11 @@ You can configure a status listener that can be monitored via tcp with the param

 An example of monitoring this with netcat is:

-    # status_bind = "[::1]:12345"
-    # nc ::1 12345
-    Ok
-
-It's important to note no details are revealed via the status socket, and is purely for Ok or Err status
-of the last sync.
+```bash
+# status_bind = "[::1]:12345"
+# nc ::1 12345
+Ok
+```
+It's important to note that no details are revealed via the status socket; it is purely for the Ok
+or Err status of the last sync.
diff --git a/kanidm_book/src/templates/kani-warning.md b/kanidm_book/src/templates/kani-warning.md
index 9a27ff043..3e7d0c614 100644
--- a/kanidm_book/src/templates/kani-warning.md
+++ b/kanidm_book/src/templates/kani-warning.md
@@ -6,4 +6,4 @@

 [[#text]]

-
\ No newline at end of file
+
diff --git a/kanidm_book/src/troubleshooting.md b/kanidm_book/src/troubleshooting.md
index ef83fbbe5..bd14c113c 100644
--- a/kanidm_book/src/troubleshooting.md
+++ b/kanidm_book/src/troubleshooting.md
@@ -4,18 +4,21 @@ Some things to try.

 ## Is the server started?

-If you don't see "ready to rock! 🪨" in your logs, it's not started. Scroll back and look for errors!dd
+If you don't see "ready to rock! 🪨" in your logs, it's not started. Scroll back and look for
+errors!

 ## Can you connect?

-If the server's running on `idm.example.com:8443` then a simple connectivity test is done using [curl](https://curl.se).
+If the server's running on `idm.example.com:8443` then a simple connectivity test can be done using
+[curl](https://curl.se).

 Run the following command:
+
 ```shell
 curl -k https://idm.example.com:8443/status
 ```

-This is similar to what you *should* see:
+This is similar to what you _should_ see:

 ```shell
 {{#rustdoc_include troubleshooting/curl_connection_test.txt}}
 ```
@@ -38,9 +41,11 @@ If you see something like this:

 curl: (7) Failed to connect to idm.example.com port 8443 after 5 ms: Connection refused
 ```

-Then either your DNS is wrong (it's pointing at 10.0.0.1) or you can't connect to the server for some reason.
+Then either your DNS is wrong (it's pointing at 10.0.0.1) or you can't connect to the server for +some reason. -If you get errors about certificates, try adding `-k` to skip certificate verification checking and just test connectivity: +If you get errors about certificates, try adding `-k` to skip certificate verification checking and +just test connectivity: ``` curl -vk https://idm.example.com:8443 @@ -48,9 +53,10 @@ curl -vk https://idm.example.com:8443 ## Server things to check -* Has the config file got `bindaddress = "127.0.0.1:8443"` ? Change it to `bindaddress = "[::]:8443"`, so it listens on all interfaces. -* Is there a firewall on the server? -* If you're running in docker, did you expose the port? (`-p 8443:8443`) +- Has the config file got `bindaddress = "127.0.0.1:8443"` ? Change it to + `bindaddress = "[::]:8443"`, so it listens on all interfaces. +- Is there a firewall on the server? +- If you're running in docker, did you expose the port? (`-p 8443:8443`) ## Client things to check @@ -59,4 +65,3 @@ Try running commands with `RUST_LOG=debug` to get more information: ``` RUST_LOG=debug kanidm login --name anonymous ``` - diff --git a/kanidm_book/src/why_tls.md b/kanidm_book/src/why_tls.md index 6e450daf4..1221545c2 100644 --- a/kanidm_book/src/why_tls.md +++ b/kanidm_book/src/why_tls.md @@ -1,32 +1,29 @@ - # Why TLS? You may have noticed that Kanidm requires you to configure TLS in your container. -We are a secure-by-design rather than secure-by-installation system, so TLS for -all connections is considered mandatory. +We are a secure-by-design rather than secure-by-installation system, so TLS for all connections is +considered mandatory. ## What are Secure Cookies? -`secure-cookies` is a flag set in cookies that asks a client to transmit them -back to the origin site if and only if HTTPS is present in the URL. +`secure-cookies` is a flag set in cookies that asks a client to transmit them back to the origin +site if and only if HTTPS is present in the URL. -Certificate authority (CA) verification is *not* checked - you can use invalid, -out of date certificates, or even certificates where the `subjectAltName` does -not match, but the client must see https:// as the destination else it *will not* -send the cookies. +Certificate authority (CA) verification is _not_ checked - you can use invalid, out of date +certificates, or even certificates where the `subjectAltName` does not match, but the client must +see https:// as the destination else it _will not_ send the cookies. ## How Does That Affect Kanidm? -Kanidm's authentication system is a stepped challenge response design, where you -initially request an "intent" to authenticate. Once you establish this intent, -the server sets up a session-id into a cookie, and informs the client of -what authentication methods can proceed. +Kanidm's authentication system is a stepped challenge response design, where you initially request +an "intent" to authenticate. Once you establish this intent, the server sets up a session-id into a +cookie, and informs the client of what authentication methods can proceed. -If you do NOT have a HTTPS URL, the cookie with the session-id is not transmitted. -The server detects this as an invalid-state request in the authentication design, -and immediately breaks the connection, because it appears insecure. +If you do NOT have a HTTPS URL, the cookie with the session-id is not transmitted. 
The server +detects this as an invalid-state request in the authentication design, and immediately breaks the +connection, because it appears insecure. -Simply put, we are trying to use settings like `secure_cookies` to add constraints -to the server so that you *must* perform and adhere to best practices - such -as having TLS present on your communication channels. +Simply put, we are trying to use settings like `secure_cookies` to add constraints to the server so +that you _must_ perform and adhere to best practices - such as having TLS present on your +communication channels. diff --git a/kanidmd_web_ui/pkg/LICENSE.md b/kanidmd_web_ui/pkg/LICENSE.md index 52d135112..74dee48ce 100644 --- a/kanidmd_web_ui/pkg/LICENSE.md +++ b/kanidmd_web_ui/pkg/LICENSE.md @@ -1,28 +1,22 @@ -Mozilla Public License Version 2.0 -================================== +# Mozilla Public License Version 2.0 1. Definitions --------------- -1.1. "Contributor" - means each individual or legal entity that creates, contributes to - the creation of, or owns Covered Software. +--- -1.2. "Contributor Version" - means the combination of the Contributions of others (if any) used - by a Contributor and that particular Contributor's Contribution. +1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation +of, or owns Covered Software. -1.3. "Contribution" - means Covered Software of a particular Contributor. +1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a +Contributor and that particular Contributor's Contribution. -1.4. "Covered Software" - means Source Code Form to which the initial Contributor has attached - the notice in Exhibit A, the Executable Form of such Source Code - Form, and Modifications of such Source Code Form, in each case - including portions thereof. +1.3. "Contribution" means Covered Software of a particular Contributor. -1.5. "Incompatible With Secondary Licenses" - means +1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the +notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source +Code Form, in each case including portions thereof. + +1.5. "Incompatible With Secondary Licenses" means (a) that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or @@ -31,23 +25,17 @@ Mozilla Public License Version 2.0 version 1.1 or earlier of the License, but not also under the terms of a Secondary License. -1.6. "Executable Form" - means any form of the work other than Source Code Form. +1.6. "Executable Form" means any form of the work other than Source Code Form. -1.7. "Larger Work" - means a work that combines Covered Software with other material, in - a separate file or files, that is not Covered Software. +1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate +file or files, that is not Covered Software. -1.8. "License" - means this document. +1.8. "License" means this document. -1.9. "Licensable" - means having the right to grant, to the maximum extent possible, - whether at the time of the initial grant or subsequently, any and - all of the rights conveyed by this License. +1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the +time of the initial grant or subsequently, any and all of the rights conveyed by this License. -1.10. "Modifications" - means any of the following: +1.10. 
"Modifications" means any of the following: (a) any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered @@ -56,319 +44,284 @@ Mozilla Public License Version 2.0 (b) any new file in Source Code Form that contains any Covered Software. -1.11. "Patent Claims" of a Contributor - means any patent claim(s), including without limitation, method, - process, and apparatus claims, in any patent Licensable by such - Contributor that would be infringed, but for the grant of the - License, by the making, using, selling, offering for sale, having - made, import, or transfer of either its Contributions or its - Contributor Version. +1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation, +method, process, and apparatus claims, in any patent Licensable by such Contributor that would be +infringed, but for the grant of the License, by the making, using, selling, offering for sale, +having made, import, or transfer of either its Contributions or its Contributor Version. -1.12. "Secondary License" - means either the GNU General Public License, Version 2.0, the GNU - Lesser General Public License, Version 2.1, the GNU Affero General - Public License, Version 3.0, or any later versions of those - licenses. +1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser +General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any +later versions of those licenses. -1.13. "Source Code Form" - means the form of the work preferred for making modifications. +1.13. "Source Code Form" means the form of the work preferred for making modifications. -1.14. "You" (or "Your") - means an individual or a legal entity exercising rights under this - License. For legal entities, "You" includes any entity that - controls, is controlled by, or is under common control with You. For - purposes of this definition, "control" means (a) the power, direct - or indirect, to cause the direction or management of such entity, - whether by contract or otherwise, or (b) ownership of more than - fifty percent (50%) of the outstanding shares or beneficial - ownership of such entity. +1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. +For legal entities, "You" includes any entity that controls, is controlled by, or is under common +control with You. For purposes of this definition, "control" means (a) the power, direct or +indirect, to cause the direction or management of such entity, whether by contract or otherwise, or +(b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of +such entity. 2. License Grants and Conditions --------------------------------- + +--- 2.1. 
Grants -Each Contributor hereby grants You a world-wide, royalty-free, -non-exclusive license: +Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: -(a) under intellectual property rights (other than patent or trademark) - Licensable by such Contributor to use, reproduce, make available, - modify, display, perform, distribute, and otherwise exploit its - Contributions, either on an unmodified basis, with Modifications, or - as part of a Larger Work; and +(a) under intellectual property rights (other than patent or trademark) Licensable by such +Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise +exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger +Work; and -(b) under Patent Claims of such Contributor to make, use, sell, offer - for sale, have made, import, and otherwise transfer either its - Contributions or its Contributor Version. +(b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, +and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date -The licenses granted in Section 2.1 with respect to any Contribution -become effective for each Contribution on the date the Contributor first -distributes such Contribution. +The licenses granted in Section 2.1 with respect to any Contribution become effective for each +Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope -The licenses granted in this Section 2 are the only rights granted under -this License. No additional rights or licenses will be implied from the -distribution or licensing of Covered Software under this License. -Notwithstanding Section 2.1(b) above, no patent license is granted by a -Contributor: +The licenses granted in this Section 2 are the only rights granted under this License. No additional +rights or licenses will be implied from the distribution or licensing of Covered Software under this +License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: -(a) for any code that a Contributor has removed from Covered Software; - or +(a) for any code that a Contributor has removed from Covered Software; or -(b) for infringements caused by: (i) Your and any other third party's - modifications of Covered Software, or (ii) the combination of its - Contributions with other software (except as part of its Contributor - Version); or +(b) for infringements caused by: (i) Your and any other third party's modifications of Covered +Software, or (ii) the combination of its Contributions with other software (except as part of its +Contributor Version); or -(c) under Patent Claims infringed by Covered Software in the absence of - its Contributions. +(c) under Patent Claims infringed by Covered Software in the absence of its Contributions. -This License does not grant any rights in the trademarks, service marks, -or logos of any Contributor (except as may be necessary to comply with -the notice requirements in Section 3.4). +This License does not grant any rights in the trademarks, service marks, or logos of any Contributor +(except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. 
Subsequent Licenses -No Contributor makes additional grants as a result of Your choice to -distribute the Covered Software under a subsequent version of this -License (see Section 10.2) or under the terms of a Secondary License (if -permitted under the terms of Section 3.3). +No Contributor makes additional grants as a result of Your choice to distribute the Covered Software +under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary +License (if permitted under the terms of Section 3.3). 2.5. Representation -Each Contributor represents that the Contributor believes its -Contributions are its original creation(s) or it has sufficient rights -to grant the rights to its Contributions conveyed by this License. +Each Contributor represents that the Contributor believes its Contributions are its original +creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this +License. 2.6. Fair Use -This License is not intended to limit any rights You have under -applicable copyright doctrines of fair use, fair dealing, or other -equivalents. +This License is not intended to limit any rights You have under applicable copyright doctrines of +fair use, fair dealing, or other equivalents. 2.7. Conditions -Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted -in Section 2.1. +Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities -------------------- + +--- 3.1. Distribution of Source Form -All distribution of Covered Software in Source Code Form, including any -Modifications that You create or to which You contribute, must be under -the terms of this License. You must inform recipients that the Source -Code Form of the Covered Software is governed by the terms of this -License, and how they can obtain a copy of this License. You may not -attempt to alter or restrict the recipients' rights in the Source Code -Form. +All distribution of Covered Software in Source Code Form, including any Modifications that You +create or to which You contribute, must be under the terms of this License. You must inform +recipients that the Source Code Form of the Covered Software is governed by the terms of this +License, and how they can obtain a copy of this License. You may not attempt to alter or restrict +the recipients' rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: -(a) such Covered Software must also be made available in Source Code - Form, as described in Section 3.1, and You must inform recipients of - the Executable Form how they can obtain a copy of such Source Code - Form by reasonable means in a timely manner, at a charge no more - than the cost of distribution to the recipient; and +(a) such Covered Software must also be made available in Source Code Form, as described in Section +3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source +Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution +to the recipient; and -(b) You may distribute such Executable Form under the terms of this - License, or sublicense it under different terms, provided that the - license for the Executable Form does not attempt to limit or alter - the recipients' rights in the Source Code Form under this License. 
+(b) You may distribute such Executable Form under the terms of this License, or sublicense it under +different terms, provided that the license for the Executable Form does not attempt to limit or +alter the recipients' rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work -You may create and distribute a Larger Work under terms of Your choice, -provided that You also comply with the requirements of this License for -the Covered Software. If the Larger Work is a combination of Covered -Software with a work governed by one or more Secondary Licenses, and the -Covered Software is not Incompatible With Secondary Licenses, this -License permits You to additionally distribute such Covered Software -under the terms of such Secondary License(s), so that the recipient of -the Larger Work may, at their option, further distribute the Covered -Software under the terms of either this License or such Secondary -License(s). +You may create and distribute a Larger Work under terms of Your choice, provided that You also +comply with the requirements of this License for the Covered Software. If the Larger Work is a +combination of Covered Software with a work governed by one or more Secondary Licenses, and the +Covered Software is not Incompatible With Secondary Licenses, this License permits You to +additionally distribute such Covered Software under the terms of such Secondary License(s), so that +the recipient of the Larger Work may, at their option, further distribute the Covered Software under +the terms of either this License or such Secondary License(s). 3.4. Notices -You may not remove or alter the substance of any license notices -(including copyright notices, patent notices, disclaimers of warranty, -or limitations of liability) contained within the Source Code Form of -the Covered Software, except that You may alter any license notices to -the extent required to remedy known factual inaccuracies. +You may not remove or alter the substance of any license notices (including copyright notices, +patent notices, disclaimers of warranty, or limitations of liability) contained within the Source +Code Form of the Covered Software, except that You may alter any license notices to the extent +required to remedy known factual inaccuracies. 3.5. Application of Additional Terms -You may choose to offer, and to charge a fee for, warranty, support, -indemnity or liability obligations to one or more recipients of Covered -Software. However, You may do so only on Your own behalf, and not on -behalf of any Contributor. You must make it absolutely clear that any -such warranty, support, indemnity, or liability obligation is offered by -You alone, and You hereby agree to indemnify every Contributor for any -liability incurred by such Contributor as a result of warranty, support, -indemnity or liability terms You offer. You may include additional -disclaimers of warranty and limitations of liability specific to any -jurisdiction. +You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability +obligations to one or more recipients of Covered Software. However, You may do so only on Your own +behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such +warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree +to indemnify every Contributor for any liability incurred by such Contributor as a result of +warranty, support, indemnity or liability terms You offer. 
You may include additional disclaimers of +warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation ---------------------------------------------------- -If it is impossible for You to comply with any of the terms of this -License with respect to some or all of the Covered Software due to -statute, judicial order, or regulation then You must: (a) comply with -the terms of this License to the maximum extent possible; and (b) -describe the limitations and the code they affect. Such description must -be placed in a text file included with all distributions of the Covered -Software under this License. Except to the extent prohibited by statute -or regulation, such description must be sufficiently detailed for a -recipient of ordinary skill to be able to understand it. +--- + +If it is impossible for You to comply with any of the terms of this License with respect to some or +all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply +with the terms of this License to the maximum extent possible; and (b) describe the limitations and +the code they affect. Such description must be placed in a text file included with all distributions +of the Covered Software under this License. Except to the extent prohibited by statute or +regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be +able to understand it. 5. Termination --------------- -5.1. The rights granted under this License will terminate automatically -if You fail to comply with any of its terms. However, if You become -compliant, then the rights granted under this License from a particular -Contributor are reinstated (a) provisionally, unless and until such -Contributor explicitly and finally terminates Your grants, and (b) on an -ongoing basis, if such Contributor fails to notify You of the -non-compliance by some reasonable means prior to 60 days after You have -come back into compliance. Moreover, Your grants from a particular -Contributor are reinstated on an ongoing basis if such Contributor -notifies You of the non-compliance by some reasonable means, this is the -first time You have received notice of non-compliance with this License -from such Contributor, and You become compliant prior to 30 days after -Your receipt of the notice. +--- -5.2. If You initiate litigation against any entity by asserting a patent -infringement claim (excluding declaratory judgment actions, -counter-claims, and cross-claims) alleging that a Contributor Version -directly or indirectly infringes any patent, then the rights granted to -You by any and all Contributors for the Covered Software under Section -2.1 of this License shall terminate. +5.1. The rights granted under this License will terminate automatically if You fail to comply with +any of its terms. However, if You become compliant, then the rights granted under this License from +a particular Contributor are reinstated (a) provisionally, unless and until such Contributor +explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor +fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have +come back into compliance. 
Moreover, Your grants from a particular Contributor are reinstated on an +ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this +is the first time You have received notice of non-compliance with this License from such +Contributor, and You become compliant prior to 30 days after Your receipt of the notice. -5.3. In the event of termination under Sections 5.1 or 5.2 above, all -end user license agreements (excluding distributors and resellers) which -have been validly granted by You or Your distributors under this License -prior to termination shall survive termination. +5.2. If You initiate litigation against any entity by asserting a patent infringement claim +(excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a +Contributor Version directly or indirectly infringes any patent, then the rights granted to You by +any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. -************************************************************************ -* * -* 6. Disclaimer of Warranty * -* ------------------------- * -* * -* Covered Software is provided under this License on an "as is" * -* basis, without warranty of any kind, either expressed, implied, or * -* statutory, including, without limitation, warranties that the * -* Covered Software is free of defects, merchantable, fit for a * -* particular purpose or non-infringing. The entire risk as to the * -* quality and performance of the Covered Software is with You. * -* Should any Covered Software prove defective in any respect, You * -* (not any Contributor) assume the cost of any necessary servicing, * -* repair, or correction. This disclaimer of warranty constitutes an * -* essential part of this License. No use of any Covered Software is * -* authorized under this License except under this disclaimer. * -* * -************************************************************************ +5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements +(excluding distributors and resellers) which have been validly granted by You or Your distributors +under this License prior to termination shall survive termination. -************************************************************************ -* * -* 7. Limitation of Liability * -* -------------------------- * -* * -* Under no circumstances and under no legal theory, whether tort * -* (including negligence), contract, or otherwise, shall any * -* Contributor, or anyone who distributes Covered Software as * -* permitted above, be liable to You for any direct, indirect, * -* special, incidental, or consequential damages of any character * -* including, without limitation, damages for lost profits, loss of * -* goodwill, work stoppage, computer failure or malfunction, or any * -* and all other commercial damages or losses, even if such party * -* shall have been informed of the possibility of such damages. This * -* limitation of liability shall not apply to liability for death or * -* personal injury resulting from such party's negligence to the * -* extent applicable law prohibits such limitation. Some * -* jurisdictions do not allow the exclusion or limitation of * -* incidental or consequential damages, so this exclusion and * -* limitation may not apply to You. * -* * -************************************************************************ +--- + +- + - +- + 6. 
Disclaimer of Warranty * +- ------------------------- * +- + - +- Covered Software is provided under this License on an "as is" * +- basis, without warranty of any kind, either expressed, implied, or * +- statutory, including, without limitation, warranties that the * +- Covered Software is free of defects, merchantable, fit for a * +- particular purpose or non-infringing. The entire risk as to the * +- quality and performance of the Covered Software is with You. * +- Should any Covered Software prove defective in any respect, You * +- (not any Contributor) assume the cost of any necessary servicing, * +- repair, or correction. This disclaimer of warranty constitutes an * +- essential part of this License. No use of any Covered Software is * +- authorized under this License except under this disclaimer. * +- + - + +--- + +--- + +- + - +- + 7. Limitation of Liability * +- -------------------------- * +- + - +- Under no circumstances and under no legal theory, whether tort * +- (including negligence), contract, or otherwise, shall any * +- Contributor, or anyone who distributes Covered Software as * +- permitted above, be liable to You for any direct, indirect, * +- special, incidental, or consequential damages of any character * +- including, without limitation, damages for lost profits, loss of * +- goodwill, work stoppage, computer failure or malfunction, or any * +- and all other commercial damages or losses, even if such party * +- shall have been informed of the possibility of such damages. This * +- limitation of liability shall not apply to liability for death or * +- personal injury resulting from such party's negligence to the * +- extent applicable law prohibits such limitation. Some * +- jurisdictions do not allow the exclusion or limitation of * +- incidental or consequential damages, so this exclusion and * +- limitation may not apply to You. * +- + - + +--- 8. Litigation -------------- -Any litigation relating to this License may be brought only in the -courts of a jurisdiction where the defendant maintains its principal -place of business and such litigation shall be governed by laws of that -jurisdiction, without reference to its conflict-of-law provisions. -Nothing in this Section shall prevent a party's ability to bring -cross-claims or counter-claims. +--- + +Any litigation relating to this License may be brought only in the courts of a jurisdiction where +the defendant maintains its principal place of business and such litigation shall be governed by +laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this +Section shall prevent a party's ability to bring cross-claims or counter-claims. 9. Miscellaneous ----------------- -This License represents the complete agreement concerning the subject -matter hereof. If any provision of this License is held to be -unenforceable, such provision shall be reformed only to the extent -necessary to make it enforceable. Any law or regulation which provides -that the language of a contract shall be construed against the drafter -shall not be used to construe this License against a Contributor. +--- + +This License represents the complete agreement concerning the subject matter hereof. If any +provision of this License is held to be unenforceable, such provision shall be reformed only to the +extent necessary to make it enforceable. 
Any law or regulation which provides that the language of a +contract shall be construed against the drafter shall not be used to construe this License against a +Contributor. 10. Versions of the License ---------------------------- + +--- 10.1. New Versions -Mozilla Foundation is the license steward. Except as provided in Section -10.3, no one other than the license steward has the right to modify or -publish new versions of this License. Each version will be given a -distinguishing version number. +Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the +license steward has the right to modify or publish new versions of this License. Each version will +be given a distinguishing version number. 10.2. Effect of New Versions -You may distribute the Covered Software under the terms of the version -of the License under which You originally received the Covered Software, -or under the terms of any subsequent version published by the license -steward. +You may distribute the Covered Software under the terms of the version of the License under which +You originally received the Covered Software, or under the terms of any subsequent version published +by the license steward. 10.3. Modified Versions -If you create software not governed by this License, and you want to -create a new license for such software, you may create and use a -modified version of this License if you rename the license and remove -any references to the name of the license steward (except to note that -such modified license differs from this License). +If you create software not governed by this License, and you want to create a new license for such +software, you may create and use a modified version of this License if you rename the license and +remove any references to the name of the license steward (except to note that such modified license +differs from this License). -10.4. Distributing Source Code Form that is Incompatible With Secondary -Licenses +10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses -If You choose to distribute Source Code Form that is Incompatible With -Secondary Licenses under the terms of this version of the License, the -notice described in Exhibit B of this License must be attached. +If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the +terms of this version of the License, the notice described in Exhibit B of this License must be +attached. -Exhibit A - Source Code Form License Notice -------------------------------------------- +## Exhibit A - Source Code Form License Notice - This Source Code Form is subject to the terms of the Mozilla Public - License, v. 2.0. If a copy of the MPL was not distributed with this - file, You can obtain one at http://mozilla.org/MPL/2.0/. +This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of +the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. -If it is not possible or desirable to put the notice in a particular -file, then You may include the notice in a location (such as a LICENSE -file in a relevant directory) where a recipient would be likely to look -for such a notice. +If it is not possible or desirable to put the notice in a particular file, then You may include the +notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be +likely to look for such a notice. 
You may add additional accurate notices of copyright ownership. -Exhibit B - "Incompatible With Secondary Licenses" Notice ---------------------------------------------------------- - - This Source Code Form is "Incompatible With Secondary Licenses", as - defined by the Mozilla Public License, v. 2.0. +## Exhibit B - "Incompatible With Secondary Licenses" Notice +This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public +License, v. 2.0. diff --git a/kanidmd_web_ui/pkg/README.md b/kanidmd_web_ui/pkg/README.md index a0328d2d0..c376d813a 100644 --- a/kanidmd_web_ui/pkg/README.md +++ b/kanidmd_web_ui/pkg/README.md @@ -6,30 +6,31 @@ ## About -Kanidm is a simple and secure identity management platform, which provides services to allow -other systems and application to authenticate against. The project aims for the highest levels -of reliability, security and ease of use. +Kanidm is a simple and secure identity management platform, which provides services to allow other +systems and application to authenticate against. The project aims for the highest levels of +reliability, security and ease of use. The goal of this project is to be a complete identity management provider, covering the broadest -possible set of requirements and integrations. You should not need any other components (like Keycloak) -when you use Kanidm. We want to create a project that will be suitable for everything -from personal home deployments, to the largest enterprise needs. +possible set of requirements and integrations. You should not need any other components (like +Keycloak) when you use Kanidm. We want to create a project that will be suitable for everything from +personal home deployments, to the largest enterprise needs. -To achieve this we rely heavily on strict defaults, simple configuration, and self-healing components. +To achieve this we rely heavily on strict defaults, simple configuration, and self-healing +components. The project is still growing and some areas are developing at a fast pace. The core of the server however is reliable and we make all effort to ensure upgrades will always work. Kanidm supports: -* Oauth2/OIDC Authentication provider for web SSO -* Read only LDAPS gateway -* Linux/Unix integration (with offline authentication) -* SSH key distribution to Linux/Unix systems -* RADIUS for network authentication -* Passkeys / Webauthn for secure cryptographic authentication -* A self service web ui -* Complete CLI tooling for administration +- Oauth2/OIDC Authentication provider for web SSO +- Read only LDAPS gateway +- Linux/Unix integration (with offline authentication) +- SSH key distribution to Linux/Unix systems +- RADIUS for network authentication +- Passkeys / Webauthn for secure cryptographic authentication +- A self service web ui +- Complete CLI tooling for administration If you want to host your own centralised authentication service, then Kanidm is for you! @@ -40,7 +41,8 @@ If you want to deploy Kanidm to see what it can do, you should read the Kanidm b - [Kanidm book (Latest stable)](https://kanidm.github.io/kanidm/stable/) - [Kanidm book (Latest commit)](https://kanidm.github.io/kanidm/master/) -We also publish [support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md) +We also publish +[support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md) for what the project will support. 
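If you just want to try the server locally before reading further, the book's installation chapter has the canonical container instructions; the sketch below only reuses the image name and port that appear elsewhere in this repository, while the `/data` volume path is an assumption to verify against the book:

```bash
# Rough sketch only - see the installation chapter of the book for the supported invocation.
docker pull kanidm/server:latest
docker run --rm -it -p 8443:8443 -v kanidmd:/data kanidm/server:latest
```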
 ## Code of Conduct / Ethics

@@ -54,8 +56,8 @@ See our documentation on [rights and ethics]

 ## Getting in Contact / Questions

-We have a [gitter community channel] where we can talk. Firstyear is also happy to
-answer questions via email, which can be found on their github profile.
+We have a [gitter community channel] where we can talk. Firstyear is also happy to answer questions
+via email, which can be found on their github profile.

 [gitter community channel]: https://gitter.im/kanidm/community

@@ -63,29 +65,29 @@ answer questions via email, which can be found on their github profile.

 ### LLDAP

-[LLDAP](https://github.com/nitnelave/lldap) is a similar project aiming for a small and easy to administer
-LDAP server with a web administration portal. Both projects use the [Kanidm LDAP bindings](https://github.com/kanidm/ldap3), and have
-many similar ideas.
+[LLDAP](https://github.com/nitnelave/lldap) is a similar project aiming for a small and easy to
+administer LDAP server with a web administration portal. Both projects use the
+[Kanidm LDAP bindings](https://github.com/kanidm/ldap3), and have many similar ideas.

 The primary benefit of Kanidm over LLDAP is that Kanidm offers a broader set of "built in" features
-like Oauth2 and OIDC. To use these from LLDAP you need an external portal like Keycloak, where in Kanidm
-they are "built in". However that is also a strength of LLDAP is that is offers "less" which may make
-it easier to administer and deploy for you.
+like Oauth2 and OIDC. To use these from LLDAP you need an external portal like Keycloak, where in
+Kanidm they are "built in". However, that is also a strength of LLDAP: it offers "less", which may
+make it easier to administer and deploy for you.

 If Kanidm is too complex for your needs, you should check out LLDAP as a smaller alternative. If you
 want a project which has a broader feature set out of the box, then Kanidm might be a better fit.

 ### 389-ds / OpenLDAP

-Both 389-ds and OpenLDAP are generic LDAP servers. This means they only provide LDAP and you need
-to bring your own IDM configuration on top.
+Both 389-ds and OpenLDAP are generic LDAP servers. This means they only provide LDAP and you need to
+bring your own IDM configuration on top.

 If you need the highest levels of customisation possible from your LDAP deployment, then these are
 probably better alternatives. If you want a service that is easier to setup and focused on IDM, then
 Kanidm is a better choice.

-Kanidm was originally inspired by many elements of both 389-ds and OpenLDAP. Already Kanidm is as fast
-as (or faster than) 389-ds for performance and scaling.
+Kanidm was originally inspired by many elements of both 389-ds and OpenLDAP. Already Kanidm is as
+fast as (or faster than) 389-ds for performance and scaling.

 ### FreeIPA

@@ -101,15 +103,14 @@ Kanidm is probably for you.

 ## Developer Getting Started

-If you want to develop on the server, there is a getting started [guide for developers]. IDM
-is a diverse topic and we encourage contributions of many kinds in the project, from people of
-all backgrounds.
+If you want to develop on the server, there is a getting started [guide for developers]. IDM is a
+diverse topic and we encourage contributions of many kinds in the project, from people of all
+backgrounds.

 [guide for developers]: https://kanidm.github.io/kanidm/master/DEVELOPER_README.html

 ## What does Kanidm mean?

-The original project name was rsidm while it was a thought experiment. Now that it's growing
-and developing, we gave it a better project name. Kani is Japanese for "crab". Rust's mascot is a crab.
+The original project name was rsidm while it was a thought experiment. Now that it's growing and
+developing, we gave it a better project name. Kani is Japanese for "crab". Rust's mascot is a crab.
 IDM is the common industry term for identity management services.
-
diff --git a/project_docs/DEVELOPER_PRINCIPLES.md b/project_docs/DEVELOPER_PRINCIPLES.md
index cd1ddf326..cda17bded 100644
--- a/project_docs/DEVELOPER_PRINCIPLES.md
+++ b/project_docs/DEVELOPER_PRINCIPLES.md
@@ -1,14 +1,12 @@
 # Developer Principles

-As a piece of software that stores the identities of people, the project becomes
-bound to social and political matters. The decisions we make have consequences
-on many people - many who never have the chance to choose what software is used
-to store their identities (think employees in a business).
+As a piece of software that stores the identities of people, the project becomes bound to social and
+political matters. The decisions we make have consequences on many people - many who never have the
+chance to choose what software is used to store their identities (think employees in a business).

-This means we have a responsibility to not only be aware of our impact on our
-direct users (developers, system administrators, dev ops, security and more)
-but also the impact on indirect consumers - many of who are unlikely to be in
-a position to contact us to ask for changes and help.
+This means we have a responsibility to not only be aware of our impact on our direct users
+(developers, system administrators, dev ops, security and more) but also the impact on indirect
+consumers - many of whom are unlikely to be in a position to contact us to ask for changes and help.

 ## Ethics / Rights

@@ -18,58 +16,52 @@ If you have not already, please see our documentation on [rights and ethics]

 ## Humans First

-We must at all times make decisions that put humans first. We must respect
-all cultures, languages, and identities and how they are represented.
+We must at all times make decisions that put humans first. We must respect all cultures, languages,
+and identities and how they are represented.

-This may mean we make technical choices that are difficult or more complex,
-or different to "how things have always been done". But we do this to
-ensure that all people can have their identities stored how they choose.
+This may mean we make technical choices that are difficult or more complex, or different to "how
+things have always been done". But we do this to ensure that all people can have their identities
+stored how they choose.

-For example, any user may change their name, display name and legal name at
-any time. Many applications will break as they primary key from name when
-this occurs. But this is the fault of the application. Name changes must
-be allowed. Our job as technical experts is to allow that to happen.
+For example, any user may change their name, display name and legal name at any time. Many
+applications will break because they use the name as a primary key. But this is the fault of the
+application. Name changes must be allowed. Our job as technical experts is to allow that to happen.

-We will never put a burden on the user to correct for poor designs on
-our part. For example, locking an account if it logs in from a different
-country unless the user logs in before hand to indicate where they are
-going. This makes the user responsible for a burden (changing the allowed login
-country) when the real problem is preventing bruteforce attacks - which
-can be technically solved in better ways that don't put administrative
-load to humans.
+We will never put a burden on the user to correct for poor designs on our part. For example, locking
+an account if it logs in from a different country unless the user logs in beforehand to indicate
+where they are going. This makes the user responsible for a burden (changing the allowed login
+country) when the real problem is preventing bruteforce attacks - which can be technically solved in
+better ways that don't put administrative load on humans.

 ## Correct and Simple

-As a piece of security sensitive software we must always put correctness
-first. All code must have tests. All developers must be able to run all
-tests on their machine and environment of choice.
+As a piece of security sensitive software we must always put correctness first. All code must have
+tests. All developers must be able to run all tests on their machine and environment of choice.

 This means that the following must always work:

-    git clone ...
-    cargo test
+```bash
+git clone ...
+cargo test
+```

-If a test or change would require extra requirements, dependencies, or
-preconfiguration, then we can no longer provide the above. Testing must
-be easy and accesible, else we wont do it, and that leads to poor
-software quality.
+If a test or change would require extra requirements, dependencies, or preconfiguration, then we can
+no longer provide the above. Testing must be easy and accessible, else we won't do it, and that
+leads to poor software quality.

-The project must be simple. Any one should be able to understand how it
-works and why those decisions were made.
+The project must be simple. Anyone should be able to understand how it works and why those
+decisions were made.

 ## Languages

-The core server will (for now) always be written in Rust. This is due to
-the strong type guarantees it gives, and how that can help raise the
-quality of our project.
+The core server will (for now) always be written in Rust. This is due to the strong type guarantees
+it gives, and how that can help raise the quality of our project.

 ## Over-Configuration

-Configuration will be allowed, but only if it does not impact the statements
-above. Having configuration is good, but allowing too much (IE a scripting
-engine for security rules) can give deployments the ability to violate human
-first principles, which reflects badly on us.
+Configuration will be allowed, but only if it does not impact the statements above. Having
+configuration is good, but allowing too much (IE a scripting engine for security rules) can give
+deployments the ability to violate human first principles, which reflects badly on us.

-All configuration items, must be constrained to fit within our principles
-so that every kanidm deployment, will always provide a positive experience
-to all people.
+All configuration items must be constrained to fit within our principles so that every kanidm
+deployment will always provide a positive experience to all people.
diff --git a/project_docs/RELEASE_AND_SUPPORT.md b/project_docs/RELEASE_AND_SUPPORT.md
index f12abad78..6ac0e45d0 100644
--- a/project_docs/RELEASE_AND_SUPPORT.md
+++ b/project_docs/RELEASE_AND_SUPPORT.md
@@ -2,24 +2,24 @@
 Kanidm is released on a 3 month (quarterly) basis.

-* February 1st
-* May 1st
-* August 1st
-* November 1st
+- February 1st
+- May 1st
+- August 1st
+- November 1st

 Releases will be tagged and branched in git.

-1.2.0 will be released as the first supported version once the project believes the project is
-in a maintainable longterm state, without requiring backward breaking changes. There is no current
+1.2.0 will be released as the first supported version once the project believes it is in a
+maintainable, long-term state, without requiring backward breaking changes. There is no current
 estimated date for 1.2.0.

 ## Support

 Releases during alpha will recieve limited fixes once released. Specifically we will resolve:

-* Moderate security issues and above
-* Flaws leading to dataloss or corruption
-* Other quality fixes at the discrestion of the project team
+- Moderate security issues and above
+- Flaws leading to data loss or corruption
+- Other quality fixes at the discretion of the project team

 These will be backported to the latest stable branch only.

@@ -27,23 +27,25 @@

 There are a number of "surfaces" that can be considered as "API" in Kanidm.

-* JSON HTTP end points of kanidmd
-* unix domain socket API of `kanidm_unixd` resolver
-* LDAP interface of kanidm
-* CLI interface of kanidm admin command
-* Many other interaction surfaces
+- JSON HTTP end points of kanidmd
+- unix domain socket API of `kanidm_unixd` resolver
+- LDAP interface of kanidm
+- CLI interface of kanidm admin command
+- Many other interaction surfaces

-During the Alpha, there is no guarantee that *any* of these APIs named here or not named will remain stable.
-Only elements from "the same release" are guaranteed to work with each other.
+During the Alpha, there is no guarantee that _any_ of these APIs named here or not named will remain
+stable. Only elements from "the same release" are guaranteed to work with each other.

 Once an official release is made, only the JSON API and LDAP interface will be declared stable.

 The unix domain socket API is internal and will never be "stable".

-The CLI is *not* an API and can change with the interest of human interaction during any release.
+The CLI is _not_ an API and can change with the interest of human interaction during any release.

 ## Python module

-The python module will typically trail changes in functionality of the core Rust code, and will be developed as we it for our own needs - please feel free to add functionality or improvements, or [ask for them in a Github issue](http://github.com/kanidm/kanidm/issues/new/choose)!
+The python module will typically trail changes in functionality of the core Rust code, and will be
+developed as we need it for our own needs - please feel free to add functionality or improvements,
+or [ask for them in a Github issue](http://github.com/kanidm/kanidm/issues/new/choose)!

 All code changes will include full type-casting wherever possible.
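Because only the JSON API and LDAP interface are intended to become stable surfaces, external monitoring or automation should prefer them over the CLI. As a small illustration, the `/status` connectivity check from the troubleshooting chapter is one such JSON HTTP endpoint (the hostname below is an example):

```bash
# Probe the JSON HTTP surface rather than scraping CLI output; the hostname is an example.
curl -s https://idm.example.com:8443/status
```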
diff --git a/project_docs/RELEASE_CHECKLIST.md b/project_docs/RELEASE_CHECKLIST.md
index b5f47612b..14ec85420 100644
--- a/project_docs/RELEASE_CHECKLIST.md
+++ b/project_docs/RELEASE_CHECKLIST.md
@@ -1,74 +1,74 @@
-
## Pre-Reqs

-    cargo install cargo-audit
-    cargo install cargo-outdated
+```bash
+cargo install cargo-audit
+cargo install cargo-outdated
+```

## Check List

### Start a release

-* [ ] git checkout -b YYYYMMDD-release
+- [ ] git checkout -b YYYYMMDD-release

### Cargo Tasks

-* [ ] cargo outdated -R
-* [ ] cargo audit
-* [ ] cargo test
+- [ ] cargo outdated -R
+- [ ] cargo audit
+- [ ] cargo test

### Code Changes

-* [ ] upgrade crypto policy values if requires
-* [ ] bump index version in constants
-* [ ] check for breaking db entry changes.
+- [ ] upgrade crypto policy values if required
+- [ ] bump index version in constants
+- [ ] check for breaking db entry changes.

### Administration

-* [ ] update version in ./kanidmd\_web\_ui/Cargo.toml
-* [ ] update version in ./Cargo.toml
-* [ ] cargo test
-* [ ] build wasm components with release profile
-* [ ] Update `RELEASE_NOTES.md`
-* [ ] git commit
-* [ ] git rebase -i HEAD~X
-* [ ] git push origin YYYYMMDD-release
-* [ ] Merge PR
+- [ ] update version in ./kanidmd\_web\_ui/Cargo.toml
+- [ ] update version in ./Cargo.toml
+- [ ] cargo test
+- [ ] build wasm components with release profile
+- [ ] Update `RELEASE_NOTES.md`
+- [ ] git commit
+- [ ] git rebase -i HEAD~X
+- [ ] git push origin YYYYMMDD-release
+- [ ] Merge PR

### Git Management

-* [ ] git checkout master
-* [ ] git branch 1.1.0-alpha.x (Note no v to prevent ref conflict)
-* [ ] git checkout v1.1.0-alpha.x
-* [ ] git tag v1.1.0-alpha.x
+- [ ] git checkout master
+- [ ] git branch 1.1.0-alpha.x (Note no v to prevent ref conflict)
+- [ ] git checkout v1.1.0-alpha.x
+- [ ] git tag v1.1.0-alpha.x

-* [ ] Final inspect of the branch
+- [ ] Final inspection of the branch

-* [ ] git push origin 1.1.0-alpha.x
-* [ ] git push origin 1.1.0-alpha.x --tags
+- [ ] git push origin 1.1.0-alpha.x
+- [ ] git push origin 1.1.0-alpha.x --tags

### Cargo publish

-* [ ] publish `kanidm_proto`
-* [ ] publish `kanidmd/kanidm`
-* [ ] publish `kanidm_client`
-* [ ] publish `kanidm_tools`
+- [ ] publish `kanidm_proto`
+- [ ] publish `kanidmd/kanidm`
+- [ ] publish `kanidm_client`
+- [ ] publish `kanidm_tools`

### Docker

-* [ ] docker buildx use cluster
-* [ ] `make buildx/kanidmd/x86_64_v3 buildx/kanidmd buildx/radiusd`
-* [ ] Update the readme on docker https://hub.docker.com/repository/docker/kanidm/server
+- [ ] docker buildx use cluster
+- [ ] `make buildx/kanidmd/x86_64_v3 buildx/kanidmd buildx/radiusd`
+- [ ] Update the readme on docker https://hub.docker.com/repository/docker/kanidm/server

### Distro

-* [ ] vendor and release to build.opensuse.org
+- [ ] vendor and release to build.opensuse.org

### Follow up

-* [ ] git checkout master
-* [ ] git pull
-* [ ] git branch YYYYMMDD-dev-version
-* [ ] update version in ./kanidmd\_web\_ui/Cargo.toml
-* [ ] update version in ./Cargo.toml
-* [ ] build wasm components with debug profile
-
+- [ ] git checkout master
+- [ ] git pull
+- [ ] git branch YYYYMMDD-dev-version
+- [ ] update version in ./kanidmd\_web\_ui/Cargo.toml
+- [ ] update version in ./Cargo.toml
+- [ ] build wasm components with debug profile
diff --git a/pykanidm/README.md b/pykanidm/README.md
index 104691361..c05ef59c6 100644
--- a/pykanidm/README.md
+++ b/pykanidm/README.md
@@ -2,28 +2,32 @@
A Python module for interacting with Kanidm.
-Currently in very very very early beta, please [log an issue](https://github.com/kanidm/kanidm/issues/new/choose) for feature requests and bugs.
+Currently in very very very early beta, please
+[log an issue](https://github.com/kanidm/kanidm/issues/new/choose) for feature requests and bugs.

## Installation

-```shell
+```bash
python -m pip install kanidm
```

## Documentation

-Documentation can be generated by [cloning the repository](https://github.com/kanidm/kanidm) and running `make docs/pykanidm/build`. The documentation will appear in `./pykanidm/site`. You'll need make and the [poetry](https://pypi.org/project/poetry/) package installed.
+Documentation can be generated by [cloning the repository](https://github.com/kanidm/kanidm) and
+running `make docs/pykanidm/build`. The documentation will appear in `./pykanidm/site`. You'll need
+make and the [poetry](https://pypi.org/project/poetry/) package installed.

## Testing

Set up your dev environment using `poetry` - `python -m pip install poetry && poetry install`.

-Pytest it used for testing, if you don't have a live server to test against and config set up, use `poetry run pytest -m 'not network'`.
+Pytest is used for testing; if you don't have a live server to test against and config set up, use
+`poetry run pytest -m 'not network'`.

## Changelog

-| Version | Date | Notes |
-| --- | --- | --- |
-| 0.0.1 | 2022-08-16 | Initial release |
-| 0.0.2 | 2022-08-16 | Updated license, including test code in package |
+| Version | Date       | Notes                                                  |
+| ------- | ---------- | ------------------------------------------------------ |
+| 0.0.1   | 2022-08-16 | Initial release                                        |
+| 0.0.2   | 2022-08-16 | Updated license, including test code in package        |
| 0.0.3 | 2022-08-17 | Updated test suite to allow skipping of network tests |
diff --git a/pykanidm/docs/kanidmclient.md b/pykanidm/docs/kanidmclient.md
index f3bc19646..b28860350 100644
--- a/pykanidm/docs/kanidmclient.md
+++ b/pykanidm/docs/kanidmclient.md
@@ -1,4 +1,3 @@
-
# kanidm.KanidmClient

-::: kanidm.KanidmClient
\ No newline at end of file
+::: kanidm.KanidmClient
diff --git a/pykanidm/docs/kanidmclientconfig.md b/pykanidm/docs/kanidmclientconfig.md
index d6f0fdf1f..4e4304e05 100644
--- a/pykanidm/docs/kanidmclientconfig.md
+++ b/pykanidm/docs/kanidmclientconfig.md
@@ -1,4 +1,3 @@
-
# kanidm.types.KanidmClientConfig

::: kanidm.types.KanidmClientConfig
diff --git a/pykanidm/docs/radiusclient.md b/pykanidm/docs/radiusclient.md
index ea2d03fa3..33a89b220 100644
--- a/pykanidm/docs/radiusclient.md
+++ b/pykanidm/docs/radiusclient.md
@@ -1,4 +1,3 @@
-
# kanidm.types.RadiusClient

::: kanidm.types.RadiusClient
diff --git a/pykanidm/docs/tokenstore.md b/pykanidm/docs/tokenstore.md
index bb78ca97a..79f497f1f 100644
--- a/pykanidm/docs/tokenstore.md
+++ b/pykanidm/docs/tokenstore.md
@@ -1,3 +1 @@
-
-
-::: kanidm.tokens
\ No newline at end of file
+::: kanidm.tokens
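The pykanidm Testing section above relies on a pytest marker to separate network-dependent tests, so
that `poetry run pytest -m 'not network'` skips them. This is a minimal sketch of how such a marker
is typically applied; the test names and bodies are hypothetical, and the `network` marker is
assumed to be registered in the project's pytest configuration rather than confirmed from its test
suite.

```python
import pytest


@pytest.mark.network  # assumed to be registered in the project's pytest config to avoid warnings
def test_against_live_server() -> None:
    """Placeholder for a test that would contact a live kanidm server."""
    # Deselected when invoked as: poetry run pytest -m 'not network'
    ...


def test_offline_logic() -> None:
    """Runs in every invocation, including 'not network' runs."""
    assert "kanidm".upper() == "KANIDM"
```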