docs: reformat book and introduce workflow to ensure it stays formatted (#1286)

This commit is contained in:
Jan Christoph Ebersbach 2022-12-26 23:52:03 +01:00 committed by GitHub
parent 6207c3ff51
commit fd8afa065f
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
72 changed files with 3624 additions and 3432 deletions

.editorconfig Normal file

@@ -0,0 +1,10 @@
# Documentation: https://editorconfig.org/
root = true

[*.md]
charset = utf-8
end_of_line = lf
indent_size = 2
max_line_length = 100
trim_trailing_whitespace = true


@@ -8,13 +8,17 @@ assignees: ''
---

**Is your feature request related to a problem? Please describe.**

A clear description of what the problem is. Ex. I'm confused by, or would like to know how to...

**Describe the solution you'd like**

A description of what you'd expect to happen.

**Describe alternatives you've considered**

Are there any alternative solutions or features you've considered.

**Additional context**

Add any other context or screenshots about the feature request here.


@@ -24,10 +24,19 @@ jobs:
          libsqlite3-dev libudev-dev \
          libpam0g-dev
      - name: Setup deno
        uses: denoland/setup-deno@v1 # Documentation: https://github.com/denoland/setup-deno
        with:
          deno-version: v1.x
      - name: Test document formatting
        run: |
          make test/doc/format
      - name: Setup mdBook
        uses: peaceiris/actions-mdbook@v1
        with:
          mdbook-version: "latest"
      - uses: actions-rs/toolchain@v1
        with:
@@ -46,7 +55,7 @@ jobs:
      - name: Install python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: pykanidm docs
        run: |
          python -m pip install poetry


@@ -1,6 +1,10 @@
## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

@@ -17,29 +21,44 @@ Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at:

- william at blackhats.net.au
- charcol at redhat.com

All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

FAQ.md

@@ -1,35 +1,32 @@
## Frequently Asked Questions

This is a list of common questions that are generally raised by developers or technical users.
## Why don't you use library/project X?

A critical aspect of kanidm is the ability to test it. Generally requests to add libraries or projects can come in different forms, so I'll answer a few of them:
## Is the library in Rust?

If it's not in Rust, it's not eligible for inclusion. There is a single exception today (rlm python) but it's very likely this will also be removed in the future. Keeping a single language helps with testing, but also makes the project more accessible and consistent to developers. Additionally, features exist in Rust that help to improve the quality of the project from development to production.
## Is the project going to create a microservice like architecture?

If the project (such as an external OAuth/OIDC gateway, or a different DB layer) would be used in a tight-knit manner to Kanidm then it is no longer a microservice, but a monolith with multiple moving parts. This creates production fragility and issues such as:

- Differences and difficulties in correlating log events
- Design choices of the project not being compatible with Kanidm's model
- Extra requirements for testing/production configuration

This last point is key. It is a critical part of kanidm that the following must work on all machines, and run every single test in the suite.
```
git clone https://github.com/kanidm/kanidm.git
@@ -37,80 +34,83 @@ cd kanidm
cargo test
```
Not only this, but it's very important for quality that running `cargo test` truly tests the entire stack of the application - from the database, all the way to the client utilities and other daemons communicating to a real server. Many developer choices have already been made to ensure that testing is the most important aspect of the project, so that every feature is high quality and reliable.

Addition of extra projects or dependencies would violate this principle and lead to a situation where it would not be possible to effectively test for all developers.
## Why don't you use Raft/Etcd/MongoDB/Other to solve replication?

There are a number of reasons why these are generally not compatible. Generally these databases or technologies do solve problems, but they are not the problems in Kanidm.
## CAP theorem

CAP theorem states that in a database you must choose only two of the three possible elements:

- Consistency - All servers in a topology see the same data at all times
- Availability - All servers in a topology can accept write operations at all times
- Partitioning - In the case of a network separation in the topology, all systems can continue to process read operations
Many protocols like Raft or Etcd are databases that provide PC guarantees. They guarantee that they are always consistent, and can always be read in the face of partitioning, but to accept a write, they must not be experiencing a partitioning event. Generally this is achieved by the fact that these systems elect a single node to process all operations, and then re-elect a new node in the case of partitioning events. The elections will fail if a quorum is not met, disallowing writes throughout the topology.
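The quorum rule described above is a simple majority test; a minimal sketch (hypothetical function names, not Kanidm or Raft code):

```rust
// Sketch: an election only succeeds when a strict majority of nodes is
// reachable, so a partitioned minority cannot accept writes.
fn has_quorum(reachable_nodes: usize, total_nodes: usize) -> bool {
    // A strict majority: more than half of all nodes must respond.
    reachable_nodes > total_nodes / 2
}

fn main() {
    // In a 5-node topology, a partition that isolates 2 nodes still
    // leaves a writable majority of 3 ...
    assert!(has_quorum(3, 5));
    // ... but the isolated 2-node side must refuse writes.
    assert!(!has_quorum(2, 5));
}
```

Note that in an even-sized topology (say 4 nodes) a clean 2/2 split leaves no side with quorum, which is why odd node counts are usually recommended for such systems.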
This doesn't work for authentication systems and global scale databases. As you introduce non-negligible network latency, the throughput of write operations will decrease in these systems. This is why Google's Spanner is a PA system.
PA systems are also considered to be "eventually consistent". All nodes can provide reads and writes at all times, but during a network partitioning or after a write there is a delay for all nodes to arrive at a consistent database state. A key element is that the nodes perform a consistency operation that uses application-aware rules to allow all servers to arrive at the same state _without_ communication between the nodes.
## Update Resolution

Many databases do exist that are PA, such as CouchDB or MongoDB. However, they often do not have the update resolution properties that Kanidm requires.
An example of this is that CouchDB uses object-level resolution. This means that if two servers update the same entry, the "latest write wins". An example of where this won't work for Kanidm is if one server locks the account as an admin is revoking the access of an account, but another server updates the username. If the username update happened second, the lock event would be lost, creating a security risk. There are certainly cases where this resolution method is valid, but Kanidm is not one.
Another example is MongoDB. While it does attribute-level resolution, it does this without the application awareness of Kanidm. For example, in Kanidm if we have an account lock based on time, we can select the latest time value to over-write the others, or we could have a counter that can correctly increment/advance between the servers. However, Mongo is not aware of these rules, and it would not be able to give the experience we desire. Mongo is a very good database, it's just not the right choice for Kanidm.
Additionally, it's worth noting that most of these other databases would violate the previous desires to keep the language as Rust, and may require external configuration or daemons which may not be possible to test.
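As a rough illustration of the attribute-level resolution described above, here is a minimal sketch assuming per-attribute timestamps (hypothetical types; not Kanidm's actual replication code):

```rust
// Sketch of attribute-level conflict resolution: each attribute carries
// its own timestamp, so a security-critical "locked" flag written on one
// server survives a concurrent username change on another server.
#[derive(Clone, PartialEq)]
struct Attr {
    value: String,
    written_at: u64, // logical timestamp of the last write to this attribute
}

fn merge_attr(a: &Attr, b: &Attr) -> Attr {
    // Per-attribute last-writer-wins: both servers reach the same state
    // deterministically, without talking to each other.
    if a.written_at >= b.written_at { a.clone() } else { b.clone() }
}

fn main() {
    // Server A locks the account at t=10; server B renames it at t=12.
    let lock_a = Attr { value: "locked".into(), written_at: 10 };
    let lock_b = Attr { value: "active".into(), written_at: 3 };
    let name_a = Attr { value: "alice".into(), written_at: 5 };
    let name_b = Attr { value: "alice2".into(), written_at: 12 };

    // Object-level "latest write wins" would have discarded the lock;
    // attribute-level merging keeps both independent changes.
    assert_eq!(merge_attr(&lock_a, &lock_b).value, "locked");
    assert_eq!(merge_attr(&name_a, &name_b).value, "alice2");
}
```

Object-level resolution would instead have compared the two whole entries and kept only the later one, dropping the lock - exactly the security risk described for CouchDB.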
## How PAM/nsswitch Work

Linux and BSD clients can resolve identities from Kanidm into accounts via PAM and nsswitch.

Name Service Switch (NSS) is used for connecting computers with different data sources to resolve name-service information. By adding the nsswitch libraries to /etc/nsswitch.conf, we are telling NSS to look up password info and group identities in Kanidm:
```
passwd: compat kanidm
group: compat kanidm
```
When a service like sudo, sshd, su etc. wants to authenticate someone, it opens the pam.d config of that service, then performs authentication according to the modules defined in the pam.d config. For example, if you run `ls -al /etc/pam.d /usr/etc/pam.d` in SUSE, you can see the services and their respective pam.d configs.
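The `passwd: compat kanidm` ordering above means NSS consults each source in turn until one resolves the name. A loose sketch of that dispatch (hypothetical names; the real NSS module interface is a C ABI, not Rust):

```rust
// Sketch of how an NSS lookup walks the sources listed in
// /etc/nsswitch.conf: each source is tried in order, and the first
// one that knows the user wins.
fn lookup_user(name: &str, sources: &[&str]) -> Option<String> {
    for source in sources {
        // Stand-ins for the real NSS modules (e.g. _nss_compat_*, _nss_kanidm_*).
        let result = match *source {
            "compat" => lookup_local(name),
            "kanidm" => lookup_kanidm(name),
            _ => None,
        };
        if result.is_some() {
            return result;
        }
    }
    None
}

// Hypothetical backends: local files know "root", Kanidm knows "alice".
fn lookup_local(name: &str) -> Option<String> {
    (name == "root").then(|| "root:x:0:0".to_string())
}

fn lookup_kanidm(name: &str) -> Option<String> {
    (name == "alice").then(|| "alice:x:2000:2000".to_string())
}

fn main() {
    let sources = ["compat", "kanidm"];
    // "root" resolves from local files; "alice" falls through to Kanidm.
    assert_eq!(lookup_user("root", &sources), Some("root:x:0:0".to_string()));
    assert_eq!(lookup_user("alice", &sources), Some("alice:x:2000:2000".to_string()));
    assert_eq!(lookup_user("bob", &sources), None);
}
```

This ordering is why local system accounts keep working even when the Kanidm server is unreachable: the `compat` source answers first for them.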


@@ -1,28 +1,22 @@
# Mozilla Public License Version 2.0

1. Definitions

---

1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software.

1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution.

1.3. "Contribution" means Covered Software of a particular Contributor.

1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof.

1.5. "Incompatible With Secondary Licenses" means

(a) that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or
@@ -31,23 +25,17 @@ Mozilla Public License Version 2.0
version 1.1 or earlier of the License, but not also under the terms of a Secondary License.

1.6. "Executable Form" means any form of the work other than Source Code Form.

1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software.

1.8. "License" means this document.

1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License.

1.10. "Modifications" means any of the following:

(a) any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered
@@ -56,319 +44,284 @@ Mozilla Public License Version 2.0
(b) any new file in Source Code Form that contains any Covered Software.

1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version.

1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses.

1.13. "Source Code Form" means the form of the work preferred for making modifications.

1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.

2. License Grants and Conditions

---

2.1. Grants

Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:

(a) under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and

(b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version.

2.2. Effective Date

The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution.

2.3. Limitations on Grant Scope

The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:

(a) for any code that a Contributor has removed from Covered Software; or

(b) for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or

(c) under Patent Claims infringed by Covered Software in the absence of its Contributions.

This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4).

2.4. Subsequent Licenses

No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3).

2.5. Representation

Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License.

2.6. Fair Use

This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents.

2.7. Conditions

Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.

3. Responsibilities

---

3.1. Distribution of Source Form

All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form.

3.2. Distribution of Executable Form

If You distribute Covered Software in Executable Form then:

(a) such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and

(b) You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License.

3.3. Distribution of a Larger Work

You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to
Covered Software is not Incompatible With Secondary Licenses, this additionally distribute such Covered Software under the terms of such Secondary License(s), so that
License permits You to additionally distribute such Covered Software the recipient of the Larger Work may, at their option, further distribute the Covered Software under
under the terms of such Secondary License(s), so that the recipient of the terms of either this License or such Secondary License(s).
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices 3.4. Notices
You may not remove or alter the substance of any license notices You may not remove or alter the substance of any license notices (including copyright notices,
(including copyright notices, patent notices, disclaimers of warranty, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source
or limitations of liability) contained within the Source Code Form of Code Form of the Covered Software, except that You may alter any license notices to the extent
the Covered Software, except that You may alter any license notices to required to remedy known factual inaccuracies.
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms 3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support, You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability
indemnity or liability obligations to one or more recipients of Covered obligations to one or more recipients of Covered Software. However, You may do so only on Your own
Software. However, You may do so only on Your own behalf, and not on behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such
behalf of any Contributor. You must make it absolutely clear that any warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree
such warranty, support, indemnity, or liability obligation is offered by to indemnify every Contributor for any liability incurred by such Contributor as a result of
You alone, and You hereby agree to indemnify every Contributor for any warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of
liability incurred by such Contributor as a result of warranty, support, warranty and limitations of liability specific to any jurisdiction.
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation 4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this ---
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with If it is impossible for You to comply with any of the terms of this License with respect to some or
the terms of this License to the maximum extent possible; and (b) all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply
describe the limitations and the code they affect. Such description must with the terms of this License to the maximum extent possible; and (b) describe the limitations and
be placed in a text file included with all distributions of the Covered the code they affect. Such description must be placed in a text file included with all distributions
Software under this License. Except to the extent prohibited by statute of the Covered Software under this License. Except to the extent prohibited by statute or
or regulation, such description must be sufficiently detailed for a regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be
recipient of ordinary skill to be able to understand it. able to understand it.
5. Termination 5. Termination
--------------
5.1. The rights granted under this License will terminate automatically ---
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent 5.1. The rights granted under this License will terminate automatically if You fail to comply with
infringement claim (excluding declaratory judgment actions, any of its terms. However, if You become compliant, then the rights granted under this License from
counter-claims, and cross-claims) alleging that a Contributor Version a particular Contributor are reinstated (a) provisionally, unless and until such Contributor
directly or indirectly infringes any patent, then the rights granted to explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor
You by any and all Contributors for the Covered Software under Section fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have
2.1 of this License shall terminate. come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this
is the first time You have received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt of the notice.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all 5.2. If You initiate litigation against any entity by asserting a patent infringement claim
end user license agreements (excluding distributors and resellers) which (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a
have been validly granted by You or Your distributors under this License Contributor Version directly or indirectly infringes any patent, then the rights granted to You by
prior to termination shall survive termination. any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.
************************************************************************ 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements
* * (excluding distributors and resellers) which have been validly granted by You or Your distributors
* 6. Disclaimer of Warranty * under this License prior to termination shall survive termination.
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************ ---
* *
* 7. Limitation of Liability * -
* -------------------------- * -
* * -
* Under no circumstances and under no legal theory, whether tort * 6. Disclaimer of Warranty *
* (including negligence), contract, or otherwise, shall any * - ------------------------- *
* Contributor, or anyone who distributes Covered Software as * -
* permitted above, be liable to You for any direct, indirect, * -
* special, incidental, or consequential damages of any character * - Covered Software is provided under this License on an "as is" *
* including, without limitation, damages for lost profits, loss of * - basis, without warranty of any kind, either expressed, implied, or *
* goodwill, work stoppage, computer failure or malfunction, or any * - statutory, including, without limitation, warranties that the *
* and all other commercial damages or losses, even if such party * - Covered Software is free of defects, merchantable, fit for a *
* shall have been informed of the possibility of such damages. This * - particular purpose or non-infringing. The entire risk as to the *
* limitation of liability shall not apply to liability for death or * - quality and performance of the Covered Software is with You. *
* personal injury resulting from such party's negligence to the * - Should any Covered Software prove defective in any respect, You *
* extent applicable law prohibits such limitation. Some * - (not any Contributor) assume the cost of any necessary servicing, *
* jurisdictions do not allow the exclusion or limitation of * - repair, or correction. This disclaimer of warranty constitutes an *
* incidental or consequential damages, so this exclusion and * - essential part of this License. No use of any Covered Software is *
* limitation may not apply to You. * - authorized under this License except under this disclaimer. *
* * -
************************************************************************ -
---
---
-
-
-
7. Limitation of Liability *
- -------------------------- *
-
-
- Under no circumstances and under no legal theory, whether tort *
- (including negligence), contract, or otherwise, shall any *
- Contributor, or anyone who distributes Covered Software as *
- permitted above, be liable to You for any direct, indirect, *
- special, incidental, or consequential damages of any character *
- including, without limitation, damages for lost profits, loss of *
- goodwill, work stoppage, computer failure or malfunction, or any *
- and all other commercial damages or losses, even if such party *
- shall have been informed of the possibility of such damages. This *
- limitation of liability shall not apply to liability for death or *
- personal injury resulting from such party's negligence to the *
- extent applicable law prohibits such limitation. Some *
- jurisdictions do not allow the exclusion or limitation of *
- incidental or consequential damages, so this exclusion and *
- limitation may not apply to You. *
-
-
---
8. Litigation 8. Litigation
-------------
Any litigation relating to this License may be brought only in the ---
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that Any litigation relating to this License may be brought only in the courts of a jurisdiction where
jurisdiction, without reference to its conflict-of-law provisions. the defendant maintains its principal place of business and such litigation shall be governed by
Nothing in this Section shall prevent a party's ability to bring laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this
cross-claims or counter-claims. Section shall prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous 9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject ---
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent This License represents the complete agreement concerning the subject matter hereof. If any
necessary to make it enforceable. Any law or regulation which provides provision of this License is held to be unenforceable, such provision shall be reformed only to the
that the language of a contract shall be construed against the drafter extent necessary to make it enforceable. Any law or regulation which provides that the language of a
shall not be used to construe this License against a Contributor. contract shall be construed against the drafter shall not be used to construe this License against a
Contributor.
10. Versions of the License 10. Versions of the License
---------------------------
---
10.1. New Versions 10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the
10.3, no one other than the license steward has the right to modify or license steward has the right to modify or publish new versions of this License. Each version will
publish new versions of this License. Each version will be given a be given a distinguishing version number.
distinguishing version number.
10.2. Effect of New Versions 10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version You may distribute the Covered Software under the terms of the version of the License under which
of the License under which You originally received the Covered Software, You originally received the Covered Software, or under the terms of any subsequent version published
or under the terms of any subsequent version published by the license by the license steward.
steward.
10.3. Modified Versions 10.3. Modified Versions
If you create software not governed by this License, and you want to If you create software not governed by this License, and you want to create a new license for such
create a new license for such software, you may create and use a software, you may create and use a modified version of this License if you rename the license and
modified version of this License if you rename the license and remove remove any references to the name of the license steward (except to note that such modified license
any references to the name of the license steward (except to note that differs from this License).
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
Licenses
If You choose to distribute Source Code Form that is Incompatible With If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the
Secondary Licenses under the terms of this version of the License, the terms of this version of the License, the notice described in Exhibit B of this License must be
notice described in Exhibit B of this License must be attached. attached.
Exhibit A - Source Code Form License Notice ## Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of
License, v. 2.0. If a copy of the MPL was not distributed with this the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular If it is not possible or desirable to put the notice in a particular file, then You may include the
file, then You may include the notice in a location (such as a LICENSE notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be
file in a relevant directory) where a recipient would be likely to look likely to look for such a notice.
for such a notice.
You may add additional accurate notices of copyright ownership. You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice ## Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public
License, v. 2.0.
View file
@@ -3,6 +3,7 @@ IMAGE_VERSION ?= devel
CONTAINER_TOOL_ARGS ?=
IMAGE_ARCH ?= "linux/amd64,linux/arm64"
CONTAINER_BUILD_ARGS ?=
MARKDOWN_FORMAT_ARGS ?= --options-line-width=100
# Example of using redis with sccache
# --build-arg "SCCACHE_REDIS=redis://redis.dev.blackhats.net.au:6379"
CONTAINER_TOOL ?= docker

@@ -139,6 +140,10 @@ test/pykanidm/mypy:
test/pykanidm: ## run the test suite (mypy/pylint/pytest) for the kanidm python module
test/pykanidm: test/pykanidm/pytest test/pykanidm/mypy test/pykanidm/pylint

.PHONY: test/doc/format
test/doc/format: ## Check formatting of docs and the Kanidm book
	find . -type f -name \*.md -exec deno fmt --check $(MARKDOWN_FORMAT_ARGS) "{}" +

########################################################################

.PHONY: doc
@@ -146,6 +151,10 @@ doc: ## Build the rust documentation locally
doc:
	cargo doc --document-private-items

.PHONY: doc/format
doc/format: ## Format docs and the Kanidm book
	find . -type f -name \*.md -exec deno fmt $(MARKDOWN_FORMAT_ARGS) "{}" +

.PHONY: book
book: ## Build the Kanidm book
book:
View file
@@ -6,30 +6,31 @@

## About

Kanidm is a simple and secure identity management platform, which provides services to allow other
systems and applications to authenticate against. The project aims for the highest levels of
reliability, security and ease of use.

The goal of this project is to be a complete identity management provider, covering the broadest
possible set of requirements and integrations. You should not need any other components (like
Keycloak) when you use Kanidm. We want to create a project that will be suitable for everything from
personal home deployments, to the largest enterprise needs.

To achieve this we rely heavily on strict defaults, simple configuration, and self-healing
components.

The project is still growing and some areas are developing at a fast pace. The core of the server
however is reliable and we make every effort to ensure upgrades will always work.

Kanidm supports:

- Oauth2/OIDC Authentication provider for web SSO
- Read only LDAPS gateway
- Linux/Unix integration (with offline authentication)
- SSH key distribution to Linux/Unix systems
- RADIUS for network authentication
- Passkeys / Webauthn for secure cryptographic authentication
- A self service web ui
- Complete CLI tooling for administration

If you want to host your own centralised authentication service, then Kanidm is for you!

@@ -40,7 +41,8 @@ If you want to deploy Kanidm to see what it can do, you should read the Kanidm b

- [Kanidm book (Latest stable)](https://kanidm.github.io/kanidm/stable/)
- [Kanidm book (Latest commit)](https://kanidm.github.io/kanidm/master/)

We also publish
[support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md)
for what the project will support.

## Code of Conduct / Ethics

@@ -54,8 +56,8 @@ See our documentation on [rights and ethics]

## Getting in Contact / Questions

We have a [gitter community channel] where we can talk. Firstyear is also happy to answer questions
via email, which can be found on their github profile.

[gitter community channel]: https://gitter.im/kanidm/community

@@ -63,29 +65,29 @@ answer questions via email, which can be found on their github profile.

### LLDAP

[LLDAP](https://github.com/nitnelave/lldap) is a similar project aiming for a small and easy to
administer LDAP server with a web administration portal. Both projects use the
[Kanidm LDAP bindings](https://github.com/kanidm/ldap3), and have many similar ideas.

The primary benefit of Kanidm over LLDAP is that Kanidm offers a broader set of "built in" features
like Oauth2 and OIDC. To use these from LLDAP you need an external portal like Keycloak, where in
Kanidm they are "built in". However, a strength of LLDAP is that it offers "less", which may make it
easier to administer and deploy for you.

If Kanidm is too complex for your needs, you should check out LLDAP as a smaller alternative. If you
want a project which has a broader feature set out of the box, then Kanidm might be a better fit.

### 389-ds / OpenLDAP

Both 389-ds and OpenLDAP are generic LDAP servers. This means they only provide LDAP and you need to
bring your own IDM configuration on top.

If you need the highest levels of customisation possible from your LDAP deployment, then these are
probably better alternatives. If you want a service that is easier to set up and focused on IDM,
then Kanidm is a better choice.

Kanidm was originally inspired by many elements of both 389-ds and OpenLDAP. Already Kanidm is as
fast as (or faster than) 389-ds for performance and scaling.

### FreeIPA

@@ -101,15 +103,14 @@ Kanidm is probably for you.

## Developer Getting Started

If you want to develop on the server, there is a getting started [guide for developers]. IDM is a
diverse topic and we encourage contributions of many kinds in the project, from people of all
backgrounds.

[guide for developers]: https://kanidm.github.io/kanidm/master/DEVELOPER_README.html

## What does Kanidm mean?

The original project name was rsidm while it was a thought experiment. Now that it's growing and
developing, we gave it a better project name. Kani is Japanese for "crab". Rust's mascot is a crab.
IDM is the common industry term for identity management services.
View file
@ -1,4 +1,3 @@
<p align="center"> <p align="center">
<img src="https://raw.githubusercontent.com/kanidm/kanidm/master/artwork/logo-small.png" width="20%" height="auto" /> <img src="https://raw.githubusercontent.com/kanidm/kanidm/master/artwork/logo-small.png" width="20%" height="auto" />
</p> </p>
@ -9,247 +8,244 @@ To get started, see the [kanidm book]
# Feedback # Feedback
We value your feedback! First, please see our [code of conduct]. If you We value your feedback! First, please see our [code of conduct]. If you have questions please join
have questions please join our [gitter community channel] so that we our [gitter community channel] so that we can help. If you find a bug or issue, we'd love you to
can help. If you find a bug or issue, we'd love you to report it to our report it to our [issue tracker].
[issue tracker].
# Release Notes # Release Notes
## 2022-11-01 - Kanidm 1.1.0-alpha10 ## 2022-11-01 - Kanidm 1.1.0-alpha10
This is the tenth alpha series release of the Kanidm Identity Management This is the tenth alpha series release of the Kanidm Identity Management project. Alpha releases are
project. Alpha releases are to help get feedback and ideas from the community to help get feedback and ideas from the community on how we can continue to make this project better
on how we can continue to make this project better for a future supported release. for a future supported release.
The project is shaping up very nicely, and a beta will be coming soon! The project is shaping up very nicely, and a beta will be coming soon!
### Upgrade Note! ### Upgrade Note!
This version will *require* TLS on all servers, even if behind a load balancer or This version will _require_ TLS on all servers, even if behind a load balancer or TLS terminating
TLS terminating proxy. You should be ready for this change when you upgrade to the proxy. You should be ready for this change when you upgrade to the latest version.
latest version.
### Release Highlights

- Management and tracking of authenticated sessions
- Make upgrade migrations more robust when upgrading over multiple versions
- Add support for service account tokens via LDAP for extended read permissions
- Unix password management in web UI for POSIX accounts
- Support internal dynamic group entries
- Allow selection of name/spn in OIDC claims
- Admin UI wireframes and basic elements
- TLS enforced as a requirement for all servers
- Support API service account tokens
- Make name rules stricter due to issues found in production
- Improve Oauth2 PKCE testing
- Add support for new password import hashes
- Allow configuration of trusting X-Forwarded-For headers
- Components for account permission elevation modes
- Make pam\_unix more robust in high latency environments
- Add proc macros for test cases
- Improve authentication requests with cookie/token separation
- Cleanup of expired authentication sessions
- Improved administration of password badlists
## 2022-08-02 - Kanidm 1.1.0-alpha9

This is the ninth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.

The project is shaping up very nicely, and a beta will be coming soon!
### Release Highlights

- Inclusion of a Python3 API library
- Improve orca usability
- Improved content security hashes of js/wasm elements
- Performance improvements in builds
- Windows development and service support
- WebUI polish and improvements
- Consent is remembered in oauth2 improving access flows
- Replication changelog foundations
- Compression middleware for static assets to reduce load times
- User onboarding now possible with self service credential reset
- TOTP and Webauthn/Passkey support in self service credential reset
- CTAP2+ support in Webauthn via CLI
- Radius supports EAP TLS identities in addition to EAP PEAP
## 2022-05-01 - Kanidm 1.1.0-alpha8

This is the eighth alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights

- Foundations for cryptographic trusted device authentication
- Foundations for new user onboarding and credential reset
- Improve acis for administration of radius secrets
- Simplify initial server setup related to domain naming
- Improve authentication performance during high load
- Developer documentation improvements
- Resolve issues with client tool outputs not being displayed
- Show more errors on api failures
- Extend the features of account person set
- Link pam with pkg-config allowing more portable builds
- Allow self-service email addresses to be delegated
- Highlight that the WebUI is in alpha to prevent confusion
- Remove sync only client paths
## 2022-01-01 - Kanidm 1.1.0-alpha7

This is the seventh alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights

- Oauth2 scope to group mappings
- Webauthn subdomain support
- Oauth2 rfc7662 token introspection
- Basic OpenID Connect support
- Improve performance of domain rename
- Refactor of entry value internals to improve performance
- Addition of email address attributes
- Web UI improvements for Oauth2
## 2021-10-01 - Kanidm 1.1.0-alpha6

This is the sixth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.

It's also a special release as Kanidm has just turned 3 years old! Thank you all for helping to
bring the project this far! 🎉 🦀
### Release Highlights

- Support backup codes as MFA in case of lost TOTP/Webauthn
- Dynamic menus on CLI for usernames when multiple sessions exist
- Dynamic menus on CLI for auth factors when choices exist
- Better handle missing resources for web ui elements at server startup
- Add WAL checkpointing to improve disk usage
- Oauth2 user interface flows for simple authorisation scenarios
- Improve entry memory usage based on valueset rewrite
- Allow online backups to be scheduled and taken
- Reliability improvements for unixd components with missing sockets
- Error message improvements for humans
- Improve client address logging for auditing
- Add strict HTTP resource headers for incoming/outgoing requests
- Replace rustls with openssl for HTTPS endpoint
- Remove auditscope in favour of the new tracing logging subsystem
- Reduce server memory usage with entry tracking improvements
- Improvements to performance with high cache sizes
- Session tokens persist over a session restart
## 2021-07-07 - Kanidm 1.1.0-alpha5

This is the fifth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
### Release Highlights

- Fix a major defect in how backup/restore worked
- Improve query performance by caching partial queries
- Clarity of error messages and user communication
- Password badlist caching
- Orca, a kanidm and ldap load testing system
- TOTP usability improvements
- Oauth2 foundations
- CLI tool session management improvements
- Default shell falls back if the requested shell is not found
- Optional backup codes in case of lost MFA device
- Statistical analysis of indexes to improve query optimisation
- Handle broken TOTP authenticator apps
## 2021-04-01 - Kanidm 1.1.0-alpha4

This is the fourth alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights

- Performance Improvements
- TOTP CLI enrollment
- Jemalloc in main server instead of system allocator
- Command line completion
- TLS file handling improvements
- Webauthn authentication and enrollment on CLI
- Add db vacuum task
- Unix tasks daemon that automatically creates home directories
- Support for sk-ecdsa public ssh keys
- Badlist checked at login to determine account compromise
- Minor fixes for attribute display
## 2021-01-01 - Kanidm 1.1.0-alpha3

This is the third alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
### Release Highlights

- Account "valid from" and "expiry" times.
- Rate limiting and softlocking of account credentials to prevent bruteforcing.
- Foundations of webauthn and multiple credential support.
- Rewrite of json authentication protocol components.
- Unixd will cache "non-existent" items to improve nss/pam latency.
## 2020-10-01 - Kanidm 1.1.0-alpha2

This is the second alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights

- SIMD key lookups in container builds for data structures
- Server and Client hardening warnings for running users and file permissions
- Search limits and denial of unindexed searches to prevent denial-of-service
- Dynamic Rounds for PBKDF2 based on CPU performance
- Radius module upgraded to Python 3
- On-login PW upgrade, allowing weaker hashes to be re-computed to stronger variants on login.
- Replace actix with tide and async
- Reduction in memory footprint during searches
- Change authentication from cookies to auth-bearer tokens
## 2020-07-01 - Kanidm 1.1.0-alpha1

This is the first alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.

It would not be possible to create a project like this without the contributions and help of many
people. I would especially like to thank:
- Pando85
- Alberto Planas (aplanas)
- Jake (slipperyBishop)
- Charelle (Charcol)
- Leigh (excitedleigh)
- Jamie (JJJollyjim)
- Triss Healy (NiryaAestus)
- Samuel Cabrero (scabrero)
- Jim McDonough
### Release Highlights

- A working identity management server, including database
- RADIUS authentication and docker images
- Pam and Nsswitch resolvers for Linux/Unix authentication
- SSH public key distribution
- LDAP server front end for legacy applications
- Password badlisting and quality checking
- Memberof and reverse group management with referential integrity
- Recycle Bin
- Performance analysis tools
[issue tracker]: https://github.com/kanidm/kanidm/issues
[gitter community channel]: https://gitter.im/kanidm/community
[code of conduct]: https://github.com/kanidm/kanidm/blob/master/CODE_OF_CONDUCT.md
[kanidm book]: https://kanidm.github.io/kanidm/stable/
# Security Policy

Thanks for taking the time to engage with the project! We believe in the concept of
[coordinated disclosure](https://en.wikipedia.org/wiki/Coordinated_vulnerability_disclosure) and
currently expect a 60 day grace period for resolution of any outstanding issues.

## Supported Versions

As Kanidm is in early Alpha and under heavy development, only the most recent release and current
HEAD of the main branch will be supported.

## Reporting a Vulnerability

Please log a security-template issue on GitHub and directly contact one of the core team members via
email:

- [William](mailto:william@blackhats.net.au)
- [James](mailto:james+kanidm@terminaloutcomes.com)

We will endeavour to respond to your request as soon as possible; no bounties are available, but
acknowledgement will be made if you like.
## About these artworks

The original artworks were commissioned and produced by Jesse Irwin (tw: @wizardfortress).

The christmas logo was donated and produced by @ateneatla ( https://github.com/a

They are all very much appreciated!

All artworks are licensed as CC-BY-NC-ND.
## Architectural Overview

Kanidm has a number of components and layers that make it up. As this project is continually
evolving, if you have questions or notice discrepancies with this document please contact William
(Firstyear) at any time.

## Tools

Kanidm Tools are a set of command line clients that are intended to help administrators deploy,
interact with, and support a Kanidm server installation. These tools may also be used for servers or
machines to authenticate and identify users. This is the "human interaction" part of the server from
a CLI perspective.

## Clients

The `kanidm` client is a reference implementation of the client library, that others may consume or
interact with to communicate with a Kanidm server instance. The tools above use this client library
for all of its actions. This library is intended to encapsulate some high-level logic as an
abstraction over the REST API.

## Proto

The `kanidm` proto is a set of structures that are used by the REST and raw APIs for HTTP
communication. These are intended to be a reference implementation of the on-the-wire protocol, but
importantly these are also how the server represents its communication. This makes this the
authoritative source of protocol layouts with regard to REST or raw communication.

## Kanidmd (main server)

Kanidmd is intended to have minimal (thin) client tools, where the server itself contains most logic
for operations, transformations, and routing of requests to their relevant datatypes. As a result,
the `kanidmd` section is the largest component of the project as it implements nearly everything
required for IDM functionality to exist.

# Search

Search is the "hard worker" of the server, intended to be a fast path with minimal overhead so that
clients can acquire data as quickly as possible. The server follows the pattern below.

![Search flow diagram](diagrams/search-flow.png)
(1) All incoming requests are from a client on the left. These are either REST requests, or a
structured protocol request via the raw interface. It's interesting to note the raw request is
almost identical to the queryserver event types - whereas for REST requests we have to generate
request messages that can become events.

The frontend uses a webserver with a thread-pool to process and decode network I/O operations
concurrently. This then sends asynchronous messages to a worker (actor) pool for handling.
(2) These search messages in the actors are transformed into "events" - a self-contained structure
containing all relevant data related to the operation at hand. This may be the event origin (a user
or internal), the requested filter (query), and perhaps even a list of attributes requested. These
events are designed to ensure correctness. When a search message is transformed to a search event,
it is checked by the schema to ensure that the request is valid and can be satisfied securely.
As these workers are in a thread pool, it's important that these are concurrent and do not lock or
block - this concurrency is key to high performance and safety. It's also worth noting that this is
the level where read transactions are created and committed - all operations are transactionally
protected from an early stage to guarantee consistency of the operations.
3. When the event is known to be consistent, it is then handed to the queryserver - the query server
   begins a process of steps on the event to apply it and determine the results for the request.
   This process involves further validation of the query, association of metadata to the query for
   the backend, and then submission of the high-level query to the backend.
4. The backend takes the request and begins the low-level processing to actually determine a
   candidate set. The first step is query optimisation, to ensure we apply the query in the most
   efficient manner. Once optimised, we then use the query to query indexes and create a potential
   candidate set of identifiers for matching entries (5.). Once we have this candidate id set, we
   then retrieve the relevant entries as our result candidate set (6.) and return them (7.) to the
   backend.
5. The backend now deserialises the database's candidate entries into a higher-level and structured
   (and strongly typed) format that the query server knows how to operate on. These are then sent
   back to the query server.
6. The query server now applies access controls over what you can / can't see. This happens in two
   phases. The first is to determine "which candidate entries you have the rights to query and view"
   and the second is to determine "which attributes of each entry you have the right to perceive".
   This separation exists so that other parts of the server can _impersonate_ users and conduct
   searches on their behalf, but still internally operate on the full entry without access controls
   limiting their scope of attributes we can view.
7. From the reduced entry set (ie access controls applied), we can then transform each entry into
   its protocol forms - where we transform each strong type into a string representation for
   simpler processing for clients. These protoentries are returned to the front end.
8. Finally, the protoentries are now sent to the client in response to their request.
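The two-phase access control reduction in step 6 can be sketched as follows. This is a hedged
illustration with invented types (`Entry` as a plain attribute map, `AccessControls` as flat allow
lists), not Kanidm's real access control engine: phase one filters which candidate entries the
requester may view at all, and phase two strips each surviving entry down to the attributes they
may perceive.

```rust
use std::collections::BTreeMap;

// Simplified entry: attribute name -> value. The real server uses strongly
// typed valuesets; a string map is enough to show the two phases.
type Entry = BTreeMap<String, String>;

// Hypothetical flattened access control decision for one requester.
struct AccessControls {
    readable_entries: Vec<String>, // uuids the requester may view
    readable_attrs: Vec<String>,   // attributes the requester may perceive
}

fn apply_access_controls(candidates: Vec<Entry>, acp: &AccessControls) -> Vec<Entry> {
    candidates
        .into_iter()
        // Phase 1: which candidate entries do you have the right to query and view?
        .filter(|e| {
            e.get("uuid")
                .map(|u| acp.readable_entries.contains(u))
                .unwrap_or(false)
        })
        // Phase 2: which attributes of each surviving entry may you perceive?
        .map(|e| {
            e.into_iter()
                .filter(|(k, _)| acp.readable_attrs.contains(k))
                .collect()
        })
        .collect()
}
```

Keeping the phases separate is what lets internal impersonation searches operate on full entries
while only the final reduction step applies the requester's limits.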
# Write

The write path is similar to the search path, but has some subtle differences that are worth paying
attention to.

![Write flow diagram](diagrams/write-flow.png)
(1), (2) Like search, all client operations come from the REST or raw APIs, and are transformed or
generated into messages. These messages are sent to a single write worker. There is only a single
write worker due to the use of copy-on-write structures in the server, limiting us to a single
writer, but allowing search transactions to proceed in parallel without blocking.
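The single-writer, many-reader copy-on-write idea can be sketched with only the standard library.
This is illustrative, not Kanidm's actual implementation (which builds on dedicated concurrent
copy-on-write data structures): readers clone an `Arc` snapshot and never block on the writer,
while the writer builds a modified copy and atomically swaps it in.

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};

// Hypothetical copy-on-write "database" of attribute -> value pairs.
struct CowDb {
    // The Mutex only serialises writers and the pointer swap; readers clone
    // the Arc and read without holding the lock.
    current: Mutex<Arc<BTreeMap<String, String>>>,
}

impl CowDb {
    fn new() -> Self {
        CowDb {
            current: Mutex::new(Arc::new(BTreeMap::new())),
        }
    }

    // Read transaction: a cheap snapshot that stays consistent even if a
    // writer commits while we hold it.
    fn read_txn(&self) -> Arc<BTreeMap<String, String>> {
        self.current.lock().unwrap().clone()
    }

    // Write transaction: copy, mutate, swap. Only one writer runs at a time.
    fn write(&self, key: &str, value: &str) {
        let mut guard = self.current.lock().unwrap();
        let mut next = (**guard).clone();
        next.insert(key.to_string(), value.to_string());
        *guard = Arc::new(next);
    }
}
```

A snapshot taken before a write never observes that write, which is exactly the isolation property
that lets search transactions proceed in parallel with the single writer.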
(3) From the worker, the relevant event is created. This may be a "Create", "Modify" or "Delete"
event. The query server handles these slightly differently. In the create path, we take the set of
entries you wish to create as our candidate set. In modify or delete, we perform an impersonation
search, and use the set of entries within your read bounds to generate the candidate set. This
candidate set will now be used for the remainder of the writing operation.

It is at this point we assert access controls over the candidate set and the changes you wish to
make. If you are not within rights to perform these operations the event returns an error.
(4) The entries are now sent to the pre-operation plugins for the relevant operation type. This
allows transformation of the candidate entries beyond the scope of your access controls, and to
maintain some elements of data consistency. For example, one plugin prevents creation of system
protected types while another ensures that uuid exists on every entry.
(5) These transformed entries are now returned to the query server.
(6) The backend is sent the list of entries for writing. Indexes are generated (7) as required based
on the new or modified entries, and the entries themselves are written (8) into the core db tables.
This operation returns a result (9) to the backend, which is then filtered up to the query server
(10).
(11) Provided all operations to this point have been successful, we now apply post write plugins
which may enforce or generate different properties in the transaction. This is similar to the pre
plugins, but allows different operations. For example, a post plugin ensures uuid reference types
are consistent and valid across the set of changes in the database. The most critical is memberof,
which generates reverse reference links from entries to their group memberships, enabling fast rbac
operations. These are done as post plugins because at this point internal searches can now yield and
see the modified entries that we have just added to the indexes and datatables, which is important
for consistency (and simplicity) especially when you consider batched operations.
(12) Finally the result is returned up (13) through (14) the layers (15) to the client to inform
them of the success (or failure) of the operation.
# IDM

TBD
## Radius

The radius components are intended to be minimal to support a common set of radius operations in a
container image that is simple to configure. If you require a custom configuration you should use
the python tools here and configure your own radius instance as required.
# Domain Display Name

A human-facing string to use in places like web page titles, TOTP issuer codes, the OAuth
authorisation server name etc.

On system creation, or if it hasn't been set, it'll default to `format!("Kanidm {}", domain_name)`
so that you'll see `Kanidm idm.example.com` if your domain is `idm.example.com`.
## Indexing

Indexing is deeply tied to the concept of filtering. Indexes exist to make the application of a
search term (filter) faster.
## World without indexing

Almost all databases are built on top of a key-value storage engine of some nature. In our case we
are using (feb 2019) sqlite and hopefully SLED in the future.

So our entries that contain sets of avas are serialised into a byte format (feb 2019, json but soon
cbor) and stored in a table of "id: entry". For example:
| ID | data                                                                       |
| -- | -------------------------------------------------------------------------- |
| 01 | `{ 'Entry': { 'name': ['name'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 02 | `{ 'Entry': { 'name': ['beth'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 03 | `{ 'Entry': { 'name': ['alan'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 04 | `{ 'Entry': { 'name': ['john'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 05 | `{ 'Entry': { 'name': ['kris'], 'class': ['person'], 'uuid': ['...'] } }`  |
The ID column is _private_ to the backend implementation and is never revealed to the higher level
components. However the ID is very important to indexing :)
If we wanted to find `Eq(name, john)` here, what do we need to do? A full table scan is where we
perform:
    entry = deserialise(row)
    entry.match_filter(...) // check Eq(name, john)
For a small database (maybe up to 20 objects), this is probably fine. But once you start to get much
larger this is really costly. We continually load, deserialise, check and free data that is not
relevant to the search. This is why full table scans of any database (sql, ldap, anything) are so
costly. It's really really scanning everything!
## How does indexing work?

Indexing is a pre-computed lookup table of what you _might_ search in a specific format. Let's say
in our example we have an equality index on "name" as an attribute. Now in our backend we define an
extra table called "index_eq_name". Its contents would look like:
| index | idl (ID List) |
| ----- | ------------- |
| alan  | [03, ]        |
| beth  | [02, ]        |
| john  | [04, ]        |
We can now take this back to our id2entry table and perform:
```
data = sqlite.do(SELECT * from id2entry where ID = 04)
```
The key-value engine only gives us the entry for john, and we have a match! If id2entry had 1
million entries, a full table scan would be 1 million loads and compares - with the index, it was 2
loads and one compare. That's 30000x faster (potentially ;) )!
To improve on this, if we had a query like Or(Eq(name, john), Eq(name, kris)) we can use our indexes
to speed this up.
We would query index_eq_name again, and we would perform the search for both john, and kris. Because
this is an OR we then union the two idl's, and we would have:

```
[04, 05,]
```
Now we just have to get entries 04,05 from id2entry, and we have our matching query. This means
filters are often applied as idl set operations.
## Compressed ID lists

In order to make idl loading faster, and the set operations faster there is an idl library
(developed by me, firstyear), which will be used for this. To read more see:
https://github.com/Firstyear/idlset
## Filter Optimisation

Filter optimisation begins to play an important role when we have indexes. If we indexed something
like `Pres(class)`, then the idl for that search is the set of all database entries. Similarly, if
our database of 1 million entries has 250,000 `class=person`, then `Eq(class, person)` will have an
idl containing 250,000 ids. Even with idl compression, this is still a lot of data!
There tend to be two types of searches against a directory like Kanidm.

- Broad searches
- Targeted single entry searches
For broad searches, filter optimising does little - we just have to load those large idls, and use
them. (Yes, loading the large idl and using it is still better than full table scan though!)

However, for targeted searches, filter optimisation really helps.
In this case with our database of 250,000 persons, our idl's would have:
```
And( idl[250,000 ids], idl(1 id))
```
Which means the result will always be the _single_ id in the idl or _no_ value because it wasn't
present.
We add a single concept to the server called the "filter test threshold". This is the state in which
a candidate set that has not completed the operation is shortcut, and we then apply the filter in
the manner of a full table scan to the partial set because it will be faster than the index loading
and testing.
When we have this test threshold, there exist two possibilities for this filter.
```
And( idl[250,000 ids], idl(1 id))
```
We load 250,000 idl and then perform the intersection with the idl of 1 value, and result in 1 or 0.
```
And( idl(1 id), idl[250,000 ids])
```
We load the single idl value for name, and then as we are below the test-threshold we shortcut out
and apply the filter to entry ID 1 - yielding a match or no match.
Notice in the second, by promoting the "smaller" idl, we were able to save the work of the idl load
and intersection as our first equality of "name" was more targeted?
Filter optimisation is about re-arranging these filters in the server using our insight into the
data to provide faster searches and avoid indexes that are costly unless they are needed.
In this case, we would _demote_ any filter where Eq(class, ...) to the _end_ of the And, because it
is highly likely to be less targeted than the other Eq types. Another example would be promotion of
Eq filters to the front of an And over a Sub term, where Sub indexes tend to be larger and have
longer IDLs.
## Implementation Details and Notes

Before we discuss the details of the states and update processes, we need to consider the index
types we require.
# Index types

The standard index is a key-value, where the key is the lookup, and the value is the idl set of the
candidates. The examples follow the above.
For us, we will format the table names as:

- idx_eq_<attrname>
- idx_sub_<attrname>
- idx_pres_<attrname>

These will be string, blob for SQL. The string is the pkey.
We will have the Value's "to_index_str" emit the set of values. It's important to remember this is a
_set_ of possible index emissions, where we could have multiple values returned. This will be
important with claims for credentials so that the claims can be indexed correctly.
We also require a special name to uuid, and uuid to name index. These are to accelerate the
name2uuid and uuid2name functions which are common in resolving on search. These will be named in
the tables as:

- idx_name2uuid
- idx_uuid2name
They will be structured as string, string for both - where the uuid and name column matches the
correct direction, and is the primary key. We could use a single table, but if we change to sled we
need to split this, so we pre-empt this change and duplicate the data here.
# Indexing States

- Reindex

A reindex is the only time when we create the tables needed for indexing. In all other phases if we
do not have the table for the insertion, we log the error, and move on, instructing in the logs to
reindex asap.
Reindexing should be performed after we join a replication group, or when we "setup" the instance
for the first time. This means we need an "initial indexed" flag or similar.
For all intents, a reindex is likely the same as "create" but just without replacing the entry. We
would just remove all the index tables beforehand.
- Write operation index metadata

At the start of a write transaction, the schema passes us a map of the current attribute index
states so that on filter application or modification we are aware of what attrs are indexed. It is
assumed that `name2uuid` and `uuid2name` are always indexed.
- Search Index Metadata

When filters are resolved they are tagged by their indexed state to allow optimisation to occur. We
then process each filter element and their tag to determine the indexes needed to build a candidate
set. Once we reach threshold we return the partial candidate set, and begin the `id2entry` process
and the `entry_match_no_index` routine.
`And` and `Or` terms have flags if they are partial or fully indexed, meaning we could have a
shortcut where if the outermost term is a fully indexed term, then we can avoid the
`entry_match_no_index` call.
- Create

This is one of the simplest steps. On create we iterate over the entry's avas and, referencing the
index metadata of the transaction, create the indexes as needed from the values (before dbv
conversion).
- Delete

Given the Entry to delete, we remove the ava's and id's from each set as needed. Generally this will
only be for tombstones, but we still should check the process works. Important to check will be
entries with and without names, ensuring the name2uuid/uuid2name is correctly changed, and removal
of all the other attributes.
- Modify

This is the truly scary and difficult situation. The simple method would be to "delete" all indexes
based on the pre-entry state, and then to create again. However the current design of Entry and
modification doesn't work like this as we only get the Entry to add.
Most likely we will need to change modify to take the set of (pre, post) candidates as a pair _OR_
we have the entry store its own pre-post internally. Given we already need to store the pre/post
entries in the txn, it's likely better to have a pairing of these, and that allows us to then index
replication metadata later as the entry will contain its own changelog internally.
Given the pair, we then assert that they are the same entry (id). We can then use the index metadata
to generate an indexing diff between them, containing a set of index items to remove (due to removal
of the attr or value), and what to add (due to addition).
The major transformation cases for testing are:

- Add a multivalue (one)
- Add a multivalue (many)
- On a multivalue, add another value
- On multivalue, remove a value, but leave others
- Delete a multivalue
- Add a new single value
- Replace a single value
- Delete a single value
We also need to check that modification of name correctly changes name2uuid and uuid2name.

- Recycle to Tombstone (removal of name)
- Change of UUID (may happen in repl conflict scenario)
- Change of name
- Change of name and uuid

Of course, these should work as above too.
# Examples of situations for consideration

## Ability to be forgotten

### Deletion is delete not flagging

When an account is deleted it must be truly deleted, not just flagged for future delete. Note that
for some functionality, like the recycle bin, we must keep the account details, but a recycle bin
purge does truly delete the account.
## Self determination and autonomy

### Self name change

People should be able to change their own name at any time. Consider divorce, leaving abusive
partners or other personal decisions around why a name change is relevant.

This is why names are self-service writeable at any time.
### Cultural and Social awareness of name formats

All name fields should be case sensitive utf8 with no max or min length limit. This is because names
can take many forms such as:
- firstname middlename lastname
- firstname lastname
- firstname firstname lastname
- firstname lastname lastname
- firstname
- lastname firstname
And many many more that are not listed here. This is why our names are displayName as a freetext
UTF8 field, with case sensitivity and no limits.
### Access to legalName field

legalName should only be on a "need to know" basis, and only collected if required. This is to help
people who may be stalked or harassed, or otherwise conscious of their privacy.
## To use and access this software regardless of ability
# Statement of ethics and rights

Kanidm is a project that will store, process and present people's personal data. This means we have
a responsibility to respect the data of all people who could be using our system - many who interact
indirectly or do not have a choice in this platform.

## Rights of people

All people using this software should expect to have the right to:
- Self control over their data, including the ability to alter or delete at any time.
- Freedom from harmful discrimination of any kind
- Informed consent over control and privacy of their data, including access to and understanding of
  data held and shared on their behalf
- The ability to use and access this software regardless of ability, culture or language.
## More?
# Apache OAuth config example

This example is here mainly for devs to come up with super complicated ways to test the changes
they're making which affect OAuth things.

## Example of how to run it
This directory contains developer and integration resources to assist with migrations from other
identity and access management services.
The MSRV is specified in the package `Cargo.toml` files.
### Build Profiles

Setting different developer profiles while building is done by setting the environment variable
`KANIDM_BUILD_PROFILE` to the bare filename of one of the TOML files in `/profiles`.
For example, this will set the CPU flags to "none" and the location for the Web UI files to
`/usr/share/kanidm/ui/pkg`:

```bash
KANIDM_BUILD_PROFILE=release_suse_generic cargo build --release --bin kanidmd
```
You will need [rustup](https://rustup.rs/) to install a Rust toolchain.
#### SUSE

You will need [rustup](https://rustup.rs/) to install a Rust toolchain. If you're using the
Tumbleweed release, it's packaged in `zypper`.
You will also need some system libraries to build this:

```
libudev-devel sqlite3-devel libopenssl-devel
```
#### Fedora

You need to install the Rust toolchain packages:

```bash
rust cargo
```
You will also need some system libraries to build this:

```
systemd-devel sqlite-devel openssl-devel pam-devel
```
Building the Web UI requires additional packages: Building the Web UI requires additional packages:
```
perl-FindBin perl-File-Compare rust-std-static-wasm32-unknown-unknown perl-FindBin perl-File-Compare rust-std-static-wasm32-unknown-unknown
```
#### Ubuntu
@ -68,7 +74,7 @@ You need [rustup](https://rustup.rs/) to install a Rust toolchain.
You will also need some system libraries to build this, which can be installed by running:

```bash
sudo apt-get install libsqlite3-dev libudev-dev libssl-dev pkg-config libpam0g-dev
```
@ -78,68 +84,72 @@ Tested with Ubuntu 20.04 and 22.04.
You need [rustup](https://rustup.rs/) to install a Rust toolchain.

An easy way to grab the dependencies is to install
[vcpkg](https://vcpkg.io/en/getting-started.html).

This is how it works in the automated build:
1. Enable use of installed packages for the user system-wide:

   ```bash
   vcpkg integrate install
   ```

2. Install the openssl dependency, which compiles it from source. This downloads all sorts of
   dependencies, including perl for the build.

   ```bash
   vcpkg install openssl:x64-windows-static-md
   ```
There's a powershell script in the root directory of the repository which, in concert with
`openssl`, will generate a config file and certs for testing.
### Get Involved

To get started, you'll need to fork or branch, and we'll merge based on pull requests.

If you are a contributor to the project, simply clone:
```bash
git clone git@github.com:kanidm/kanidm.git
```
If you are forking, then fork in GitHub and clone with:

```bash
git clone https://github.com/kanidm/kanidm.git
cd kanidm
git remote add myfork git@github.com:<YOUR USERNAME>/kanidm.git
```
Select an issue (always feel free to reach out to us for advice!), and create a branch to start
working:

```bash
git branch <feature-branch-name>
git checkout <feature-branch-name>
cargo test
```
When you are ready for review (even if the feature isn't complete and you just want some advice):

1. Run the test suite: `cargo test --workspace`
2. Ensure rust formatting standards are followed: `cargo fmt --check`
3. Try following the suggestions from clippy, after running `cargo clippy`. This is not a blocker on
   us accepting your code!
4. Then commit your changes:
```bash
git commit -m 'Commit message' change_file.rs ...
git push <myfork/origin> <feature-branch-name>
```
If you receive advice or make further changes, just keep committing to the branch, and pushing to
your branch. When we are happy with the code, we'll merge in GitHub, meaning you can now clean up
your branch.
```bash
git checkout master
git pull
git branch -D <feature-branch-name>
@ -149,17 +159,17 @@ git branch -D <feature-branch-name>
If you are asked to rebase your change, follow these steps:

```bash
git checkout master
git pull
git checkout <feature-branch-name>
git rebase master
```
Then be sure to fix any merge issues or other comments as they arise. If you have issues, you can
always stop and reset with:

```bash
git rebase --abort
```
@ -168,47 +178,55 @@ git rebase --abort
After getting the code, you will need a rust environment. Please investigate
[rustup](https://rustup.rs) for your platform to establish this.

Once you have the source code, you need encryption certificates to use with the server, because
without certificates, authentication will fail.

We recommend using [Let's Encrypt](https://letsencrypt.org), but if this is not possible, please use
our insecure certificate tool (`insecure_generate_tls.sh`).
**NOTE:** Windows developers can use `insecure_generate_tls.ps1`, which puts everything (including
a templated config file) in `$TEMP\kanidm`. Please adjust paths below to suit.
The insecure certificate tool creates `/tmp/kanidm` and puts some self-signed certificates there.

You can now build and run the server with the commands below. It will use a database in
`/tmp/kanidm.db`.

Create the initial database and generate an `admin` username:
```bash
cargo run --bin kanidmd recover_account -c ./examples/insecure_server.toml admin
<snip>
Success - password reset to -> Et8QRJgQkMJu3v1AQxcbxRWW44qRUZPpr6BJ9fCGapAB9cT4
```
Record the password above, then run the server start command:
```bash
cd kanidmd/daemon
cargo run --bin kanidmd server -c ../../examples/insecure_server.toml
```
(The server start command is also a script in `kanidmd/daemon/run_insecure_dev_server.sh`)

In a new terminal, you can now build and run the client tools with:
```bash
cargo run --bin kanidm -- --help
cargo run --bin kanidm -- login -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- login -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
```
### Raw actions

The server has a low-level stateful API you can use for more complex or advanced tasks on large
numbers of entries at once. Some examples are below, but generally we advise you to use the APIs or
CLI tools. These are very handy to "unbreak" something if you make a mistake however!
```bash
# Create from json (group or account)
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json
@ -222,21 +240,26 @@ very handy to "unbreak" something if you make a mistake however!
# Delete all entries matching a filter
kanidm raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "test_account_delete_me"]}'
```
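When scripting raw actions, hand-quoting filter JSON inside shell strings is error-prone. As a
sketch (assuming `jq` is installed — it is not a Kanidm requirement), you can build the filter
safely instead:

```bash
# Build an "eq" filter as compact JSON with jq, so shell quoting of the
# attribute/value pair cannot corrupt the document.
attr="name"
value="test_account_delete_me"
filter=$(jq -cn --arg a "$attr" --arg v "$value" '{eq: [$a, $v]}')
echo "$filter"
# {"eq":["name","test_account_delete_me"]}
```

The resulting string can then be passed as the final argument to `kanidm raw delete`.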
### Building the Web UI

**NOTE:** There is a pre-packaged version of the Web UI at `/kanidmd_web_ui/pkg/`, which can be used
directly. This means you don't need to build the Web UI yourself.

The Web UI uses Rust WebAssembly rather than Javascript. To build this you need to set up the
environment:
```bash
cargo install wasm-pack
```
Then you are able to build the UI:
```bash
cd kanidmd_web_ui/
./build_wasm_dev.sh
```
To build for release, run `build_wasm_release.sh`.
@ -246,14 +269,16 @@ The "developer" profile for kanidmd will automatically use the pkg output in thi
Build a container with the current branch using:
```bash
make <TARGET>
```
Check `make help` for a list of valid targets.

The following environment variables control the build:

| ENV variable           | Definition                                          | Default  |
| ---------------------- | --------------------------------------------------- | -------- |
| `IMAGE_BASE`           | Base location of the container image.               | `kanidm` |
| `IMAGE_VERSION`        | Determines the container's tag.                     | None     |
| `CONTAINER_TOOL_ARGS`  | Specify extra options for the container build tool. | None     |
@ -266,29 +291,31 @@ The following environment variables control the build:
Build a `kanidm` container using `podman`:
```bash
CONTAINER_TOOL=podman make build/kanidmd
```
Build a `kanidm` container and use a redis build cache:
```bash
CONTAINER_BUILD_ARGS='--build-arg "SCCACHE_REDIS=redis://redis.dev.blackhats.net.au:6379"' make build/kanidmd
```
#### Automatically Built Containers

To speed up testing across platforms, we're leveraging GitHub actions to build containers for test
use.

Whenever code is merged with the `master` branch of Kanidm, containers are automatically built for
`kanidmd` and `radius`. Sometimes they fail to build, but we'll try to keep them available.
To find information on the packages,
[visit the Kanidm packages page](https://github.com/orgs/kanidm/packages?repo_name=kanidm).

An example command for pulling and running the radius container is below. You'll need to
[authenticate with the GitHub container registry first](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry).
```bash
docker pull ghcr.io/kanidm/radius:devel
docker run --rm -it \
  -v $(pwd)/kanidm:/data/kanidm \
@ -301,20 +328,20 @@ This assumes you have a `kanidm` client configuration file in the current workin
You'll need `mdbook` to build the book:

```bash
cargo install mdbook
```

To build it:

```bash
cd kanidm_book
mdbook build
```

Or to run a local webserver:

```bash
cd kanidm_book
mdbook serve
```

View file

@ -5,22 +5,22 @@ for these data. As a result, there are many concepts and important details to un
## Service Accounts vs Person Accounts

Kanidm separates accounts into two types. Person accounts (or persons) are intended for use by
humans that will access the system in an interactive way. Service accounts are intended for use by
computers or services that need to identify themselves to Kanidm. Generally a person or group of
persons will be responsible for and will manage service accounts. Because of this distinction these
classes of accounts have different properties and methods of authentication and management.
## Groups

Groups represent a collection of entities. This generally is a collection of persons or service
accounts. Groups are commonly used to assign privileges to the accounts that are members of a group.
This allows easier administration over larger systems where privileges can be assigned to groups in
a logical manner, and then only membership of the groups needs administration, rather than needing
to assign privileges to each entity directly and uniquely.

Groups may also be nested, where a group can contain another group as a member. This allows
hierarchies to be created, again for easier administration.
## Default Accounts and Groups
@ -30,33 +30,29 @@ Identity Management (IDM) systems.
There are two builtin system administration accounts.

`admin` is the default service account which has privileges to configure and administer kanidm as a
whole. This account can manage access controls, schema, integrations and more. However the `admin`
can not manage persons by default, to separate the privileges. As this is a service account, it is
intended for limited use.
`idm_admin` is the default service account which has privileges to create persons and to manage
these accounts and groups. They can perform credential resets and more.

Both the `admin` and the `idm_admin` user should _NOT_ be used for daily activities - they exist for
initial system configuration, and for disaster recovery scenarios. You should delegate permissions
as required to named user accounts instead.

The majority of the builtin groups are privilege groups that provide rights over Kanidm
administrative actions. These include groups for account management, person management (personal and
sensitive data), group management, and more.
## Recovering the Initial Admin Accounts

By default the `admin` and `idm_admin` accounts have no password, and can not be accessed. They need
to be "recovered" from the server that is running the kanidmd server.

{{#template templates/kani-warning.md imagepath=images title=Warning! text=The server must not be
running at this point, as it requires exclusive access to the database. }}
```shell
kanidmd recover_account admin -c /etc/kanidm/server.toml
@ -66,7 +62,7 @@ kanidmd recover_account admin -c /etc/kanidm/server.toml
To do this with Docker, you'll need to stop the existing container and use the "command" argument to
access the kanidmd binary.

```bash
docker run --rm -it \
  -v/tmp/kanidm:/data \
  --name kanidmd \
@ -80,7 +76,7 @@ After the recovery is complete the server can be started again.
Once you have access to the admin account, it is able to reset the credentials of the `idm_admin`
account.

```bash
kanidm login -D admin
kanidm service-account credential generate -D admin idm_admin
# Success: wJX...
@ -90,10 +86,10 @@ These accounts will be used through the remainder of this document for managing
## Viewing Default Groups

You should take some time to inspect the default groups which are related to default permissions.
These can be viewed with:

```bash
kanidm group list
kanidm group get <name>
```
@ -102,7 +98,7 @@ kanidm group get <name>
By default `idm_admin` has the privileges to create new persons in the system.

```bash
kanidm login --name idm_admin
kanidm person create demo_user "Demonstration User" --name idm_admin
kanidm person get demo_user --name idm_admin
@ -115,37 +111,37 @@ kanidm group list_members demo_group --name idm_admin
You can also use anonymous to view accounts and groups - note that you won't see certain fields due
to the limits of the access control anonymous access profile.

```bash
kanidm login --name anonymous
kanidm person get demo_user --name anonymous
```
Kanidm allows person accounts to include human related attributes, such as their legal name and
email address.

Initially, a person does not have these attributes. If desired, a person may be modified to have
these attributes.

```bash
# Note, both the --legalname and --mail flags may be omitted
kanidm person update demo_user --legalname "initial name" --mail "initial@email.address"
```
{{#template templates/kani-warning.md imagepath=images title=Warning! text=Persons may change their
own displayname, name, and legal name at any time. You MUST NOT use these values as primary keys in
external systems. You MUST use the `uuid` attribute present on all entries as an external primary
key. }}
## Resetting Person Account Credentials

Members of the `idm_account_manage_priv` group have the rights to manage person and service
accounts' security and login aspects. This includes resetting account credentials.

You can perform a password reset on the demo_user, for example as the idm_admin user, who is a
default member of this group. The lines below prefixed with `#` are the interactive credential
update interface.
```bash
kanidm person credential update demo_user --name idm_admin
# spn: demo_user@idm.example.com
# Name: Demonstration User
@ -172,39 +168,39 @@ kanidm self whoami --name demo_user
The `admin` service account can be used to create service accounts.

```bash
kanidm service-account create demo_service "Demonstration Service" --name admin
kanidm service-account get demo_service --name admin
```
## Using API Tokens with Service Accounts

Service accounts can have api tokens generated and associated with them. These tokens can be used
for identification of the service account, and for granting extended access rights where the service
account may previously have not had the access. Additionally service accounts can have expiry times
and other auditing information attached.

To show api tokens for a service account:

```bash
kanidm service-account api-token status --name admin ACCOUNT_ID
kanidm service-account api-token status --name admin demo_service
```
By default api tokens are issued to be "read only", so they are unable to make changes on behalf of
the service account they represent. To generate a new read only api token:

```bash
kanidm service-account api-token generate --name admin ACCOUNT_ID LABEL [EXPIRY]
kanidm service-account api-token generate --name admin demo_service "Test Token"
kanidm service-account api-token generate --name admin demo_service "Test Token" 2020-09-25T11:22:02+10:00
```
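The optional EXPIRY is an RFC 3339 timestamp with an offset, like the one shown. As a sketch
(assuming GNU `date`; BSD/macOS `date` uses different flags), you can generate one relative to now:

```bash
# Produce an RFC 3339 expiry 30 days from now, including the local UTC offset.
EXPIRY=$(date -d "+30 days" +"%Y-%m-%dT%H:%M:%S%:z")
echo "$EXPIRY"
```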
If you wish to issue a token that is able to make changes on behalf of the service account, you must
add the "--rw" flag during the generate command. It is recommended you only add --rw when the
api-token is performing writes to Kanidm.

```bash
kanidm service-account api-token generate --name admin ACCOUNT_ID LABEL [EXPIRY] --rw
kanidm service-account api-token generate --name admin demo_service "Test Token" --rw
kanidm service-account api-token generate --name admin demo_service "Test Token" 2020-09-25T11:22:02+10:00 --rw
@ -213,7 +209,7 @@ kanidm service-account api-token generate --name admin demo_service "Test Token"
To destroy (revoke) an api token you will need its token id. This can be shown with the "status"
command.

```bash
kanidm service-account api-token destroy --name admin ACCOUNT_ID TOKEN_ID
kanidm service-account api-token destroy --name admin demo_service 4de2a4e9-e06a-4c5e-8a1b-33f4e7dd5dc7
```
@ -221,7 +217,7 @@ kanidm service-account api-token destroy --name admin demo_service 4de2a4e9-e06a
Api tokens can also be used to gain extended search permissions with LDAP. To do this you can bind
with a dn of `dn=token` and provide the api token in the password.

```bash
ldapwhoami -H ldaps://URL -x -D "dn=token" -w "TOKEN"
ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..."
# u: demo_service@idm.example.com
@ -229,18 +225,15 @@ ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..."
## Resetting Service Account Credentials (Deprecated) ## Resetting Service Account Credentials (Deprecated)
{{#template {{#template templates/kani-warning.md imagepath=images text=Api Tokens are a better method to manage
templates/kani-warning.md credentials for service accounts, and passwords may be removed in the future! }}
imagepath=images
text=Api Tokens are a better method to manage credentials for service accounts, and passwords may be removed in the future!
}}
Service accounts cannot have their credentials interactively updated in the same manner as persons.
Service accounts may only have server-side generated high-entropy passwords.

To re-generate this password for an account:
```bash
kanidm service-account credential generate demo_service --name admin
```
An example can be easily shown with:

```bash
kanidm group create group_1 --name idm_admin
kanidm group create group_2 --name idm_admin
kanidm person create nest_example "Nesting Account Example" --name idm_admin
## Account Validity

Kanidm supports accounts that are only able to authenticate between a pair of dates and times; the
"valid from" and "expires" timestamps define these points in time.

This can be displayed with:
These datetimes are stored in the server as UTC, but presented according to your local system time
to aid correct understanding of when the events will occur.
To set the values, an account with account management permission is required (for example,
idm_admin).

You may set these time and date values in any timezone you wish (such as your local timezone), and
the server will transform these to UTC. These time values are in iso8601 format, and you should
specify this as:
```
YYYY-MM-DDThh:mm:ssZ+-hh:mm
Year-Month-Day T hour:minutes:seconds Z +- timezone offset
```
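If you prefer to generate such a timestamp from the command line rather than writing it by hand, GNU
coreutils `date` can emit this format directly. This helper is an illustration only, not part of the
kanidm tooling, and the `-d` flag is GNU-specific:

```bash
# Print the current moment in the format above, as UTC (+00:00 offset).
date -u +"%Y-%m-%dT%H:%M:%S+00:00"

# Render a specific wall-clock time; with -u the input is interpreted as UTC.
date -u -d '2020-09-25 11:22:04' +"%Y-%m-%dT%H:%M:%S+00:00"
# prints 2020-09-25T11:22:04+00:00
```

The resulting string can be passed directly to the `kanidm person validity` commands shown below.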
Set the earliest time the account can start authenticating:

```bash
kanidm person validity begin_from demo_user '2020-09-25T11:22:04+00:00' --name idm_admin
```
Set the expiry or end date of the account:

```bash
kanidm person validity expire_at demo_user '2020-09-25T11:22:04+00:00' --name idm_admin
```
To unset or remove these values the following can be used, where `any|clear` means you may use
either `any` or `clear`.

```bash
kanidm person validity begin_from demo_user any|clear --name idm_admin
kanidm person validity expire_at demo_user never|clear --name idm_admin
```
To "lock" an account, you can set the expire_at value to the past, or unix epoch. Even in the
situation where the "valid from" is _after_ the expire_at, the expire_at will be respected.

```
kanidm person validity expire_at demo_user 1970-01-01T00:00:00+00:00 --name idm_admin
```
These validity settings impact all authentication functions of the account (kanidm, ldap, radius).

### Allowing people accounts to change their mail attribute
By default, Kanidm allows an account to change some attributes, but not their mail address.

Adding the user to the `idm_people_self_write_mail_priv` group, as shown below, allows the user to
edit their own mail.

```
kanidm group add_members idm_people_self_write_mail_priv demo_user --name idm_admin
```
## Why Can't I Change admin With idm_admin?

As a security mechanism there is a distinction between "accounts" and "high permission accounts".
This is to help prevent elevation attacks, where, for example, a member of a service desk could
attempt to reset the password of idm_admin or admin, or a member of HR or System Admin teams could
attempt to move laterally.
Generally, membership of a "privilege" group that ships with Kanidm, such as:

- idm_account_manage_priv
- idm_people_read_priv
- idm_schema_manage_priv
- many more ...
...indirectly grants you membership to "idm_high_privilege". If you are a member of this group, the
standard "account" and "people" rights groups are NOT able to alter, read or manage these accounts.
To manage these accounts, higher rights are required, such as those held by the admin account.
Further, groups that are considered "idm_high_privilege" can NOT be managed by the standard
"idm_group_manage_priv" group.

Management of high privilege accounts and groups is granted through the "hp" variants of all
privileges. A non-exhaustive list:
- idm_hp_account_read_priv
- idm_hp_account_manage_priv
- idm_hp_account_write_priv
- idm_hp_group_manage_priv
- idm_hp_group_write_priv
Membership of any of these groups should be considered to be equivalent to system administration
rights in the directory, and by extension, over all network resources that trust Kanidm.

All groups that are flagged as "idm_high_privilege" should be audited and monitored to ensure that
they are not altered.
# Administration Tasks

This chapter describes some of the routine administration tasks for running a Kanidm server, such
as making backups and restoring from backups, testing server configuration, reindexing, verifying
data consistency, and renaming your domain.
# Backup and Restore

With any Identity Management (IDM) software, it's important you have the capability to restore in
case of a disaster - be that physical damage or a mistake. Kanidm supports backup and restore of
the database with three methods.
## Method 1 - Automatic Backup

Automatic backups can be generated online by a `kanidmd server` instance by including the
`[online_backup]` section in the `server.toml`. This allows you to run regular backups, defined by
a cron schedule, and maintain the number of backup versions to keep. An example is located in
[examples/server.toml](https://github.com/kanidm/kanidm/blob/master/examples/server.toml).
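As a rough sketch of what such a section can look like - treat the key names and values here as
illustrative placeholders and consult the linked `examples/server.toml` for the authoritative
reference:

```toml
[online_backup]
# Directory the backup files are written into
path = "/var/lib/kanidm/backups/"
# Cron-style schedule: here, 22:00 every day
schedule = "00 22 * * *"
# Number of backup versions to retain
versions = 7
```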
## Method 2 - Manual Backup

This method uses the same process as the automatic process, but is manually invoked. This can be
useful for pre-upgrade backups.

To take the backup (assuming our docker environment) you first need to stop the instance:
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
    kanidm/server:latest /sbin/kanidmd database backup -c /data/server.toml \
    /backup/kanidm.backup.json
docker start <container name>
```
You can then restart your instance. DO NOT modify the backup.json as it may introduce data errors
into your instance.

To restore from the backup:
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
    kanidm/server:latest /sbin/kanidmd database restore -c /data/server.toml \
    /backup/kanidm.backup.json
docker start <container name>
```
## Method 3 - Manual Database Copy

This is a simple backup of the data volume.
```bash
docker stop <container name>
# Backup your docker's volume folder
docker start <container name>
```
Restoration is the reverse process.
# Choosing a Domain Name

Throughout this book, Kanidm will make reference to a "domain name". This is your chosen DNS
domain name that you intend to use for Kanidm. Choosing this domain name, however, is not simple
as there are a number of considerations you need to be careful of.
{{#template templates/kani-warning.md imagepath=images/ title=Take note! text=Incorrect choice of
the domain name may have security impacts on your Kanidm instance, not limited to credential
phishing, theft, session leaks and more. It is critical you follow the advice in this chapter. }}
## Considerations
It is recommended you use a domain name within a domain that you own. While many examples list
`example.com` throughout this book, it is not recommended to use this outside of testing. Another
example of a risky domain to use is `local`. While it seems appealing to use these, because you do
not have unique ownership of these domains, if you move your machine to a foreign network, it is
possible you may leak credentials or other cookies to these domains. TLS in a majority of cases
can and will protect you from such leaks, but it should not always be relied upon as a sole line
of defence.

Failure to use a unique domain you own may allow DNS hijacking or other credential leaks in some
circumstances.
### Subdomains

Due to how web browsers and webauthn work, any matching domain name or subdomain of an effective
domain may have access to cookies within a browser session. An example is that `host.a.example.com`
has access to cookies from `a.example.com` and `example.com`.
For this reason your kanidm host (or hosts) should be on a unique subdomain, with no other services
registered under that subdomain. For example, consider `idm.example.com` as a subdomain for
exclusive use of kanidm. This is _inverse_ to Active Directory which often has its domain name
selected to be the parent (toplevel) domain (`example.com`).

Failure to use a unique subdomain may allow cookies to leak to other entities within your domain,
and may allow webauthn to be used on entities you did not intend for, which may lead to phishing
scenarios.
## Examples

### Good Domain Names

Consider we own `kanidm.com`. If we were to run geographical instances, and have testing
environments, the following domain and hostnames could be used.
_production_

- origin: `https://idm.kanidm.com`
- domain name: `idm.kanidm.com`
- host names: `australia.idm.kanidm.com`, `newzealand.idm.kanidm.com`
This allows us to have named geographical instances such as `https://australia.idm.kanidm.com`
which still works with webauthn and cookies which are transferable between instances.

It is critical no other hosts are registered under this domain name.
_testing_

- origin: `https://idm.dev.kanidm.com`
- domain name: `idm.dev.kanidm.com`
- host names: `australia.idm.dev.kanidm.com`, `newzealand.idm.dev.kanidm.com`
Note that due to the name being `idm.dev.kanidm.com` vs `idm.kanidm.com`, the testing instance is
not a subdomain of production, meaning the cookies and webauthn tokens can NOT be transferred
between them. This provides proper isolation between the instances.
### Bad Domain Names

`idm.local` - This is a bad example as `.local` is an mDNS domain name suffix which means that
client machines, if they visit another network, _may_ try to contact `idm.local` believing they
are on their usual network. If TLS verification were disabled, this would allow leaking of
credentials.
`kanidm.com` - This is bad because the use of the top level domain means that any subdomain can
access the cookies issued by `kanidm.com`, effectively leaking them to all other hosts.

Second instance overlap:
_production_

- origin: `https://idm.kanidm.com`
- domain name: `idm.kanidm.com`
_testing_
- origin: `https://dev.idm.kanidm.com`
- domain name: `dev.idm.kanidm.com`
While the production instance has a valid and well defined subdomain that doesn't conflict, because
the dev instance is a subdomain of production, it allows production cookies to leak to dev. Dev
instances may have weaker security controls in some cases which can then allow compromise of the
production instance.
# Client tools

To interact with Kanidm as an administrator, you'll need to use our command line tools. If you
haven't installed them yet, [install them now](installing_client_tools.md).
## Kanidm configuration

You can configure `kanidm` to help make commands simpler by modifying `~/.config/kanidm` or
`/etc/kanidm/config`.
```toml
uri = "https://idm.example.com"
verify_ca = true|false
verify_hostnames = true|false
ca_path = "/path/to/ca.pem"
```
Once configured, you can test this with:
```bash
kanidm self whoami --name anonymous
```
## Session Management

To authenticate as a user (for use with the command line), you need to use the `login` command to
establish a session token.
```bash
kanidm login --name USERNAME
kanidm login --name admin
```
Once complete, you can use `kanidm` without re-authenticating for a period of time for
administration.

You can list active sessions with:
```bash
kanidm session list
```
Sessions will expire after a period of time (by default 1 hour). To remove these expired sessions
locally you can use:
```bash
kanidm session cleanup
```
To log out of a session:
```bash
kanidm logout --name USERNAME
kanidm logout --name admin
```
## Reindexing

In some (rare) cases you may need to reindex. Please note the server will sometimes reindex on
startup as a result of the project changing its internal schema definitions. This is normal and
expected - you may never need to start a reindex yourself as a result!

You'll likely notice a need to reindex if you add indexes to schema and you see a message in your
logs such as:
```
Index EQUALITY name not found
Index {type} {attribute} not found
```
This indicates that an index of type equality has been added for name, but the indexing process
has not been run. The server will continue to operate and the query execution code will correctly
process the query - however it will not be the optimal method of delivering the results as we need
to disregard this part of the query and act as though it's un-indexed.

Reindexing will resolve this by forcing all indexes to be recreated based on their schema
definitions (this works even though the schema is in the same database!)
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd reindex -c /data/server.toml
docker start <container name>
```
Generally, reindexing is a rare action and should not normally be required.

## Vacuum
[Vacuuming](https://www.sqlite.org/lang_vacuum.html) is the process of reclaiming un-used pages
from the sqlite freelists, as well as performing some data reordering tasks that may make some
queries more efficient. It is recommended that you vacuum after a reindex is performed or when you
wish to reclaim space in the database file.

Vacuum is also able to change the pagesize of the database. After changing `db_fs_type` (which
affects pagesize) in server.toml, you must run a vacuum for this to take effect:
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd vacuum -c /data/server.toml
docker start <container name>
```
## Verification

The server ships with a number of verification utilities to ensure that data is consistent such
as referential integrity or memberof.

Note that verification really is a last resort - the server does _a lot_ to prevent and self-heal
from errors at run time, so you should rarely if ever require this utility. This utility was
developed to guarantee consistency during development!

You can run a verification with:
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd verify -c /data/server.toml
docker start <container name>
```
If you have errors, please contact the project to help support you to resolve these.
# Designs
# Access Profiles
Access Profiles (ACPs) are a way of expressing the set of actions which accounts are permitted to
perform on database records (`object`) in the system.

As a result, there are specific requirements to what these can control and how they are expressed.

Access profiles define an action of `allow` or `deny`: `deny` has priority over `allow` and will
override even if applicable. They should only be created by system access profiles because certain
changes must be denied.

Access profiles are stored as entries and are dynamically loaded into a structure that is more
efficient for use at runtime. `Schema` and its transactions are a similar implementation.

## Search Requirements
A search access profile must be able to limit:
An example:

> Alice should only be able to search for objects where the class is `person` and the object is a
> memberOf the group called "visible".
>
> Alice should only be able to see the attribute `displayName` for those users (not their
> `legalName`), and their public `email`.
Worded a bit differently. You need permission over the scope of entries, you need to be able to
read the attribute to filter on it, and you need to be able to read the attribute to receive it in
the result entry.
If Alice searches for `(&(name=william)(secretdata=x))`, we should not allow this to proceed
because Alice doesn't have the rights to read secret data, so they should not be allowed to filter
on it. How does this work with two overlapping ACPs? For example: one that allows read of name and
description to class = group, and one that allows name to user. We don't want to say
`(&(name=x)(description=foo))` and it to be allowed, because we don't know the target class of the
filter. Do we "unmatch" all users because they have no access to the filter components? (Could be
done by inverting and putting in an AndNot of the non-matchable overlaps). Or do we just filter our
description from the users returned (But that implies they DID match, which is a disclosure).
More concrete:
```
search {
    action: allow
    targetscope: Eq("class", "group")
    ...
}
```
A potential defense is:
```
acp class group: Pres(name) and Pres(desc) both in target attr, allow
acp class user: Pres(name) allow, Pres(desc) deny. Invert and Append
```
So the filter now is:
```
And: {
    AndNot: {
        Eq("class", "user")
        ...
    }
    ...
}
```

This would now only allow access to the `name` and `description` of the class `group`.
If we extend this to a third, this would work. A more complex example:
```
search {
    action: allow
    targetscope: Eq("class", "group")
    ...
}
```
Now we have a single user where we can read `description`. So the compiled filter above becomes:
```
And: {
    AndNot: {
        Eq("class", "user")
        ...
    }
    ...
}
```
This would now be invalid: first, because we would see that `class=user` and `william` has no name,
so that would be excluded also. We also may not even have "class=user" in the second ACP, so we
can't use subset filter matching to merge the two.
As a result, I think the only possible valid solution is to perform the initial filter, then
determine on the candidates if we _could_ have valid access to filter on all required attributes.
This means even with an index lookup, we are still required to perform some filter application on
the candidates.
I think this will mean on a possible candidate, we have to apply all ACPs, then create a union of
the resulting targetattrs, and then compare that set against the set of attributes in the filter.
This will be slow on large candidate sets (potentially), but could be sped up with parallelism,
caching or other methods. However, in the same step, we can also apply the step of extracting only
the allowed read target attrs, so this is a valuable exercise.
## Delete Requirements
A `delete` profile must contain the `content` and `scope` of a delete.
An example:
> Alice should only be able to delete objects where the `memberOf` is `purgeable`, and where they
> are not marked as `protected`.
## Create Requirements
A `create` profile defines the following limits to what objects can be created, through the
combination of filters and attributes.
An example:
> Alice should only be able to create objects where the `class` is `group`, and can only name the
> group, but they cannot add members to the group.
An example of a content requirement could be something like "the value of an attribute must pass a
regular expression filter". This could limit a user to creating a group of any name, except where
the group's name contains "admin". This is a contrived example which is also possible with
filtering, but more complex requirements are possible.
For example, we want to be able to limit the classes that someone _could_ create on an object,
because classes often are used in security rules.
## Modify Requirements
A `modify` profile defines the following limits:
A `modify` profile defines a limit on the `modlist` actions.
For example: you may only be allowed to ensure `presence` of a value (modify allowing purge,
not-present, and presence).
Content requirements (see [Create Requirements](#create-requirements)) are out of scope at the
moment.
An example:
> Alice should only be able to modify a user's password if that user is a member of the students
> group.
**Note:** `modify` does not imply `read` of the attribute. Care should be taken that we don't
disclose the current value in any error messages if the operation fails.
## Targeting Requirements
The `target` of an access profile should be a filter defining the objects that this applies to.
The filter limit for the profiles of what they are acting on requires a single special operation,
which is the concept of "targeting self".
For example: we could define a rule that says "members of group X are allowed self-write to the
`mobilePhoneNumber` attribute".
An extension to the filter code could allow an extra filter enum of `self`, that would allow this to
operate correctly, and would consume the entry in the event as the target of "Self". This would be
best implemented as a compilation of `self -> eq(uuid, self.uuid)`.
## Implementation Details
CHANGE: Receiver should be a group, and should be single value/multivalue? Can _only_ be a group.
Example profiles:
```
search {
    action: allow
    receiver: Eq("memberof", "admins")
    ...
}
```
## Formalised Schema
A complete schema would be:
### Attributes

| Name         | Single/Multi | Type   | Description         |
| ------------ | ------------ | ------ | ------------------- |
| acp_allow    | single value | bool   |                     |
| acp_enable   | single value | bool   | This ACP is enabled |
| acp_receiver | single value | filter | ???                 |
### Classes

| Name                   | Must Have                         | May Have                   |
| ---------------------- | --------------------------------- | -------------------------- |
| access_control_profile | `[acp_receiver, acp_targetscope]` | `[description, acp_allow]` |
| access_control_search  | `[acp_search_attr]`               |                            |
| access_control_delete  |                                   |                            |
**Important**: empty sets really mean empty sets!
The ACP code will assert that both `access_control_profile` _and_ one of the
`search/delete/modify/create` classes exists on an ACP. An important factor of this design is now
the ability to _compose_ multiple ACPs into a single entry, allowing a `create/delete/modify` to
exist! However, each one must still list their respective actions to allow proper granularity.
## "Search" Application
The set of access controls is checked, and the set where the receiver matches the current identified
user is collected. These are then added to the user's requested search as:
Once complete, in the translation of the entry -> proto_entry, each access control and its allowed
set of attrs has to be checked to determine what of that entry can be displayed. Consider there are
three entries: A, B, C. An ACI that allows read of "name" on A, B exists, and a read of "mail" on B,
C. The correct behaviour is then:
```
A: name
B: name, mail
C: mail
```
So this means that the `entry -> proto entry` part is likely the most expensive part of the access
control operation, but also one of the most important. It may be possible to compile to some kind of
faster method, but initially a simple version is needed.
## "Delete" Application
Delete is similar to search, however there is the risk that the user may say something like:
```
Pres("class").
```
Were we to approach this like search, this would then have "everything the identified user is
allowed to delete, is deleted". A consideration here is that `Pres("class")` would delete "all"
objects in the directory, but with the access control present, it would limit the deletion to the
set of allowed deletes.
So the choices are:
1. Treat it like search and allow the user to delete what they are allowed to delete, but ignore
   other objects.
2. Deny the request because their delete was too broad, and they must specify a valid deletion
   request.
Option #2 seems more correct because the `delete` request is an explicit request, not a request
where you want partial results. Imagine someone wants to delete users A and B at the same time, but
only has access to A. They want this request to fail so they KNOW B was not deleted, rather than it
succeed and have B still exist with a partial delete status.
However, a possible issue is that Option #2 means that a delete request of
<!--
that would depend if the response was "invalid" in both cases, or "invalid" / "refused"
-->
This is also a concern for modification, where the modification attempt may or may not fail
depending on the entries and if you can/can't see them.
**IDEA:** You can only `delete`/`modify` within the read scope you have. If you can't read it (based
on the read rules of `search`), you can't `delete` it. This is in addition to the filter rules of
the `delete` applying as well. So performing a `delete` of `Pres(class)` will only delete in your
`read` scope and will never disclose if you are denied access.
<!-- TODO
@yaleman: This goes back to the commentary on Option #2 and feels icky like SQL's `DELETE FROM
<table>` just deleting everything. It's more complex from the client - you have to search for a set
of things to delete - then delete them. Explicitly listing the objects you want to delete feels....
way less bad. This applies to modifies too. 😁
-->
## "Create" Application
Create seems like the easiest to apply. Ensure that only the attributes in `createattr` are in the
`createevent`, ensure the classes only contain the set in `createclass`, then finally apply
`filter_no_index` to the entry. If all of this passes, the create is allowed.
A key point is that there is no union of `create` ACIs - the WHOLE ACI must pass, not parts of
multiple. This means if a control says "allows creating group with member" and "allows creating
user with name", creating a group with `name` is not allowed - despite your ability to create an
entry with `name`, its classes don't match. This way, the administrator of the service can define
create controls with specific intent for how they will be used, without the risk of two controls
causing unintended effects (`users` that are also `groups`, or allowing invalid values).
An important consideration is how to handle overlapping ACI. If two ACI _could_ match the create,
should we enforce both conditions are upheld? Or only a single upheld ACI allows the create?

In some cases it may not be possible to satisfy both, and that would block creates. The intent of
the access profile is that "something like this CAN" be created, so I believe that provided only a
single control passes, the create should be allowed.
## "Modify" Application
Modify is similar to Create, however we specifically filter on the `modlist` action of `present`,
`removed` or `purged` with the action. The rules of create still apply; provided all requirements of
the modify are permitted, then it is allowed once at least one profile allows the change.
A key difference is that if the modify ACP lists multiple `presentattr` types, the modify request is
valid if it is only modifying one attribute. IE we say `presentattr: name, email`, but we only
attempt to modify `email`.
## Considerations
- When should access controls be applied? During an operation, we only validate schema after pre*
  plugin application, so likely it has to be "at that point", to ensure schema-based validity of
  the entries that are allowed to be changed.
- Self filter keyword should compile to `eq("uuid", "....")`. When do we do this and how?
- `memberof` could take `name` or `uuid`; we need to be able to resolve this correctly, but this is
  likely an issue in `memberof` which needs to be addressed, ie `memberof uuid` vs `memberof attr`.
- Content controls in `create` and `modify` will be important to get right to avoid the security
  issues of LDAP access controls. Given that `class` has special importance, it's only right to
  give it extra consideration in these controls.
- In the future when `recyclebin` is added, a `re-animation` access profile should be created
  allowing revival of entries given certain conditions of the entry we are attempting to revive. A
  service-desk user should not be able to revive a deleted high-privilege user.

# Access Profiles Rework 2022
Access controls are critical for a project like Kanidm to determine who can access what on other
The original design of the access control system was intended to satisfy our need for flexibility,
but we have begun to discover a number of limitations. The design incorporating filter queries
makes them hard to administer, as we have not often publicly talked about the filter language and
how it internally works. Because of their use of filters it is hard to see on an entry "what"
access controls will apply to it, making it hard to audit without actually calling the ACP
subsystem. Currently the access control system has a large impact on performance, accounting for
nearly 35% of the time taken in a search operation.
Additionally, the default access controls that we supply have started to run into limits and rough
cases due to changes as we have improved features. Some of this was due to limited design with use
cases in mind during development.
To resolve this a number of coordinating features need implementation to improve this situation.
These features will be documented _first_, and the use cases _second_, with each use case linking
to the features that satisfy it.
## Required Features to Satisfy
### Refactor of default access controls
The current default privileges will need to be refactored to improve separation of privilege and
improved delegation of finer access rights.
### Access profiles target specifiers instead of filters
Access profiles should target a list of groups defining to whom the access profile applies, and
who receives the access it is granting.
Alternately an access profile could target "self" so that self-update rules can still be expressed.
An access profile could target an oauth2 definition for the purpose of allowing reads to members of
a set of scopes that can access the service.
The access profile receiver would be group based only. This allows specifying that "X group of
members can write self", meaning that any member of that group can write to themself and only
themself.
In the future we could also create different target/receiver specifiers to allow other extended
management and delegation scenarios. This improves the situation, making things more flexible than
the current filter system. It also may allow filters to be simplified to remove the SELF uuid
resolve step in some cases.
### Filter based groups
These are groups whose members are dynamically allocated based on a filter query. This allows a
similar level of dynamic group management as we have currently with access profiles, but with the
additional ability for them to be used outside of the access control context. This is the "bridge"
allowing us to move from filter based access controls to "group" targeted.
A risk of filter based groups is "infinite churn" because of recursion. This can occur if you had a
rule such as "and not memberof = self" on a dynamic group. Because of this, filters on dynamic
groups may not use "memberof" unless they are internally provided by the kanidm project, so that we
can vet these rules as correct and without creating infinite recursion scenarios.
### Access rules extracted to ACI entries on targets
The access control profiles are an excellent way to administer access where you can specify who has
access to what, but it makes it harder for the reverse query, which is "who has access to this
specific entity". Since this is needed for both search and auditing, specifying our access profiles
in the current manner, but using them to generate ACE rules on the target entry, will allow the
search and audit paths to answer the question of "who has access to this entity" much faster.
### Sudo Mode
A flag should exist on a session defining "sudo" mode, which requires a special account policy
membership OR a re-authentication to enable. This sudo flag is a time window on a session token
which can allow/disallow certain behaviours. It would be necessary for all write paths to have
access to this value.
### Account Policy
### Default Roles / Separation of Privilege
By default we attempt to separate privileges so that "no single account" has complete authority
over the system.
Satisfied by:
- Refactor of default access controls
- Filter based groups
- Sudo Mode
#### System Admin
The "admins" role is responsible for managing:
- The name of the domain
- Configuration of the servers and replication
- Management of external integrations (oauth2)
#### Service Account Admin
The role would be called "sa\_admins" and would be responsible for top level management of service
accounts, and delegating authority for service account administration to managing users.
- Create service accounts
- Delegate service account management to owners groups
- Migrate service accounts to persons
The service account admin is capable of migrating service accounts to persons as it is "yielding"
control of the entity, rather than an idm admin "taking" the entity which may have security impacts.
#### Service Desk
This role manages a subset of persons. The helpdesk roles are precluded from modification of "higher
privilege" roles like service account, identity and system admins. This is due to potential
privilege escalation attacks.
- Can create credential reset links
- Can lock and unlock accounts and their expiry.
#### Idm Admin
This role manages identities, or more specifically person accounts. In addition it is a "high
privilege" service desk role and can manage high privilege users as well.
- Create persons
- Modify and manage persons
- All roles of service desk for all persons
### Self Write / Write Privilege
Satisfied by:
- Access profiles target specifiers instead of filters
- Sudo Mode
### Oauth2 Service Read (Nice to Have)
For ux/ui integration, being able to list oauth2 applications that are accessible to the user would
be a good feature. To limit "who" can see the oauth2 applications that an account can access, we
need a way to "allow read" by proxy of the related users of the oauth2 service. This will require
access controls to be able to interpret the oauth2 config and provide rights based on that.
Satisfied by:
- Access profiles target specifiers instead of filters
### Administration
Access controls should be easier to manage and administer, and should be group based rather than
filter based. This will make it easier for administrators to create and define their own access
rules.
- Refactor of default access controls
- Access profiles target specifiers instead of filters
- Filter based groups
### Service Account Access
Service accounts should be able to be "delegated" administration, where a group is responsible for
a service account. This should not require administrators to create unique access controls for each
service account, but a method to allow mapping of the service account to "who manages it".
- Sudo Mode
- Account Policy
- Access profiles target specifiers instead of filters
- Refactor of default access controls
### Auditing of Access
It should be easier to audit who has access to what by inspecting the entry to view what can access
it.
- Access rules extracted to ACI entries on targets
- Access profiles target specifiers instead of filters


# Oauth2 Application Listing

A feature of some other IDM systems is to also double as a portal to linked applications. This
allows a convenient access point for users to discover and access linked applications without having
to navigate to them manually. This naturally works quite well since it means that the user is
already authenticated, and the IDM becomes the single "gateway" to accessing other applications.

## How it should look

- The user should ONLY see a list of applications they _can_ access
- The user should see a list of applications with "friendly" display names
- The list of applications _may_ have an icon/logo
- Clicking the application should take them to the location

## Access Control

The current design of the oauth2 resource servers (oauth2rs) is modeled around what the oauth2
protocol requires. This defines that in an oauth2 request, all of the requested scopes need be
granted else it can not proceed. The current design is:

- scope maps - a relation of groups to the set of scopes that they grant
- implicit scopes - a set of scopes granted to all persons
While this works well for the oauth2 authorisation design, it doesn't work well from the kanidm side
for managing _our_ knowledge of who is granted access to the application.
In order to limit who can see what applications we will need a new method to define who is allowed
access to the resource server on the kanidm side, while also preserving oauth2 semantics.
To fix this the current definition of scopes on oauth2 resource servers needs to change.
- access scopes - a list of scopes (similar to implicit) that are used by the resource server for
  granting access to the resource.
- access members - a list of groups that are granted access
- supplementary scopes - definitions of scope maps that grant scopes which are not access related,
  but may provide extra details for the account using the resource
By changing to this method this removes the arbitrary implicit scope/scope map rules, and clearly
defines the set of scopes that grant access to the application, while also allowing extended scopes
to be sent that can attenuate the application behaviour. This also allows the access members
reference to be used to generate knowledge on the kanidm side of "who can access this oauth2
resource". This can be used to limit the listed applications to these oauth2 applications. In
addition we can then use these access members to create access controls to strictly limit who can
see what oauth2 applications to the admins of oauth2 applications, and the users of them.
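A minimal sketch of how the proposed model could be evaluated. The field names and group names are
assumptions for illustration only, not Kanidm's actual schema:

```python
# Hypothetical oauth2 resource-server definition under the proposed model.
oauth2_rs = {
    "access_members": {"app_users", "app_admins"},  # groups granted access
    "access_scopes": {"openid", "email"},           # scopes that grant access
    "supplementary_scope_maps": {                   # extra, non-access scopes
        "app_admins": {"admin"},
    },
}

def authorise(user_groups):
    """Return the granted scopes, or None if the user has no access."""
    if not (user_groups & oauth2_rs["access_members"]):
        return None  # not an access member: the application is hidden from them
    scopes = set(oauth2_rs["access_scopes"])
    for group, extra in oauth2_rs["supplementary_scope_maps"].items():
        if group in user_groups:
            scopes |= extra
    return scopes

assert authorise({"unrelated"}) is None
assert authorise({"app_users"}) == {"openid", "email"}
assert authorise({"app_admins"}) == {"openid", "email", "admin"}
```

Note how the `None` case doubles as the application-listing rule: users who are not access members
never see the application at all.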
To support this, we should allow dynamic groups to be created so that the 'implicit scope' behaviour
which allows all persons to access an application can be emulated by making all persons a member of
access members.
Migration of the current scopes and implicit scopes is likely not possible with this change, so we
may have to delete these which will require admins to re-configure these permissions, but that is a
better option than allowing "too much" access.
## Display Names / Logos
Display names already exist.
Logos will require upload and storage. A binary type exists in the db that can be used for storing
blobs, or we could store something like svg. I think it's too risky to "validate" images in these
uploads, so we could just store the blob and display it?


# REST Interface
{{#template\
../../templates/kani-warning.md imagepath=../../images/ title=Note! text=Here begins some early
notes on the REST interface - much better ones are in the repository's designs directory. }}
There's an endpoint at `/<api_version>/routemap` (for example, https://localhost/v1/routemap) which
is based on the API routes as they get instantiated.
It's _very, very, very_ early work, and should not be considered stable at all.
An example of some elements of the output is below:


# Scim and Migration Tooling
We need to be able to synchronise content from other directory or identity management systems. To do
this, we need the capability to have "pluggable" synchronisation drivers. This is because not all
deployments will be able to use our generic versions, or may have customisations they wish to
perform that are unique to them.
To achieve this we need a layer of separation - This effectively becomes an "extract, transform,
load" process. In addition this process must be _stateful_ where it can be run multiple times or
even continuously and it will bring kanidm into synchronisation.
We refer to a "synchronisation" as meaning a complete successful extract, transform and load cycle.
There are three expected methods of using the synchronisation tools for Kanidm:
- Kanidm as a "read only" portal allowing access to its specific features and integrations. This is
  less of a migration, and more of a way to "feed" data into Kanidm without relying on its internal
  administration features.
- "Big Bang" migration. This is where all the data from another IDM is synchronised in a single
  execution and applications are swapped to Kanidm. This is rare in larger deployments, but may be
  used in smaller sites.
- Gradual migration. This is where data is synchronised to Kanidm and then both the existing IDM and
  Kanidm co-exist. Applications gradually migrate to Kanidm. At some point a "final" synchronisation
  is performed where Kanidm 'gains authority' over all identity data and the existing IDM is
  disabled.
In these processes there may be a need to "reset" the synchronised data. The diagram below shows the
possible work flows which account for the above.
(state diagram elided: it shows the detached, initial synchronisation, active synchronisation, final
synchronisation and purge transitions described below)
Kanidm starts in a "detached" state from the external IDM source.
For Kanidm as a "read only" application source the initial synchronisation is performed followed by
periodic active (partial) synchronisations. At any time a full initial synchronisation can re-occur
to reset the data of the provider. The provider can be reset and removed by a purge which resets
Kanidm to a detached state.
For a gradual migration, this process is the same as the read only application. However when ready
to perform the final cut over a final synchronisation is performed, which retains the data of the
### Extract
First a user must be able to retrieve their data from their supplying IDM source. Initially we will
target LDAP and systems with LDAP interfaces, but in the future there is no barrier to supporting
other transports.
To achieve this, we initially provide synchronisation primitives in the
[ldap3 crate](https://github.com/kanidm/ldap3).
### Transform
This process will be custom developed by the user, or may have a generic driver that we provide. Our
generic tools may provide attribute mapping abilities so that we can allow some limited
customisation.
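For example, a generic attribute-mapping transform might be sketched like this. The mapping table
and attribute names are purely illustrative, not a real driver's configuration:

```python
# Map source (e.g. LDAP) attribute names to Kanidm attribute names.
ATTR_MAP = {
    "uid": "name",
    "mail": "mail",
    "cn": "displayname",
}

def transform(entry):
    """Rename mapped attributes and drop everything unmapped."""
    return {kani: entry[src] for src, kani in ATTR_MAP.items() if src in entry}

ldap_entry = {"uid": "claire", "cn": "Claire", "telephoneNumber": "12345"}
assert transform(ldap_entry) == {"name": "claire", "displayname": "Claire"}
```

A custom transform would replace or extend `ATTR_MAP` with site-specific rules.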
### Load
Finally to load the data into Kanidm, we will make a SCIM interface available. SCIM is a "spiritual
successor" to LDAP, and aligns with Kani's design. SCIM allows structured data to be uploaded
(unlike LDAP which is simply strings). Because of this SCIM will allow us to expose more complex
types that previously we have not been able to provide.
The largest benefit to SCIM's model is its ability to perform "batched" operations, which work with
Kanidm's transactional model to ensure that during load events, content is always valid and
correct.
## Configuring a Synchronisation Provider in Kanidm
Kanidm has a strict transactional model with full ACID compliance. Attempting to create an external
model that needs to interoperate with Kanidm's model and ensure both are compliant is fraught with
danger. As a result, Kanidm sync providers _should_ be stateless, acting only as an ETL bridge.
Additionally syncproviders need permissions to access and write to content in Kanidm, so it also
necessitates Kanidm being aware of the sync relationship.
For this reason a syncprovider is a derivative of a service account, which also allows storage of
the _state_ of the synchronisation operation. An example of this is that LDAP syncrepl provides a
cookie defining the "state" of what has been "consumed up to" by the ETL bridge. During the load
phase the modified entries _and_ the cookie are persisted. This means that if the operation fails
the cookie also rolls back allowing a retry of the sync. If it succeeds the next sync knows that
kanidm is in the correct state. Graphically:
(sequence diagram elided: the sync provider uploads the entries and the state cookie in one
operation and receives a single Result)
At any point the operation _may_ fail, so by locking the state with the upload of entries this
guarantees correct upload has succeeded and persisted. A success really means it!
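The idea that the entries and the cookie commit or roll back together can be sketched with a toy
in-memory model (this is illustrative, not Kanidm's API):

```python
class SyncState:
    """Toy model: synced entries and the consumer cookie persist atomically."""

    def __init__(self):
        self.entries = {}
        self.cookie = None

    def load(self, new_entries, new_cookie, fail=False):
        # Snapshot so a mid-operation failure rolls back entries AND cookie.
        snapshot = (dict(self.entries), self.cookie)
        try:
            self.entries.update(new_entries)
            if fail:
                raise RuntimeError("load failed mid-operation")
            self.cookie = new_cookie  # committed together with the entries
        except RuntimeError:
            # Roll back both: the next sync retries from the old cookie.
            self.entries, self.cookie = snapshot

state = SyncState()
state.load({"x": 1}, "cookie-1")
assert state.cookie == "cookie-1"
state.load({"y": 2}, "cookie-2", fail=True)
assert state.cookie == "cookie-1" and "y" not in state.entries
```

Because the cookie only advances when the entries persist, a recorded success really does mean the
upload happened.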
## SCIM
### Authentication to the endpoint
This will be based on Kanidm's existing authentication infrastructure, allowing service accounts to
use bearer tokens. These tokens will internally bind that changes from the account MUST contain the
associated state identifier (cookie).
### Batch Operations
## Internal Batch Update Operation Phases
We have to consider in our batch updates that there are multiple stages of the update. This is
because we need to consider that at any point the lifecycle of a presented entry may change within a
single batch. Because of this, we have to treat the operation differently within kanidm to ensure a
consistent outcome.
Additionally we have to "fail fast". This means that on any conflict the sync will abort and the
administrator must intervene.
To understand why we chose this, we have to look at what happens in a "soft fail" condition.
In this example we have an account named X and a group named Y. The group contains X as a member.
When we submit this for an initial sync, or after the account X is created, if we had a "soft" fail
during the import of the account, we would reject it from being added to Kanidm but would then
continue with the synchronisation. Then the group Y would be imported. Since the member pointing to
X would not be valid, it would be silently removed.
At this point we would have group Y imported, but it has no members and the account X would not have
been imported. The administrator may intervene and fix the account X to allow sync to proceed.
However this would not repair the missing group membership. To repair the group membership a change
to group Y would need to be triggered to also sync the group status.
Since the admin may not be aware of this, it would silently mean the membership is missing.
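The hazard of the rejected "soft fail" approach can be shown in a few lines of toy code (using the
X and Y names from the example; this is illustrative, not Kanidm's implementation):

```python
def soft_fail_sync(source_entries):
    """Toy 'soft fail' import: skip bad entries, silently prune bad members."""
    db = {}
    for entry in source_entries:
        if entry.get("invalid"):
            continue  # soft fail: skip the bad entry and keep going
        db[entry["name"]] = entry
    for entry in db.values():
        # Members that never made it into the db are silently dropped.
        entry["members"] = [m for m in entry.get("members", []) if m in db]
    return db

source = [
    {"name": "X", "invalid": True},   # account X fails to import
    {"name": "Y", "members": ["X"]},  # group Y references X
]
db = soft_fail_sync(source)
assert "X" not in db
assert db["Y"]["members"] == []  # membership silently lost
```

Failing fast at the import of X instead would have halted the sync before Y's membership could be
silently discarded.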
### Phase 1 - Validation of Update State
In this phase we need to assert that the batch operation can proceed and is consistent with the
expectations we have of the server's state.
Assert the token provided is valid, and contains the correct access requirements.
### Phase 2 - Entry Location, Creation and Authority
In this phase we are ensuring that all the entries within the operation are within the control of
this sync domain. We also ensure that entries we intend to act upon exist with our authority markers
such that the subsequent operations are all "modifications" rather than mixed create/modify.
For each entry in the sync request, if an entry with that uuid exists, retrieve it.
- If an entry exists in the database, assert that its sync\_parent\_uuid is the same as our
  agreements.
  - If there is no sync\_parent\_uuid or the sync\_parent\_uuid does not match, reject the
    operation.
- If no entry exists in the database, create a "stub" entry with our sync\_parent\_uuid
  - Create the entry immediately, and then retrieve it.
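The steps above can be sketched against a simple dict-backed store (illustrative only; not Kanidm's
actual entry model):

```python
def locate_or_stub(db, uuid, our_sync_uuid):
    """Toy Phase 2: ensure the entry exists and is under our sync authority."""
    entry = db.get(uuid)
    if entry is not None:
        # An existing entry must already carry our sync_parent_uuid marker.
        if entry.get("sync_parent_uuid") != our_sync_uuid:
            raise PermissionError("entry is not controlled by this agreement")
        return entry
    # No entry: create a stub now so later phases are uniform modifications.
    db[uuid] = {"sync_parent_uuid": our_sync_uuid}
    return db[uuid]

db = {"a": {"sync_parent_uuid": "sync-1"}, "b": {}}
assert locate_or_stub(db, "a", "sync-1") is db["a"]
assert locate_or_stub(db, "c", "sync-1") == {"sync_parent_uuid": "sync-1"}
try:
    locate_or_stub(db, "b", "sync-1")  # no marker: must be rejected
    raise AssertionError("should have been rejected")
except PermissionError:
    pass
```

After this phase every entry in the batch exists and is marked, so Phases 3 and 4 only ever modify.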
### Phase 3 - Entry Assertion
Remove all attributes in the sync that are overlapped with our sync\_authority value.
For all uuids in the entry present set, assert their attributes match what was synced in, and
resolve types that need resolving (name2uuid, externalid2uuid).
Write all.
### Phase 4 - Entry Removal
For all uuids in the delete\_uuids set: if their sync\_parent\_uuid matches ours, assert they are
deleted (recycled).
### Phase 5 - Commit
Write the updated "state" from the request to\_state to our current state of the agreement.
Write an updated "authority" value to the agreement of what attributes we can change.
Commit the txn.


Setting up a dev environment can be a little complex because of the mono-repo.
1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and
   allows you to set up virtual python environments more easily.
2. Build the base environment. From within the `pykanidm` directory, run: `poetry install`. This'll
   set up a virtual environment and install all the required packages (and development-related
   ones).
3. Start editing!
Most IDEs will be happier if you open the kanidm_rlm_python or pykanidm directories as the base you
are working from, rather than the kanidm repository root, so they can auto-load integrations etc.
## Building the documentation


Setting up a dev environment has some extra complexity due to the mono-repo design.
1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and
   allows you to set up virtual python environments more easily.
2. Build the base environment. From within the kanidm_rlm_python directory, run: `poetry install`
3. Install the `kanidm` python library: `poetry run python -m pip install ../pykanidm` 3. Install the `kanidm` python library: `poetry run python -m pip install ../pykanidm`
4. Start editing! 4. Start editing!
Most IDEs will be happier if you open the `kanidm_rlm_python` or `pykanidm` directories as the base you are working from, rather than the `kanidm` repository root, so they can auto-load integrations etc. Most IDEs will be happier if you open the `kanidm_rlm_python` or `pykanidm` directories as the base
you are working from, rather than the `kanidm` repository root, so they can auto-load integrations
etc.
## Running a test RADIUS container

From the root directory of the Kanidm repository:

1. Build the container - this'll give you a container image called `kanidm/radius` with the tag
   `devel`:

   ```bash
   make build/radiusd
   ```

2. Once the process has completed, check the container exists in your docker environment:

   ```bash
   ➜ docker image ls kanidm/radius
   REPOSITORY      TAG     IMAGE ID       CREATED              SIZE
   kanidm/radius   devel   5dabe894134c   About a minute ago   622MB
   ```

   _Note:_ If you're just looking to play with a pre-built container, images are also automatically
   built based on the development branch and available at `ghcr.io/kanidm/radius:devel`

3. Generate some self-signed certificates by running the script - just hit enter on all the prompts
   if you don't want to customise them. This'll put the files in `/tmp/kanidm`:

   ```bash
   ./insecure_generate_tls.sh
   ```

4. Run the container:

   ```bash
   cd kanidm_rlm_python && ./run_radius_container.sh
   ```
@@ -46,7 +54,7 @@ You can pass the following environment variables to `run_radius_container.sh` to
For example:

```bash
IMAGE=ghcr.io/kanidm/radius:devel \
    CONFIG_FILE=~/.config/kanidm \
    ./run_radius_container.sh
```
@@ -54,9 +62,10 @@
## Testing authentication

Authentication can be tested through the client.localhost Network Access Server (NAS) configuration
with:

```bash
docker exec -i -t radiusd radtest \
    <username> badpassword \
    127.0.0.1 10 testing123
```
@@ -1,33 +1,39 @@
# Rename the domain

There are some cases where you may need to rename the domain. You should have configured this
initially in the setup, however you may have a situation where a business is changing name, merging,
or other needs which may prompt this needing to be changed.

> **WARNING:** This WILL break ALL u2f/webauthn tokens that have been enrolled, which MAY cause
> accounts to be locked out and unrecoverable until further action is taken. DO NOT CHANGE the
> domain name unless REQUIRED and have a plan on how to manage these issues.

> **WARNING:** This operation can take an extensive amount of time as ALL accounts and groups in the
> domain MUST have their Security Principal Names (SPNs) regenerated. This WILL also cause a large
> delay in replication once the system is restarted.

You should make a backup before proceeding with this operation.

When you have created a migration plan and strategy on handling the invalidation of webauthn, you
can then rename the domain.

First, stop the instance.

```bash
docker stop <container name>
```

Second, change `domain` and `origin` in `server.toml`.

Third, trigger the database domain rename process.

```bash
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd domain rename -c /data/server.toml
```

Finally, you can now start your instance again.

```bash
docker start <container name>
```
@@ -2,7 +2,6 @@
Guard your Kubernetes ingress with Kanidm authentication and authorization.

## Prerequisites

We recommend you have the following before continuing:
@@ -13,32 +12,43 @@ We recommend you have the following before continuing:
- A fully qualified domain name with an A record pointing to your k8s ingress.
- [CertManager with a Cluster Issuer installed.](https://cert-manager.io/docs/installation/)
## Instructions

1. Create a Kanidm account and group:
   1. Create a Kanidm account. Please see the section
      [Creating Accounts](../accounts_and_groups.md).
   2. Give the account a password. Please see the section
      [Resetting Account Credentials](../accounts_and_groups.md).
   3. Make the account a person. Please see the section
      [People Accounts](../accounts_and_groups.md).
   4. Create a Kanidm group. Please see the section [Creating Accounts](../accounts_and_groups.md).
   5. Add the account you created to the group you created. Please see the section
      [Creating Accounts](../accounts_and_groups.md).
2. Create a Kanidm OAuth2 resource:
   1. Create the OAuth2 resource for your domain. Please see the section
      [Create the Kanidm Configuration](../integrations/oauth2.md).
   2. Add a scope mapping from the resource you created to the group you created with the openid,
      profile, and email scopes. Please see the section
      [Create the Kanidm Configuration](../integrations/oauth2.md).
3. Create a `Cookie Secret` for the placeholder `<COOKIE_SECRET>` in step 4:

   ```shell
   docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode("utf-8"));'
   ```

4. Create a file called `k8s.kanidm-nginx-auth-example.yaml` with the block below. Replace every
   `<string>` (drop the `<>`) with appropriate values:

   1. `<FQDN>`: The fully qualified domain name with an A record pointing to your k8s ingress.
   2. `<KANIDM_FQDN>`: The fully qualified domain name of your Kanidm deployment.
   3. `<COOKIE_SECRET>`: The output from step 3.
   4. `<OAUTH2_RS_NAME>`: Please see the output from step 2.1 or [get](../integrations/oauth2.md)
      the OAuth2 resource you created from that step.
   5. `<OAUTH2_RS_BASIC_SECRET>`: Please see the output from step 2.1 or
      [get](../integrations/oauth2.md) the OAuth2 resource you created from that step.

   This will deploy the following to your cluster:

   - [modem7/docker-starwars](https://github.com/modem7/docker-starwars) - An example web site.
   - [OAuth2 Proxy](https://oauth2-proxy.github.io/oauth2-proxy/) - An OAuth2 proxy is used as an
     OAuth2 client with NGINX
     [Authentication Based on Subrequest Result](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/).

```yaml
---
@@ -214,26 +224,29 @@
secretName: <FQDN>-ingress-tls # replace . with - in the hostname
```
5. Apply the configuration by running the following command:

   ```bash
   kubectl apply -f k8s.kanidm-nginx-auth-example.yaml
   ```
6. Check your deployment succeeded by running the following commands:

   ```bash
   kubectl -n kanidm-example get all
   kubectl -n kanidm-example get ingress
   kubectl -n kanidm-example get Certificate
   ```
You may use kubectl's describe and log for troubleshooting. If there are ingress errors see the
Ingress NGINX documentation's
[troubleshooting page](https://kubernetes.github.io/ingress-nginx/troubleshooting/). If there are
certificate errors see the CertManager documentation's
[troubleshooting page](https://cert-manager.io/docs/faq/troubleshooting/).

Once it has finished deploying, you will be able to access it at `https://<FQDN>` which will
prompt you for authentication.
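If you'd rather not pull a container just to produce the `Cookie Secret` from step 3, any local
Python 3 can generate the same value - a sketch equivalent to the one-liner above (the function name
here is ours, not part of any Kanidm or oauth2-proxy tooling):

```python
import base64
import secrets

def generate_cookie_secret() -> str:
    # 16 random bytes, base64-encoded twice - mirroring the container one-liner.
    return base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode("utf-8")

print(generate_cookie_secret())
```

The double encoding yields a 32-character string, which is what oauth2-proxy expects for its cookie
secret.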
## Cleaning Up

1. Remove the resources created for this example from k8s:

   ```bash
   kubectl delete namespace kanidm-example
   ```
@@ -242,7 +255,6 @@
2. Delete the group created in section Instructions step 2.
3. Delete the OAuth2 resource created in section Instructions step 3.

## References

1. [NGINX Ingress Controller: External OAUTH Authentication](https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/)
@@ -1,11 +1,11 @@
# Frequently Asked Questions

... or ones we think people _might_ ask.

## Why disallow HTTP (without TLS) between my load balancer and Kanidm?

Because Kanidm is one of the keys to a secure network, and insecure connections to them are not best
practice.

Please refer to [Why TLS?](why_tls.md) for a longer explanation.
@@ -15,11 +15,13 @@ It's [a rust thing](https://rustacean.net).
## Will you implement -insert protocol here-

Probably, on an infinite time-scale! As long as it's not Kerberos. Or involves SSL or STARTTLS.
Please log an issue and start the discussion!
## Why do the crabs have knives?

Don't [ask](https://www.youtube.com/watch?v=0QaAKi0NFkA). They just
[do](https://www.youtube.com/shorts/WizH5ae9ozw).

## Why won't you take this FAQ thing seriously?
@@ -1,33 +1,46 @@
# Glossary

This is a glossary of terms used throughout this book. While we make every effort to explain terms
and acronyms when they are used, this may be a useful reference if something feels unknown to you.
## Domain Names
- domain - This is the domain you "own". It is the highest level entity. An example would be
  `example.com` (since you do not own `.com`).
- subdomain - A subdomain is a domain name space under the domain. Subdomains of `example.com` are
  `a.example.com` and `b.example.com`. Each subdomain can have further subdomains.
- domain name - This is any named entity within your domain or its subdomains. This is the umbrella
  term, referring to all entities in the domain. `example.com`, `a.example.com`, `host.example.com`
  are all valid domain names with the domain `example.com`.
- origin - An origin defines a URL with a protocol scheme, optional port number and domain name
  components. An example is `https://host.example.com`
- effective domain - This is the extracted domain name from an origin excluding port and scheme.
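The relationship between an origin and its effective domain can be sketched with Python's standard
`urllib.parse` - an illustration of the definitions above, not Kanidm code:

```python
from urllib.parse import urlsplit

def effective_domain(origin: str) -> str:
    # Strip the scheme and optional port, keeping only the domain name.
    return urlsplit(origin).hostname

print(effective_domain("https://host.example.com:8443"))  # host.example.com
print(effective_domain("https://host.example.com"))       # host.example.com
```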
## Accounts
- trust - A trust is when two Kanidm domains have a relationship to each other where accounts can be
  used between the domains. The domains retain their administration boundaries, but allow cross
  authentication.
- replication - This is the process where two or more Kanidm servers in a domain can synchronise
  their database content.
- UAT - User Authentication Token. This is a token issued by Kanidm to an account after it has
  authenticated.
- SPN - Security Principal Name. This is the name of an account comprising its name and domain name.
  This allows distinction between accounts with identical names over a trust boundary.
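A rough illustration of why SPNs disambiguate accounts across a trust boundary (hypothetical code,
not Kanidm's implementation):

```python
def spn(name: str, domain: str) -> str:
    # A Security Principal Name combines the account name with its domain name.
    return f"{name}@{domain}"

# Two accounts both named "alice" remain distinct across a trust boundary:
print(spn("alice", "example.com"))  # alice@example.com
print(spn("alice", "idm.example"))  # alice@idm.example
```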
## Internals
- entity, object, entry - Any item in the database. Generally these terms are interchangeable, but
  internally they are referred to as Entry.
- account - An entry that may authenticate to the server, generally allowing extended permissions
  and actions to be undertaken.
### Access Control
- privilege - An expression of what actions an account may perform if granted
- target - The entries that will be affected by a privilege
- receiver - The entries that will be able to use a privilege
- acp - an Access Control Profile which defines a set of privileges that are granted to receivers to
  affect target entries.
- role - A term used to express a group that is the receiver of an access control profile allowing
  its members to affect the target entries.
@@ -1,50 +1,56 @@
# Installing Client Tools

> **NOTE** As this project is in a rapid development phase, running different release versions will
> likely present incompatibilities. Ensure you're running matching release versions of client and
> server binaries. If you have any issues, check that you are running the latest software.
## From packages

Kanidm currently is packaged for the following systems:

- OpenSUSE Tumbleweed
- OpenSUSE Leap 15.3/15.4
- MacOS
- Arch Linux
- NixOS
- Fedora 36
- CentOS Stream 9

The `kanidm` client has been built and tested from Windows, but is not (yet) packaged routinely.
### OpenSUSE Tumbleweed

Kanidm has been part of OpenSUSE Tumbleweed since October 2020. You can install the clients with:

```bash
zypper ref
zypper in kanidm-clients
```
### OpenSUSE Leap 15.3/15.4

Using zypper you can add the Kanidm leap repository with:

```bash
zypper ar -f obs://network:idm network_idm
```

Then you need to refresh your metadata and install the clients.

```bash
zypper ref
zypper in kanidm-clients
```
### MacOS - Brew

[Homebrew](https://brew.sh/) allows addition of third party repositories for installing tools. On
MacOS you can use this to install the Kanidm tools.

```bash
brew tap kanidm/kanidm
brew install kanidm
```
### Arch Linux
@@ -56,38 +62,42 @@ MacOS you can use this to install the Kanidm tools.
### Fedora / Centos Stream

{{#template templates/kani-warning.md imagepath=images title=Take Note! text=Kanidm frequently uses
new Rust versions and features, however Fedora and Centos frequently are behind in Rust releases. As
a result, they may not always have the latest Kanidm versions available. }}
Fedora has limited support through the development repository. You need to add the repository
metadata into the correct directory:

```bash
# Fedora
wget https://download.opensuse.org/repositories/network:/idm/Fedora_36/network:idm.repo
# Centos Stream 9
wget https://download.opensuse.org/repositories/network:/idm/CentOS_9_Stream/network:idm.repo
```

You can then install with:

```bash
dnf install kanidm-clients
```
## Cargo

The tools are available as a cargo download if you have a rust tool chain available. To install rust
you should follow the documentation for [rustup](https://rustup.rs/). These will be installed into
your home directory. To update these, re-run the install command with the new version.

```bash
cargo install --version 1.1.0-alpha.10 kanidm_tools
```
## Tools Container

In some cases if your distribution does not have native kanidm-client support, and you can't access
cargo for the install for some reason, you can use the cli tools from a docker container instead.

```bash
docker pull kanidm/tools:latest
docker run --rm -i -t \
    -v /etc/kanidm/config:/etc/kanidm/config:ro \
    -v ~/.cache/kanidm_tokens:/home/kanidm/.cache/kanidm_tokens \
    kanidm/tools:latest \
    /sbin/kanidm --help
```
If you have a ca.pem you may need to bind mount this in as required.

> **TIP** You can alias the docker run command to make the tools easier to access such as:

```bash
alias kanidm="docker run ..."
```
## Checking that the tools work

Now you can check your instance is working. You may need to provide a CA certificate for
verification with the -C parameter:

```bash
kanidm login --name anonymous
kanidm self whoami -H https://localhost:8443 --name anonymous
kanidm self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous
```
Now you can take some time to look at what commands are available - please
[ask for help at any time](https://github.com/kanidm/kanidm#getting-in-contact--questions).
@@ -1,6 +1,3 @@
# Installing the Server

This chapter will describe how to plan, configure, deploy and update your Kanidm instances.
@@ -1,66 +1,58 @@
# LDAP

While many applications can support external authentication and identity services through Oauth2,
not all services can. Lightweight Directory Access Protocol (LDAP) has been the "lingua franca" of
authentication for many years, with almost every application in the world being able to search and
bind to LDAP. As many organisations still rely on LDAP, Kanidm can host a read-only LDAP interface
for these legacy applications.
{{#template\
../templates/kani-warning.md imagepath=../images title=Warning! text=The LDAP server in Kanidm is
not a fully RFC-compliant LDAP server. This is intentional, as Kanidm wants to cover the common use
cases - simple bind and search. }}
## What is LDAP

LDAP is a protocol to read data from a directory of information. It is not a server, but a way to
communicate to a server. There are many famous LDAP implementations such as Active Directory, 389
Directory Server, DSEE, FreeIPA, and many others. Because it is a standard, applications can use an
LDAP client library to authenticate users to LDAP, given "one account" for many applications - an
IDM just like Kanidm!
## Data Mapping

Kanidm cannot be mapped 100% to LDAP's objects. This is because LDAP types are simple key-values on
objects which are all UTF8 strings (or subsets thereof) based on validation (matching) rules. Kanidm
internally implements complex data types such as tagging on SSH keys, or multi-value credentials.
These can not be represented in LDAP.

Many of the structures in Kanidm do not correlate closely to LDAP. For example Kanidm only has a GID
number, where LDAP's schemas define both a UID number and a GID number.

Entries in the database also have a specific name in LDAP, related to their path in the directory
tree. Kanidm is a flat model, so we have to emulate some tree-like elements, and ignore others.
For this reason, when you search the LDAP interface, Kanidm will make some mapping decisions.

- The Kanidm domain name is used to generate the DN of the suffix.
- The domain\_info object becomes the suffix root.
- All other entries are direct subordinates of the domain\_info for DN purposes.
- Distinguished Names (DNs) are generated from the spn, name, or uuid attribute.
- Bind DNs can be remapped and rewritten, and may not even be a DN during bind.
- The '\*' and '+' operators can not be used in conjunction with attribute lists in searches.
These decisions were made to make the path as simple and effective as possible, relying more on the
Kanidm query and filter system than attempting to generate a tree-like representation of data. As
almost all clients can use filters for entry selection we don't believe this is a limitation for the
consuming applications.
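As a rough illustration of the first and fourth mapping rules above, the suffix and entry DNs can be
derived with plain string formatting. This is a toy sketch, not Kanidm's implementation; the domain
and entry values are taken from the example later in this chapter.

```python
# Toy sketch of the DN mapping rules above, using the "example.com" domain
# from the Example section of this chapter. String formatting only; this is
# not Kanidm's actual code.

def domain_to_suffix(domain: str) -> str:
    """The Kanidm domain name generates the DN of the suffix."""
    return ",".join(f"dc={part}" for part in domain.split("."))

def entry_dn(rdn_attr: str, value: str, domain: str) -> str:
    """Entries sit directly under the suffix, named by spn, name, or uuid."""
    return f"{rdn_attr}={value},{domain_to_suffix(domain)}"

suffix = domain_to_suffix("example.com")
# "dc=example,dc=com"
dn = entry_dn("spn", "test1@example.com", "example.com")
# "spn=test1@example.com,dc=example,dc=com"
```

The same helper reproduces the other bind DN forms shown later, e.g.
`entry_dn("name", "test1", "example.com")`.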
## Security

### TLS

StartTLS is not supported due to security risks. LDAPS is the only secure method of communicating to
any LDAP server. Kanidm, when configured with certificates, will use them for LDAPS (and will not
listen on a plaintext LDAP port).
### Writes

Writes are rejected for all users via the LDAP interface.
### Access Controls

LDAP only supports password authentication. As LDAP is used heavily in POSIX environments the LDAP
bind for any DN will use its configured posix password.

As the POSIX password is not equivalent in strength to the primary credentials of Kanidm (which may
be multi-factor authentication, MFA), the LDAP bind does not grant rights to elevated read
permissions. All binds have the permissions of "Anonymous" even if the anonymous account is locked.

The exception is service accounts which can use api-tokens during an LDAP bind for elevated read
permissions.
## Server Configuration

To configure Kanidm to provide LDAP, add the argument to the `server.toml` configuration:

```toml
ldapbindaddress = "127.0.0.1:3636"
```

You should configure TLS certificates and keys as usual - LDAP will re-use the Web server TLS
material.
## Showing LDAP Entries and Attribute Maps

By default Kanidm is limited in what attributes are generated or remapped into LDAP entries.
However, the server internally contains a map of extended attribute mappings for application
specific requests that must be satisfied.

An example is that some applications expect and require a 'CN' value, even though Kanidm does not
provide it. If the application is unable to be configured to accept "name" it may be necessary to
use Kanidm's mapping feature. Currently these are compiled into the server, so you may need to open
an issue with your requirements for attribute maps.

To show what attribute maps exist for an entry you can use the attribute search term '+'.
```bash
# To show Kanidm attributes
ldapsearch ... -x '(name=admin)' '*'
# To show all attribute maps
ldapsearch ... -x '(name=admin)' '+'
```

Attributes that are in the map can be requested explicitly, and this can be combined with requesting
Kanidm native attributes.

```bash
ldapsearch ... -x '(name=admin)' cn objectClass displayname memberof
```
## Service Accounts

If you have
[issued api tokens for a service account](../accounts_and_groups.html#using-api-tokens-with-service-accounts)
they can be used to gain extended read permissions for those service accounts.

Api tokens can also be used to gain extended search permissions with LDAP. To do this you can bind
with a dn of `dn=token` and provide the api token in the password.

> **NOTE** The `dn=token` keyword is guaranteed to not be used by any other entry, which is why it
> was chosen as the keyword to initiate api token binds.

```bash
ldapwhoami -H ldaps://URL -x -D "dn=token" -w "TOKEN"
ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..."
# u: demo_service@idm.example.com
```
## Example

Given a default install with domain "example.com" the configured LDAP DN will be
"dc=example,dc=com".

```toml
# from server.toml
ldapbindaddress = "[::]:3636"
```
This can be queried with:

```bash
LDAPTLS_CACERT=ca.pem ldapsearch \
    -H ldaps://127.0.0.1:3636 \
    -b 'dc=example,dc=com' \
    -x '(name=test1)'

name: test1
spn: test1@example.com
entryuuid: 22a65b6c-80c8-4e1a-9b76-3f3afdff8400
```
It is recommended that client applications filter accounts that can login with `(class=account)` and
groups with `(class=group)`. If possible, group membership is defined in RFC2307bis or Active
Directory style. This means groups are determined from the "memberof" attribute which contains a DN
to a group.

LDAP binds can use any unique identifier of the account. The following are all valid bind DNs for
the object listed above (if it was a POSIX account, that is).
```bash
ldapwhoami ... -x -D 'name=test1'
ldapwhoami ... -x -D 'spn=test1@example.com'
ldapwhoami ... -x -D 'test1@example.com'
ldapwhoami ... -x -D '22a65b6c-80c8-4e1a-9b76-3f3afdff8400'
ldapwhoami ... -x -D 'spn=test1@example.com,dc=example,dc=com'
ldapwhoami ... -x -D 'name=test1,dc=example,dc=com'
```
Most LDAP clients are very picky about TLS, and can be very hard to debug or display errors. For
example these commands:

```bash
ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3389 -b 'dc=example,dc=com' -x '(name=test1)'
```
All give the same error:

```
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
```

This is despite the fact:

- The first command is a certificate validation error.
- The second is a missing LDAPS on a TLS port.
- The third is an incorrect port.

To diagnose errors like this, you may need to add "-d 1" to your LDAP commands or client.
# OAuth2

OAuth is a web authorisation protocol that allows "single sign on". It's key to note OAuth only
provides authorisation, as the protocol in its default forms does not provide identity or
authentication information. All that OAuth2 provides is information that an entity is authorised for
the requested resources.

OAuth can tie into extensions allowing an identity provider to reveal information about authorised
sessions. This extends OAuth from an authorisation-only system to a system capable of identity and
authorisation. Two primary methods of this exist today: RFC7662 token introspection, and OpenID
Connect.
## How Does OAuth2 Work?

A user wishes to access a service (resource, resource server). The resource server does not have an
active session for the client, so it redirects to the authorisation server (Kanidm) to determine if
the client should be allowed to proceed, and has the appropriate permissions (scopes) for the
requested resources.

The authorisation server checks the current session of the user and may present a login flow if
required. Given the identity of the user known to the authorisation server, and the requested
scopes, the authorisation server makes a decision if it allows the authorisation to proceed. The
user is then prompted to consent to the authorisation from the authorisation server to the resource
server as some identity information may be revealed by granting this consent.
If successful and consent given, the user is redirected back to the resource server with an
authorisation code. The resource server then contacts the authorisation server directly with this
code and exchanges it for a valid token that may be provided to the user's browser.
The resource server may then optionally contact the token introspection endpoint of the
authorisation server about the provided OAuth token, which yields extra metadata about the identity
that holds the token from the authorisation. This metadata may include identity information, but
also may include extended metadata, sometimes referred to as "claims". Claims are information bound
to a token based on properties of the session that may allow the resource server to make extended
authorisation decisions without the need to contact the authorisation server to arbitrate.
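The introspection call described above is a plain RFC 7662 HTTP POST. A minimal sketch using only
the Python standard library follows; the client id, secret, and token values are hypothetical
placeholders, and the endpoint is the one listed later in this chapter.

```python
# Sketch of an RFC 7662 token introspection request. The resource server
# authenticates with HTTP basic auth and posts the token as form data.
# client_id/client_secret/token are placeholder values, not real credentials.
import base64
import urllib.parse
import urllib.request

client_id = "nextcloud"          # hypothetical resource server name
client_secret = "basic-secret"   # hypothetical oauth2_rs_basic_secret value
token = "ACCESS_TOKEN"           # the token to introspect

creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
req = urllib.request.Request(
    "https://idm.example.com/oauth2/token/introspect",
    data=urllib.parse.urlencode({"token": token}).encode(),
    headers={
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    },
    method="POST",
)
# Sending with urllib.request.urlopen(req) would return a JSON document whose
# "active" field reports whether the token is currently valid, plus any claims.
```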
It's important to note that OAuth2 at its core is an authorisation system which has layered
identity-providing elements on top.

### Resource Server

This is the server that a user wants to access. Common examples could be Nextcloud, a wiki, or
something else. This is the system that "needs protecting" and wants to delegate authorisation
decisions to Kanidm.

It's important for you to know _how_ your resource server supports OAuth2. For example, does it
support RFC 7662 token introspection or does it rely on OpenID connect for identity information?
Does the resource server support PKCE S256?
In general Kanidm requires that your resource server supports:

- HTTP basic authentication to the authorisation server
- PKCE S256 code verification to prevent certain token attack classes
- OIDC only - JWT ES256 for token signatures
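PKCE S256 (RFC 7636), required above, is simple to sketch: the client keeps a random verifier
secret and sends only its SHA-256, base64url-encoded challenge with the authorisation request. The
function name here is illustrative.

```python
# Sketch of the PKCE S256 pair a client generates, per RFC 7636. The verifier
# stays with the client; the challenge travels with the authorisation request.
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Return (code_verifier, code_challenge) for an S256 PKCE flow."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The verifier is only revealed later at the token endpoint, proving that the
# authorisation request and the token request came from the same client.
```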
Kanidm will expose its OAuth2 APIs at the following URLs:

- user auth url: `https://idm.example.com/ui/oauth2`
- api auth url: `https://idm.example.com/oauth2/authorise`
- token url: `https://idm.example.com/oauth2/token`
- rfc7662 token introspection url: `https://idm.example.com/oauth2/token/introspect`
- rfc7009 token revoke url: `https://idm.example.com/oauth2/token/revoke`
OpenID Connect discovery - you need to substitute your OAuth2 client id in the following urls:

- OpenID connect issuer uri: `https://idm.example.com/oauth2/openid/:client\_id:/`
- OpenID connect discovery:
  `https://idm.example.com/oauth2/openid/:client\_id:/.well-known/openid-configuration`

For manual OpenID configuration:

- OpenID connect userinfo: `https://idm.example.com/oauth2/openid/:client\_id:/userinfo`
- token signing public key: `https://idm.example.com/oauth2/openid/:client\_id:/public\_key.jwk`
### Scope Relationships

For an authorisation to proceed, the resource server will request a list of scopes, which are unique
to that resource server. For example, when a user wishes to login to the admin panel of the resource
server, it may request the "admin" scope from Kanidm for authorisation. But when a user wants to
login, it may only request "access" as a scope from Kanidm.
As each resource server may have its own scopes and understanding of these, Kanidm isolates scopes
to each resource server connected to Kanidm. Kanidm has two methods of granting scopes to accounts
(users).

The first is scope mappings. These provide a set of scopes if a user is a member of a specific group
within Kanidm. This allows you to create a relationship between the scopes of a resource server, and
the groups/roles in Kanidm which can be specific to that resource server.

For an authorisation to proceed, all scopes requested by the resource server must be available in
the final scope set that is granted to the account.
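The "all requested scopes must be granted" rule can be pictured as a simple set check. This is a toy
model only; the group and scope names are made up and this is not Kanidm's internal code.

```python
# Toy model of the scope rule above: authorisation proceeds only when every
# requested scope is present in the scopes granted via group membership.

def scopes_granted(user_groups: set[str], scope_maps: dict[str, set[str]]) -> set[str]:
    """Union of scopes from every scope map whose group the user belongs to."""
    granted: set[str] = set()
    for group, scopes in scope_maps.items():
        if group in user_groups:
            granted |= scopes
    return granted

# Hypothetical scope maps for a resource server.
scope_maps = {"nextcloud_users": {"openid", "access"}, "nextcloud_admins": {"admin"}}
granted = scopes_granted({"nextcloud_users"}, scope_maps)

assert {"openid", "access"} <= granted       # requested is a subset: allowed
assert not ({"openid", "admin"} <= granted)  # "admin" not granted: rejected
```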
The second is supplemental scope mappings. These function the same as scope maps where membership of
a group provides a set of scopes to the account. However these scopes are NOT consulted during
authorisation decisions made by Kanidm. These scopes exist to allow optional properties to be
provided (such as personal information about a subset of accounts to be revealed) or so that the
resource server may make its own authorisation decisions based on the provided scopes.

This use of scopes is the primary means to control who can access what resources. These access
decisions can take place either on Kanidm or the resource server.
For example, if you have a resource server that always requests a scope of "read", then users with
scope maps that supply the read scope will be allowed by Kanidm to proceed to the resource server.
Kanidm can then provide the supplementary scopes into provided tokens, so that the resource server
can use these to choose if it wishes to display UI elements. If a user has a supplemental "admin"
scope, then that user may be able to access an administration panel of the resource server.
### Create the Kanidm Configuration

After you have understood your resource server requirements you first need to configure Kanidm. By
default members of "system\_admins" or "idm\_hp\_oauth2\_manage\_priv" are able to create or manage
OAuth2 resource server integrations.

You can create a new resource server with:

```bash
kanidm system oauth2 create <name> <displayname> <origin>
kanidm system oauth2 create nextcloud "Nextcloud Production" https://nextcloud.example.com
```
You can create a scope map with:

```bash
kanidm system oauth2 update_scope_map <name> <kanidm_group_name> [scopes]...
kanidm system oauth2 update_scope_map nextcloud nextcloud_admins admin
```
{{#template ../templates/kani-warning.md imagepath=../images title=WARNING text=If you are creating
an OpenID Connect (OIDC) resource server you <b>MUST</b> provide a scope map named
<code>openid</code>. Without this, OpenID clients <b>WILL NOT WORK</b> }}
> **HINT** OpenID connect allows a number of scopes that affect the content of the resulting
> authorisation token. If one of the following scopes are requested by the OpenID client, then the
> associated claims may be added to the authorisation token. It is not guaranteed that all of the
> associated claims will be added.
>
> - profile - (name, family\_name, given\_name, middle\_name, nickname, preferred\_username,
>   profile, picture, website, gender, birthdate, zoneinfo, locale, and updated\_at)
> - email - (email, email\_verified)
> - address - (address)
> - phone - (phone\_number, phone\_number\_verified)
You can create a supplemental scope map with:

```bash
kanidm system oauth2 update_sup_scope_map <name> <kanidm_group_name> [scopes]...
kanidm system oauth2 update_sup_scope_map nextcloud nextcloud_admins admin
```
Once created you can view the details of the resource server.

```bash
kanidm system oauth2 get nextcloud
---
class: oauth2_resource_server
oauth2_rs_name: nextcloud
oauth2_rs_origin: https://nextcloud.example.com
oauth2_rs_token_key: hidden
```
### Configure the Resource Server

On your resource server, you should configure the client ID as the "oauth2\_rs\_name" from Kanidm,
and the password to be the value shown in "oauth2\_rs\_basic\_secret". Ensure that the code
challenge/verification method is set to S256.

You should now be able to test authorisation.
## Resetting Resource Server Security Material

In the case of disclosure of the basic secret, or some other security event where you may wish to
invalidate a resource server's active sessions/tokens, you can reset the secret material of the
server with:

```bash
kanidm system oauth2 reset_secrets
```

Each resource server has unique signing keys and access secrets, so this is limited to each resource
server.
## Extended Options for Legacy Clients

Not all resource servers support modern standards like PKCE or ECDSA. In these situations it may be
necessary to disable these on a per-resource server basis. Disabling these on one resource server
will not affect others.

{{#template ../templates/kani-warning.md imagepath=../images title=WARNING text=Changing these
settings MAY have serious consequences on the security of your resource server. You should avoid
changing these if at all possible! }}
To disable PKCE for a resource server:

```bash
kanidm system oauth2 warning_insecure_client_disable_pkce <resource server name>
```

To enable legacy cryptography (RSA PKCS1-5 SHA256):

```bash
kanidm system oauth2 warning_enable_legacy_crypto <resource server name>
```
## Example Integrations

### Apache mod\_auth\_openidc

Add the following to a `mod_auth_openidc.conf`. It should be included in a `mods_enabled` folder or
with an appropriate include.

```
OIDCRedirectURI /protected/redirect_uri
OIDCCryptoPassphrase <random password here>
OIDCProviderMetadataURL https://kanidm.example.com/oauth2/openid/<resource server name>/.well-known/openid-configuration

# Set the `REMOTE_USER` field to the `preferred_username` instead of the UUID.
# Remember that the username can change, but this can help with systems like Nagios which use this as a display name.
# OIDCRemoteUserClaim preferred_username
```
Other scopes can be added as required to the `OIDCScope` line, eg:
`OIDCScope "openid scope2 scope3"`

In the virtual host, to protect a location:

```apache
<Location />
    AuthType openid-connect
    Require valid-user
</Location>
```
### Nextcloud

Install the module [from the nextcloud market place](https://apps.nextcloud.com/apps/user_oidc) - it
can also be found in the Apps section of your deployment as "OpenID Connect user backend".

In Nextcloud's config.php you need to allow connection to remote servers:

```php
'allow_local_remote_servers' => true,
```

You may optionally choose to add:

```php
'allow_user_to_change_display_name' => false,
'lost_password_link' => 'disabled',
```

If you forget this, you may see the following error in logs:

```
Host 172.24.11.129 was not connected to because it violates local access rules
```

This module does not support PKCE or ES256. You will need to run:

```bash
kanidm system oauth2 warning_insecure_client_disable_pkce <resource server name>
kanidm system oauth2 warning_enable_legacy_crypto <resource server name>
```

In the settings menu, configure the discovery URL and client ID and secret.

You can choose to disable other login methods with:

```bash
php occ config:app:set --value=0 user_oidc allow_multiple_user_backends
```

You can login directly by appending `?direct=1` to your login page. You can re-enable other backends
by setting the value to `1`.
### Velociraptor

Velociraptor supports OIDC. To configure it select "Authenticate with SSO" then "OIDC" during the
interactive configuration generator. Alternatively, you can set the following keys in
server.config.yaml:

```yaml
GUI:
  authenticator:
    type: OIDC
    oidc_issuer: https://idm.example.com/oauth2/openid/:client_id:/
    oauth_client_id: <resource server name>
    oauth_client_secret: <resource server secret>
```

Velociraptor does not support PKCE. You will need to run the following:

```bash
kanidm system oauth2 warning_insecure_client_disable_pkce <resource server name>
```

Initial users are mapped via their email in the Velociraptor server.config.yaml config:

```yaml
GUI:
  initial_users:
    - name: <email address>
```

Accounts require the `openid` and `email` scopes to be authenticated. It is recommended you limit
these to a group with a scope map due to Velociraptor's high impact.

```bash
# kanidm group create velociraptor_users
# kanidm group add_members velociraptor_users ...
kanidm system oauth2 create_scope_map <resource server name> velociraptor_users openid email
```
### Vouch Proxy

> **WARNING** Vouch Proxy requires a unique identifier but does not use the proper scope, "sub". It
> uses the fields "username" or "email" as primary identifiers instead. As a result, this can cause
> user or deployment issues, at worst security bypasses. You should avoid Vouch Proxy if possible
> due to these issues.
>
> - <https://github.com/vouch/vouch-proxy/issues/309>
> - <https://github.com/vouch/vouch-proxy/issues/310>

Note: **You need to run at least version 0.37.0**

Vouch Proxy supports multiple OAuth and OIDC login providers. To configure it you need to pass:

```yaml
oauth:
```

@@ -324,4 +354,6 @@ oauth:

The `email` scope needs to be passed and thus the mail attribute needs to exist on the account:

```bash
kanidm person update <ID> --mail "YYYY@somedomain.com" --name idm_admin
```
@@ -1,15 +1,15 @@

# PAM and nsswitch

[PAM](http://linux-pam.org) and [nsswitch](https://en.wikipedia.org/wiki/Name_Service_Switch) are
the core mechanisms used by Linux and BSD clients to resolve identities from an IDM service like
Kanidm into accounts that can be used on the machine for various interactive tasks.

## The UNIX Daemon

Kanidm provides a UNIX daemon that runs on any client that wants to use PAM and nsswitch
integration. The daemon can cache the accounts for users who have unreliable networks, or who leave
the site where Kanidm is hosted. The daemon is also able to cache missing-entry responses to reduce
network traffic and main server load.

Additionally, running the daemon means that the PAM and nsswitch integration libraries can be small,
helping to reduce the attack surface of the machine. Similarly, a tasks daemon is available that can
@@ -18,28 +18,35 @@ these home directories.

We recommend you install the client daemon from your system package manager:

```bash
# OpenSUSE
zypper in kanidm-unixd-clients

# Fedora
dnf install kanidm-unixd-clients
```

You can check the daemon is running on your Linux system with:

```bash
systemctl status kanidm-unixd
```

You can check the privileged tasks daemon is running with:

```bash
systemctl status kanidm-unixd-tasks
```
> **NOTE** The `kanidm_unixd_tasks` daemon is not required for PAM and nsswitch functionality. If
> disabled, your system will function as usual. It is, however, recommended due to the features it
> provides supporting Kanidm's capabilities.

Both unixd daemons use the connection configuration from /etc/kanidm/config. This is covered in
[client_tools](./client_tools.md#kanidm-configuration).

You can also configure some unixd-specific options with the file /etc/kanidm/unixd:

```toml
pam_allowed_login_groups = ["posix_group"]
default_shell = "/bin/sh"
home_prefix = "/home/"
@@ -48,126 +55,140 @@ You can also configure some unixd-specific options with the file /etc/kanidm/uni

use_etc_skel = false
uid_attr_map = "spn"
gid_attr_map = "spn"
```

`pam_allowed_login_groups` defines a set of POSIX groups where membership of any of these groups
will be allowed to login via PAM. All POSIX users and groups can be resolved by nss regardless of
PAM login status. This may be a group name, spn, or uuid.
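Since entries may be a group name, spn, or uuid, a /etc/kanidm/unixd fragment can mix the three
forms. An illustrative sketch - the group names and UUID below are hypothetical:

```toml
# Any one of these identifier forms is accepted per entry:
pam_allowed_login_groups = [
    "posix_group",                          # plain group name
    "posix_group@idm.example.com",          # spn
    "b71f137e-39f3-4368-9e58-21d26671ae24", # uuid
]
```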
`default_shell` is the default shell for users. Defaults to `/bin/sh`.

`home_prefix` is the prepended path to where home directories are stored. Must end with a trailing
`/`. Defaults to `/home/`.

`home_attr` is the default token attribute used for the home directory path. Valid choices are
`uuid`, `name`, `spn`. Defaults to `uuid`.

`home_alias` is the default token attribute used for generating symlinks pointing to the user's home
directory. If set, this will become the value of the home path to nss calls. It is recommended you
choose a "human friendly" attribute here. Valid choices are `none`, `uuid`, `name`, `spn`. Defaults
to `spn`.
> **NOTICE:** All users in Kanidm can change their name (and their spn) at any time. If you change
> `home_attr` from `uuid` you _must_ have a plan on how to manage these directory renames in your
> system. We recommend that you have a stable ID (like the UUID), and symlinks from the name to the
> UUID folder. Automatic support is provided for this via the unixd tasks daemon, as documented
> here.
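To make the recommendation concrete, the stable-UUID layout can be sketched in plain shell. This is
illustrative only - the unixd tasks daemon maintains this for you, and the UUID and spn values below
are hypothetical:

```shell
# The real directory is keyed by the account UUID; a human-friendly symlink
# (the spn) points at it, so an account rename only moves the symlink.
base="$(mktemp -d)"                         # stand-in for /home
uuid="b71f137e-39f3-4368-9e58-21d26671ae24" # hypothetical account UUID
spn="demo_user@idm.example.com"             # hypothetical spn

mkdir "${base}/${uuid}"
ln -s "${base}/${uuid}" "${base}/${spn}"

# After a rename, only the symlink changes; the UUID directory is untouched:
new_spn="renamed_user@idm.example.com"
mv "${base}/${spn}" "${base}/${new_spn}"

readlink "${base}/${new_spn}"   # still resolves to the stable UUID directory
```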
`use_etc_skel` controls if home directories should be prepopulated with the contents of `/etc/skel`
when first created. Defaults to false.

`uid_attr_map` chooses which attribute is used for domain local users in presentation. Defaults to
`spn`. Users from a trust will always use spn.

`gid_attr_map` chooses which attribute is used for domain local groups in presentation. Defaults to
`spn`. Groups from a trust will always use spn.

You can then check the communication status of the daemon:

```bash
kanidm_unixd_status
```

If the daemon is working, you should see:

```
[2020-02-14T05:58:37Z INFO  kanidm_unixd_status] working!
```

If it is not working, you will see an error message:

```
[2020-02-14T05:58:10Z ERROR kanidm_unixd_status] Error ->
   Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }
```

For more information, see the [Troubleshooting](./pam_and_nsswitch.md#troubleshooting) section.
## nsswitch

When the daemon is running you can add the nsswitch libraries to /etc/nsswitch.conf

```
passwd: compat kanidm
group: compat kanidm
```

You can [create a user](./accounts_and_groups.md#creating-accounts) then
[enable POSIX feature on the user](./posix_accounts.md#enabling-posix-attributes-on-accounts).

You can then test that the POSIX extended user is able to be resolved with:

```bash
getent passwd <account name>
getent passwd testunix
testunix:x:3524161420:3524161420:testunix:/home/testunix:/bin/sh
```

You can also do the same for groups.

```bash
getent group <group name>
getent group testgroup
testgroup:x:2439676479:testunix
```

> **HINT** Remember to also create a UNIX password with something like
> `kanidm account posix set_password --name idm_admin demo_user`. Otherwise there will be no
> credential for the account to authenticate.
## PAM

> **WARNING:** Modifications to PAM configuration _may_ leave your system in a state where you are
> unable to login or authenticate. You should always have a recovery shell open while making changes
> (for example, root), or have access to single-user mode at the machine's console.

Pluggable Authentication Modules (PAM) is the mechanism a UNIX-like system uses to authenticate
users and to control access to some resources. This is configured through a stack of modules that
are executed in order to evaluate the request; each module may request or reuse authentication
token information.

### Before You Start

You _should_ backup your /etc/pam.d directory from its original state as you _may_ change the PAM
configuration in a way that will not allow you to authenticate to your machine.

```bash
cp -a /etc/pam.d /root/pam.d.backup
```
### SUSE / OpenSUSE

To configure PAM on SUSE you must modify four files, which control the various stages of
authentication:

```bash
/etc/pam.d/common-account
/etc/pam.d/common-auth
/etc/pam.d/common-password
/etc/pam.d/common-session
```

> **IMPORTANT** By default these files are symlinks to their corresponding `-pc` file, for example
> `common-account -> common-account-pc`. If you directly edit these you are updating the inner
> content of the `-pc` file and it WILL be reset on a future upgrade. To prevent this you must first
> copy the `-pc` files. You can then edit the files safely.

```bash
cp /etc/pam.d/common-account-pc  /etc/pam.d/common-account
cp /etc/pam.d/common-auth-pc     /etc/pam.d/common-auth
cp /etc/pam.d/common-password-pc /etc/pam.d/common-password
cp /etc/pam.d/common-session-pc  /etc/pam.d/common-session
```

The content should look like:

```
# /etc/pam.d/common-auth-pc
# Controls authentication to this system (verification of credentials)
auth        required      pam_env.so
@@ -203,27 +224,31 @@ The content should look like:

session [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail
session optional    pam_kanidm.so
session optional    pam_env.so
```

> **WARNING:** Ensure that `pam_mkhomedir` or `pam_oddjobd` are _not_ present in any stage of your
> PAM configuration, as they interfere with the correct operation of the Kanidm tasks daemon.
### Fedora / CentOS

> **WARNING:** Kanidm currently has no support for SELinux policy - this may mean you need to run
> the daemon with permissive mode for the unconfined_service_t daemon type. To do this run:
> `semanage permissive -a unconfined_service_t`. To undo this run
> `semanage permissive -d unconfined_service_t`.
>
> You may also need to run `audit2allow` for sshd and other types to be able to access the UNIX
> daemon sockets.

These files are managed by authselect as symlinks. You can either work with authselect, or remove
the symlinks first.

#### Without authselect

If you just remove the symlinks:

Edit the content.

```
# /etc/pam.d/password-auth
auth        required      pam_env.so
auth        required      pam_faildelay.so delay=2000000
@@ -282,22 +307,29 @@ Edit the content.

session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required    pam_unix.so
session optional    pam_kanidm.so
```
#### With authselect

To work with authselect:

You will need to
[create a new profile](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel#creating-and-deploying-your-own-authselect-profile_configuring-user-authentication-using-authselect).

<!--TODO this URL is too short -->

First run the following command:

```bash
authselect create-profile kanidm -b sssd
```

A new folder, /etc/authselect/custom/kanidm, should be created. Inside that folder, create or
overwrite the following three files: nsswitch.conf, password-auth, system-auth. password-auth and
system-auth should be the same as above. nsswitch should be modified for your use case. A working
example looks like this:
```
passwd: compat kanidm sss files systemd
group: compat kanidm sss files systemd
shadow: files

@@ -313,10 +345,13 @@ modified for your use case. A working example looks like this:

protocols: files
publickey: files
rpc: files
```

Then run:

```bash
authselect select custom/kanidm
```

to update your profile.
@@ -324,16 +359,17 @@ to update your profile.

### Check POSIX-status of Group and Configuration

If authentication is failing via PAM, make sure that a list of groups is configured in
`/etc/kanidm/unixd`:

```toml
pam_allowed_login_groups = ["example_group"]
```

Check the status of the group with `kanidm group posix show example_group`. If you get something
similar to the following example:

```bash
> kanidm group posix show example_group
Using cached token for name idm_admin
Error -> Http(500, Some(InvalidAccountState("Missing class: account && posixaccount OR group && posixgroup")),
```
@@ -343,14 +379,14 @@ Error -> Http(500, Some(InvalidAccountState("Missing class: account && posixacco

POSIX-enable the group with `kanidm group posix set example_group`. You should get a result similar
to this when you search for your group name:

```bash
> kanidm group posix show example_group
[ spn: example_group@kanidm.example.com, gidnumber: 3443347205 name: example_group, uuid: b71f137e-39f3-4368-9e58-21d26671ae24 ]
```

Also, ensure the target user is in the group by running:

```bash
> kanidm group list_members example_group
```
@@ -358,12 +394,16 @@ Also, ensure the target user is in the group by running:

For the unixd daemon, you can increase the logging with:

```bash
systemctl edit kanidm-unixd.service
```

And add the lines:

```
[Service]
Environment="RUST_LOG=kanidm=debug"
```

Then restart the kanidm-unixd.service.

@@ -371,33 +411,39 @@ The same pattern is true for the kanidm-unixd-tasks.service daemon.

To debug the pam module interactions add `debug` to the module arguments such as:

```
auth sufficient pam_kanidm.so debug
```
### Check the Socket Permissions

Check that the `/var/run/kanidm-unixd/sock` has permissions mode 777, and that non-root readers can
see it with ls or other tools.

Ensure that `/var/run/kanidm-unixd/task_sock` has permissions mode 700, and that it is owned by the
kanidm unixd process user.

### Verify that You Can Access the Kanidm Server

You can check this with the client tools:

```bash
kanidm self whoami --name anonymous
```
### Ensure the Libraries are Correct

You should have:

```bash
/usr/lib64/libnss_kanidm.so.2
/usr/lib64/security/pam_kanidm.so
```

The exact path _may_ change depending on your distribution; `pam_unix.so` should be co-located with
pam_kanidm.so. Look for it with the find command:

```bash
find /usr/ -name 'pam_unix.so'
```
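A small shell loop can report which of the expected libraries are present. The paths below are the
common lib64 locations from above and may differ on your distribution:

```shell
# Print a found/missing report for the Kanidm NSS and PAM libraries.
# Adjust the paths for your distribution's library layout.
for lib in \
    /usr/lib64/libnss_kanidm.so.2 \
    /usr/lib64/security/pam_kanidm.so; do
    if [ -e "$lib" ]; then
        echo "found:   $lib"
    else
        echo "missing: $lib"
    fi
done
```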
@@ -405,36 +451,41 @@ For example, on a Debian machine, it's located in `/usr/lib/x86_64-linux-gnu/sec

### Increase Connection Timeout

In some high-latency environments, you may need to increase the connection timeout. We set this low
to improve response on LANs, but over the internet this may need to be increased. By increasing the
conn_timeout, you will be able to operate on higher latency links, but some operations may take
longer to complete causing a degree of latency.

By increasing the cache_timeout, you will need to refresh less often, but it may result in an
account lockout or group change until cache_timeout takes effect. Note that this has security
implications:

```toml
# /etc/kanidm/unixd
# Seconds
conn_timeout = 8
# Cache timeout
cache_timeout = 60
```
### Invalidate or Clear the Cache

You can invalidate the kanidm_unixd cache with:

```bash
kanidm_cache_invalidate
```

You can clear (wipe) the cache with:

```bash
kanidm_cache_clear
```

There is an important distinction between these two - invalidated cache items may still be yielded
to a client request if the communication to the main Kanidm server is not possible. For example, you
may have your laptop in a park without wifi.

Clearing the cache, however, completely wipes all local data about all accounts and groups. If you
are relying on this cached (but invalid) data, you may lose access to your accounts until other
communication issues have been resolved.
@@ -1,14 +1,13 @@

# RADIUS

Remote Authentication Dial In User Service (RADIUS) is a network protocol that is commonly used to
authenticate Wi-Fi devices or Virtual Private Networks (VPNs). While it should not be a sole point
of trust/authentication to an identity, it's still an important control for protecting network
resources.

Kanidm has a philosophy that each account can have multiple credentials which are related to their
devices, and limited to specific resources. RADIUS is no exception and has a separate credential for
each account to use for RADIUS access.
## Disclaimer ## Disclaimer
It's worth noting some disclaimers about Kanidm's RADIUS integration.
### One Credential - One Account

Kanidm normally attempts to have credentials for each _device_ and _application_ rather than the
legacy model of one to one.

The RADIUS protocol is only able to attest a _single_ password based credential in an authentication
attempt, which limits us to storing a single RADIUS password credential per account. However,
despite this limitation, it still greatly improves the situation by isolating the RADIUS credential
from the primary or application credentials of the account. This solves many common security
concerns around credential loss or disclosure, and prevents rogue devices from locking out accounts
as they attempt to authenticate to Wi-Fi with expired credentials.

Alternatively, Kanidm supports mapping users with special configuration of certificates allowing
some systems to use EAP-TLS for RADIUS authentication. This returns to the "per device" credential
model.
### Cleartext Credential Storage

RADIUS offers many different types of tunnels and authentication mechanisms. However, most client
devices "out of the box" only attempt a single type when a WPA2-Enterprise network is selected:
MSCHAPv2 with PEAP. This is a challenge-response protocol that requires clear text or Windows NT LAN
Manager (NTLM) credentials.
As MSCHAPv2 with PEAP is the only practical, universal RADIUS-type supported on all devices with
minimal configuration, we consider it imperative that it MUST be supported as the default. Esoteric
RADIUS types can be used as well, but this is up to administrators to test and configure.

Due to this requirement, we must store the RADIUS material as clear text or NTLM hashes. It would be
silly to think that NTLM is secure as it relies on the obsolete and deprecated MD4 cryptographic
hash, providing only an illusion of security.

This means Kanidm stores RADIUS credentials in the database as clear text.

We believe this is a reasonable decision and is a low risk to security because:

- The access controls around RADIUS secrets by default are strong, limited to only self-account read
  and RADIUS-server read.
- As RADIUS credentials are separate from the primary account credentials and have no other rights,
  their disclosure is not going to lead to a full account compromise.
- Having the credentials in clear text allows a better user experience as clients can view the
  credentials at any time to enroll further devices.
### Service Accounts Do Not Have Radius Access

Due to the design of service accounts, they do not have access to radius for credential assignment.
If you require RADIUS usage with a service account you _may_ need to use EAP-TLS or some other
authentication method.
## Account Credential Configuration

For an account to use RADIUS they must first generate a RADIUS secret unique to that account. By
default, all accounts can self-create this secret.

```bash
kanidm person radius generate_secret --name william william
kanidm person radius show_secret --name william william
```
## Account Group Configuration

In Kanidm, accounts which can authenticate to RADIUS must be a member of an allowed group. This
allows you to define which users or groups may use a Wi-Fi or VPN infrastructure, and provides a
path for revoking access to the resources through group management. The key point of this is that
service accounts should not be part of this group:

```bash
kanidm group create --name idm_admin radius_access_allowed
kanidm group add_members --name idm_admin radius_access_allowed william
```
## RADIUS Server Service Account

To read these secrets, the RADIUS server requires an account with the correct privileges. This can
be created and assigned through the group "idm_radius_servers", which is provided by default.

First, create the service account and add it to the group:

```bash
kanidm service-account create --name admin radius_service_account "Radius Service Account"
kanidm group add_members --name admin idm_radius_servers radius_service_account
```

Now reset the account password, using the `admin` account:

```bash
kanidm service-account credential generate-pw --name admin radius_service_account
```
## Deploying a RADIUS Container

We provide a RADIUS container that has all the needed integrations. This container requires some
cryptographic material, with the following files being in `/etc/raddb/certs`. (Modifiable in the
configuration)

| filename | description                                                   |
| -------- | ------------------------------------------------------------- |
| ca.pem   | The signing CA of the RADIUS certificate                      |
| dh.pem   | The output of `openssl dhparam -in ca.pem -out ./dh.pem 2048` |
| cert.pem | The certificate for the RADIUS server                         |
```toml
# radius_dh_path = "/etc/raddb/certs/dh.pem"
# the CA certificate
# radius_ca_path = "/etc/raddb/certs/ca.pem"
```
## A fully configured example

```toml
url = "https://example.com"

radius_clients = [
    { name = "docker" , ipaddr = "172.17.0.0/16", secret = "testing123" },
]
```
## Moving to Production

To expose this to a Wi-Fi infrastructure, add your NAS in the configuration:
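A NAS client entry follows the same shape as the `docker` client shown in the fully configured
example; the name, address range, and secret below are placeholders, not values from this guide:

```toml
# Hypothetical NAS entry - substitute your own name, network range and a
# strong shared secret for your access point or VPN concentrator.
radius_clients = [
    { name = "access_point", ipaddr = "10.0.0.0/24", secret = "<a strong shared secret>" },
]
```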
Then re-create/run your docker instance and expose the ports by adding
`-p 1812:1812 -p 1812:1812/udp` to the command.

If you have any issues, check the logs from the RADIUS output, as they tend to indicate the cause of
the problem. To increase the logging level you can re-run your environment with debug enabled:
```bash
docker rm radiusd
docker run --name radiusd \
    -e DEBUG=True \
    kanidm/radius:latest
```
Note: the RADIUS container _is_ configured to provide
[Tunnel-Private-Group-ID](https://freeradius.org/rfc/rfc2868.html#Tunnel-Private-Group-ID), so if
you wish to use Wi-Fi-assigned VLANs on your infrastructure, you can assign these by groups in the
configuration file as shown in the above examples.
# Traefik

Traefik is a flexible HTTP reverse proxy webserver that can be integrated with Docker to allow
dynamic configuration and to automatically use LetsEncrypt to provide valid TLS certificates. We can
leverage this in the setup of Kanidm by specifying the configuration of Kanidm and Traefik in the
same [Docker Compose configuration](https://docs.docker.com/compose/).

## Example setup

Create a new directory and copy the following YAML file into it as `docker-compose.yml`. Edit the
YAML to update the LetsEncrypt account email for your domain and the FQDN where Kanidm will be made
available. Ensure you adjust this file or Kanidm's configuration to have a matching HTTPS port; the
line `traefik.http.services.kanidm.loadbalancer.server.port=8443` sets this on the Traefik side.

> **NOTE** You will need to generate self-signed certificates for Kanidm, and copy the configuration
> into the `kanidm_data` volume. Some instructions are available in the "Installing the Server"
> section of this book.
`docker-compose.yml`

```yaml
version: "3.4"
```
# Introduction to Kanidm

Kanidm is an identity management server, acting as an authority on account information,
authentication and authorisation within a technical environment.

The intent of the Kanidm project is to:

- Provide a single truth source for accounts, groups and privileges.
- Enable integrations to systems and services so they can authenticate accounts.
- Make system, network, application and web authentication easy and accessible.
- Secure and reliable by default, aiming for the highest levels of quality.

{{#template templates/kani-warning.md imagepath=images title=NOTICE text=Kanidm is still a work in
progress. Many features will evolve and change over time which may not be suitable for all users. }}
## Why do I want Kanidm?

Whether you work in a business, a volunteer organisation, or are an enthusiast who manages their
personal services, you need methods of authenticating and identifying to your systems, and
subsequently, ways to determine what authorisation and privileges you have while accessing these
systems.

We've probably all been in workplaces where you end up with multiple accounts on various systems -
one for a workstation, different SSH keys for different tasks, maybe some shared account passwords.
Not only is it difficult for people to manage all these different credentials and what they have
access to, but it also means that sometimes these credentials have more access or privilege than
they require.

Kanidm acts as a central authority of accounts in your organisation and allows each account to
associate many devices and credentials with different privileges. An example of how this looks:
┌──────────────────┐
┌┴─────────────────┐│
│ You │
└──────────┘
A key design goal is that you authenticate with your device in some manner, and then your device
will continue to authenticate you in the future. Each of these different types of credentials, from
SSH keys, application passwords, to RADIUS passwords and others, are "things your device knows".
Each password has limited capability, and can only access that exact service or resource.

This helps improve security; a compromise of the service or the network transmission does not grant
you unlimited access to your account and all its privileges. As the credentials are specific to a
device, if a device is compromised you can revoke its associated credentials. If a specific service
is compromised, only the credentials for that service need to be revoked.

Due to this model, and the design of Kanidm to centre the device and to have more per-service
credentials, workflows and automation are added or designed to reduce human handling.

## Library documentation

Looking for the `rustdoc` documentation for the libraries themselves?
[Click here!](https://kanidm.com/documentation/)
kanidmd currently responds to HTTP GET requests at the `/status` endpoint with a response of
either "true" or "false". `true` indicates that the platform is responding to requests.
| URL                | `<hostname>/status`                              |
| ------------------ | ------------------------------------------------ |
| Example URL        | `https://example.com/status`                     |
| Expected response  | One of either `true` or `false` (without quotes) |
| Additional Headers | x-kanidm-opid                                    |
| Content Type       | application/json                                 |
| Cookies            | kanidm-session                                   |
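The endpoint above can be probed from a simple monitoring script. This is only a sketch: the
hostname is the example placeholder from the table, and `curl` availability is assumed.

```shell
# Probe the /status endpoint described above. HOST is a placeholder -
# substitute your own server. A healthy server returns the literal text "true".
HOST="https://example.com"
STATUS_OK=false
# -s: silent, -f: treat HTTP error codes as failure
if [ "$(curl -sf "$HOST/status" 2>/dev/null)" = "true" ]; then
  STATUS_OK=true
fi
echo "healthy: $STATUS_OK"
```

Against the placeholder hostname this will typically report `healthy: false`; point `HOST` at a
real kanidmd instance to get a meaningful result.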
Packages are known to exist for the following distributions:

To ease packaging for your distribution, the `Makefile` has targets for sets of binary outputs.

| Target               | Description           |
| -------------------- | --------------------- |
| `release/kanidm`     | Kanidm's CLI          |
| `release/kanidmd`    | The server daemon     |
| `release/kanidm-ssh` | SSH-related utilities |
This happens in Docker currently, and here are some instructions for doing it for Ubuntu:

1. Start in the root directory of the repository.
2. Run `./platform/debian/ubuntu_docker_builder.sh`. This'll start a container, mounting the
   repository in `~/kanidm/`.
3. Install the required dependencies by running `./platform/debian/install_deps.sh`.
4. Building packages uses make; get a list by running `make -f ./platform/debian/Makefile help`
```
debs/all:
    build all the debs
```
5. So if you wanted to build the package for the Kanidm CLI, run
   `make -f ./platform/debian/Makefile debs/kanidm`.
6. The package will be copied into the `target` directory of the repository on the docker host - not
   just in the container.
## Adding a package

There's a set of default configuration files in `packaging/`; if you want to add a package
definition, add a folder with the package name and then files in there will be copied over the top
of the ones from `packaging/` on build.
You'll need two custom files at minimum:

There are a lot of other files that can go into a .deb; some handy ones are:

| Filename | What it does                                                             |
| -------- | ------------------------------------------------------------------------ |
| preinst  | Runs before installation occurs                                          |
| postrm   | Runs after removal happens                                               |
| prerm    | Runs before removal happens - handy to shut down services.               |
| postinst | Runs after installation occurs - we're using that to show notes to users |
## Some Debian packaging links

- [DH reference](https://www.debian.org/doc/manuals/maint-guide/dreq.en.html) - Explains what needs
  to be done for packaging (mostly).
- [Reference for what goes in control files](https://www.debian.org/doc/debian-policy/ch-controlfields)
# Password Quality and Badlisting

Kanidm embeds a set of tools to help your users use and create strong passwords. This is important
as not all user types will require multi-factor authentication (MFA) for their roles, but
compromised accounts still pose a risk. There may also be deployment or other barriers to a site
rolling out sitewide MFA.
## Quality Checking

Kanidm enforces that all passwords are checked by the library
"[zxcvbn](https://github.com/dropbox/zxcvbn)". This has a large number of checks for password
quality. It also provides constructive feedback to users on how to improve their passwords if they
are rejected.

Some of the things that zxcvbn looks for are use of the account name or email in the password,
common passwords, low entropy passwords, dates, reverse words and more.

This library can not be disabled - all passwords in Kanidm must pass this check.
## Password Badlisting

This is the process of configuring a list of passwords to exclude from being able to be used. This
is especially useful if a specific business has been notified of compromised accounts, allowing you
to maintain a list of customised excluded passwords.

The other value to this feature is being able to badlist common passwords that zxcvbn does not
detect, or from other large scale password compromises.

By default we ship with a preconfigured badlist that is updated over time as new password breach
lists are made available.

The password badlist by default is append only, meaning it can only grow, but will never remove
passwords previously considered breached.
You can display the current badlist with:

```bash
kanidm system pw-badlist show
```

You can update your own badlist with:

```bash
kanidm system pw-badlist upload "path/to/badlist" [...]
```

Multiple bad lists can be listed and uploaded at once. These are preprocessed to identify and remove
passwords that zxcvbn and our password rules would already have eliminated. That helps to make the
bad list more efficient to operate over at run time.
## Password Rotation

Kanidm will never support this "anti-feature". Password rotation encourages poor password hygiene
and is not shown to prevent any attacks.
# POSIX Accounts and Groups

Kanidm has features that enable its accounts and groups to be consumed on POSIX-like machines, such
as Linux, FreeBSD, or others. Both service accounts and person accounts can be used on POSIX
systems.
## Notes on POSIX Features

Many design decisions have been made in the POSIX features of Kanidm that are intended to make
distributed systems easier to manage and client systems more secure.
### UID and GID Numbers

In Kanidm there is no difference between a UID and a GID number. On most UNIX systems a user will
create all files with a primary user and group. The primary group is effectively equivalent to the
permissions of the user. It is very easy to see scenarios where someone may change the account to
have a shared primary group (ie `allusers`), but without changing the umask on all client systems.
This can cause users' data to be compromised by any member of the same shared group.

To prevent this, many systems create a "user private group", or UPG. This group has the GID number
matching the UID of the user, and the user sets their primary group ID to the GID number of the UPG.

As there is now an equivalence between the UID and GID number of the user and the UPG, there is no
benefit in separating these values. As a result Kanidm accounts _only_ have a GID number, which is
also considered to be its UID number. This has the benefit of preventing the accidental creation of
a separate group that has an overlapping GID number (the `uniqueness` attribute of the schema will
block the creation).
### UPG Generation
Due to the requirement that a user have a UPG for security, many systems create these as two
independent items. For example in /etc/passwd and /etc/group:
```
# passwd
william:x:654401105:654401105::/home/william:/bin/zsh
# group
william:x:654401105:
```
Other systems like FreeIPA use a plugin that generates a UPG as a separate group entry on creation
of the account. This means there are two entries for an account, and they must be kept in lock-step.
Kanidm does neither of these. As the GID number of the user must be unique, and a user implies the
UPG must exist, we can generate UPGs on-demand from the account. This has a single side effect -
that you are unable to add any members to a UPG - given the nature of a user private group, this is
the point.
### GID Number Generation
Kanidm will have asynchronous replication as a feature between writable database servers. In this
case, we need to be able to allocate stable and reliable GID numbers to accounts on replicas that
may not be in continual communication.
To do this, we use the last 32 bits of the account or group's UUID to generate the GID number.
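As a sketch of how such a derivation can work (the UUID below is made up, and this illustrates the
low-32-bit idea rather than Kanidm's exact internal implementation), the last eight hex digits of a
UUID are its low 32 bits:

```shell
# Derive a 32-bit number from the low bits of a UUID (illustrative only).
uuid="d9793940-8b21-4f5d-a2c8-aaaa27015e51"
hex=$(printf '%s' "$uuid" | tr -d '-' | tail -c 8)  # last 8 hex digits = low 32 bits
gid=$((0x$hex))                                     # interpret as hexadecimal
echo "$gid"                                         # -> 654401105
```

Note how this matches the style of GID number seen in the /etc/passwd example above.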
A valid concern is the possibility of duplication in the lower 32 bits. Given the birthday problem,
if you have 77,000 groups and accounts, you have a 50% chance of duplication. With 50,000 you have a
20% chance, 9,300 you have a 1% chance and with 2,900 you have a 0.1% chance.
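These figures follow from the standard birthday-problem approximation over the 2^32 possible GID
values (a sketch of the reasoning, not part of the Kanidm implementation):

```latex
p(n) \approx 1 - e^{-n^{2} / (2 \cdot 2^{32})}
% n = 77{,}000 \Rightarrow p \approx 1 - e^{-0.69} \approx 0.50 \quad (50\%)
% n = 9{,}300  \Rightarrow p \approx 1 - e^{-0.01} \approx 0.01 \quad (1\%)
```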
We advise that if you have a site with >10,000 users you should use an external system to allocate
GID numbers serially or consistently to avoid potential duplication events.
This design decision is made as most small sites will benefit greatly from the auto-allocation
policy and the simplicity of its design, while larger enterprises will already have IDM or business
process applications for HR/People that are capable of supplying this kind of data in batch jobs.
## Enabling POSIX Attributes
@@ -78,6 +74,7 @@ To enable POSIX account features and IDs on an account, you require the permissi
You can then use the following command to enable POSIX extensions on a person or service account.
```bash
kanidm [person OR service-account] posix set --name idm_admin <account_id> [--shell SHELL --gidnumber GID]
kanidm person posix set --name idm_admin demo_user
```

@@ -87,39 +84,48 @@

```bash
kanidm service-account posix set --name idm_admin demo_account
kanidm service-account posix set --name idm_admin demo_account --shell /bin/zsh
kanidm service-account posix set --name idm_admin demo_account --gidnumber 2001
```
You can view the account's POSIX token details with:
```bash
kanidm person posix show --name anonymous demo_user
kanidm service-account posix show --name anonymous demo_account
```
### Enabling POSIX Attributes on Groups
To enable POSIX group features and IDs on a group, you require the permission
`idm_group_unix_extend_priv`. This is provided to `idm_admins` in the default database.
You can then use the following command to enable POSIX extensions:
```bash
kanidm group posix set --name idm_admin <group_id> [--gidnumber GID]
kanidm group posix set --name idm_admin demo_group
kanidm group posix set --name idm_admin demo_group --gidnumber 2001
```
You can view the group's POSIX token details with:
```bash
kanidm group posix show --name anonymous demo_group
```
POSIX-enabled groups will supply their members as POSIX members to clients. There is no special or
separate type of membership for POSIX members required.
## Troubleshooting Common Issues
### subuid conflicts with Podman
Due to the way that Podman operates, in some cases using the Kanidm client inside non-root
containers with Kanidm accounts may fail with an error such as:
```
ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid
```
This is a fault in Podman and how it attempts to provide non-root containers when UID/GIDs are
greater than 65535. In this case you may manually allocate your user's GID number to be between
1000 and 65535, which may not trigger the fault.
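What Podman is looking for is a `user:start:count` range in `/etc/subuid`. A quick way to see the
shape of such an entry (checked against a scratch copy here so the snippet is self-contained; the
`alice` entry is made up):

```shell
# /etc/subuid entries have the form "user:first_subuid:count".
subuid=$(mktemp)
printf 'alice:100000:65536\n' > "$subuid"
# Podman effectively performs a lookup like this for the invoking user:
if grep -q '^alice:' "$subuid"; then
  echo "alice has a subuid range"
else
  echo "no subuid range for alice"
fi
```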
@@ -2,7 +2,9 @@
## Software Installation Method
> **NOTE** Our preferred deployment method is in containers, and this documentation assumes you're
> running in docker. Kanidm will alternately run as a daemon/service, and server builds are
> available for multiple platforms if you prefer this option.
We provide docker images for the server components. They can be found at:
@@ -11,14 +13,19 @@ We provide docker images for the server components. They can be found at:
You can fetch these by running the commands:
```bash
docker pull kanidm/server:x86_64_latest
docker pull kanidm/radius:latest
```
If you do not meet the [system requirements](#system-requirements) for your CPU you should use:
```bash
docker pull kanidm/server:latest
```
You may need to adjust your example commands throughout this document to suit your desired server
type.
## Development Version
@@ -34,8 +41,10 @@ report issues, we will make every effort to help resolve them.
If you are using the x86\_64 cpu-optimised version, you must have a CPU that is from 2013 or newer
(Haswell, Ryzen). The following instruction flags are used.
```
cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2,
bmi, bmi2, f16c, fma, lzcnt, movbe, xsave
```
Older or unsupported CPUs may raise a SIGILL (Illegal Instruction) signal on hardware that is not
supported by the project.
@@ -45,48 +54,54 @@ In this case, you should use the standard server:latest image.
In the future we may apply a baseline of flags as a requirement for x86\_64 for the server:latest
image. These flags will be:
```
cmov, cx8, fxsr, mmx, sse, sse2
```
{{#template templates/kani-alert.md imagepath=images title=Tip text=You can check your cpu flags on
Linux with the command `lscpu` }}
#### Memory
Kanidm extensively uses memory caching, trading memory consumption to improve parallel throughput.
You should expect to see 64KB of RAM per entry in your database, depending on cache tuning and
settings.
#### Disk
You should expect to use up to 8KB of disk per entry you plan to store. At an estimate, a 10,000
entry database will consume 40MB, and a 100,000 entry database will consume 400MB.
For best performance, you should use non-volatile memory express (NVMe), or other flash storage
media.
## TLS
You'll need a volume where you can place configuration, certificates, and the database:
```bash
docker volume create kanidmd
```
You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring Transport
Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained in
[why tls](./why_tls.md). In summary, TLS is our root of trust between the server and clients, and a
critical element of ensuring a secure system.
The key.pem should be a single PEM private key, with no encryption. The file content should be
similar to:
```
-----BEGIN RSA PRIVATE KEY-----
MII...<base64>
-----END RSA PRIVATE KEY-----
```
The chain.pem is a series of PEM formatted certificates. The leaf certificate, or the certificate
that matches the private key, should be the first certificate in the file. This should be followed
by the series of intermediates, and the final certificate should be the CA root. For example:
```
-----BEGIN CERTIFICATE-----
<leaf certificate>
-----END CERTIFICATE-----
```

@@ -97,13 +112,14 @@

```
-----BEGIN CERTIFICATE-----
<ca/root certificate>
-----END CERTIFICATE-----
```
> **HINT** If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are
> already correctly formatted as required for Kanidm.
You can validate that the leaf certificate matches the key with the command:
```bash
# ECDSA
openssl ec -in key.pem -pubout | openssl sha1
1c7e7bf6ef8f83841daeedf16093bda585fc5bb0
```

@@ -115,26 +131,34 @@

```bash
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
# openssl x509 -noout -modulus -in chain.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
```
If your chain.pem contains the CA certificate, you can validate this file with the command:
```bash
openssl verify -CAfile chain.pem chain.pem
```
If your chain.pem does not contain the CA certificate (Let's Encrypt chains do not contain the CA,
for example) then you can validate with this command:
```bash
openssl verify -untrusted fullchain.pem fullchain.pem
```
> **NOTE** Here the "-untrusted" flag means that a list of further certificates in the chain to
> build up to the root is provided, but the system CA root is still consulted. Verification is NOT
> bypassed or allowed to be invalid.
If these verifications pass you can now use these certificates with Kanidm. To put the certificates
in place you can use a shell container that mounts the volume such as:
```bash
docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c "cp /work/* /data/"
```
OR for a shell into the volume:
```bash
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
```
@@ -1,25 +1,22 @@
# Recycle Bin
The recycle bin is a storage of deleted entries from the server. This allows recovery from mistakes
for a period of time.
{{#template\
templates/kani-warning.md imagepath=images title=Warning! text=The recycle bin is a best effort -
when recovering in some cases not everything can be "put back" the way it was. Be sure to check your
entries are valid once they have been revived. }}
## Where is the Recycle Bin?
The recycle bin is stored as part of your main database - it is included in all backups and
restores, just like any other data. It is also replicated between all servers.
## How do Things Get Into the Recycle Bin?
Any delete operation of an entry will cause it to be sent to the recycle bin. No configuration or
specification is required.
## How Long Do Items Stay in the Recycle Bin?
@@ -29,25 +26,32 @@ Currently they stay up to 1 week before they are removed.
You can display all items in the Recycle Bin with:
```bash
kanidm recycle-bin list --name admin
```
You can show a single item with:
```bash
kanidm recycle-bin get --name admin <uuid>
```
An entry can be revived with:
```bash
kanidm recycle-bin revive --name admin <uuid>
```
## Edge Cases
The recycle bin is a best effort to restore your data - there are some cases where the revived
entries may not be the same as they were when they were deleted. This generally revolves around
reference types such as group membership, or when the reference type includes supplemental map data
such as the oauth2 scope map type.
An example of this data loss is the following steps:
```
add user1
add group1
add user1 as member of group1
```

@@ -55,10 +59,12 @@

```
delete group1
revive user1
revive group1
```
In this series of steps, due to the way that referential integrity is implemented, the membership of
user1 in group1 would be lost in this process. To explain why:
```
add user1
add group1
add user1 as member of group1 // refint between the two established, and memberof added
```

@@ -66,9 +72,10 @@

```
delete group1 // user1 now removes memberof group1 from refint
revive user1 // re-add groups based on directmemberof (empty set)
revive group1 // no members
```
These issues could be looked at again in the future, but for now we think that deletes of groups
are rare - we expect the recycle bin to save you in "oops" moments, and in a majority of cases you
may delete a group or a user and then restore them. To handle this series of steps requires extra
code complexity in how we flag operations. For more, see
[this issue on GitHub](https://github.com/kanidm/kanidm/issues/177).
@@ -1,28 +1,28 @@
# Security Hardening
Kanidm ships with a secure-by-default configuration, however that is only as strong as the
environment that Kanidm operates in. This could be your container environment or your Unix-like
system.
This chapter will detail a number of warnings and security practices you should follow to ensure
that Kanidm operates in a secure environment.
The main server is a high-value target for a potential attack, as Kanidm serves as the authority on
identity and authorisation in a network. Compromise of the Kanidm server is equivalent to a
full-network takeover, also known as "game over".
The unixd resolver is also a high-value target as it can be accessed to allow unauthorised access
to a server, to intercept communications to the server, or more. This also must be protected
carefully.
For this reason, Kanidm's components must be protected carefully. Kanidm avoids many classic
attacks by being developed in a memory safe language, but risks still exist.
## Startup Warnings
At startup Kanidm will warn you if the environment it is running in is suspicious or has risks. For
example:
```bash
kanidmd server -c /tmp/server.toml
WARNING: permissions on /tmp/server.toml may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: /tmp/server.toml has 'everyone' permission bits in the mode. This could be a security risk ...
```

@@ -32,19 +32,20 @@

```bash
WARNING: permissions on ../insecure/key.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: ../insecure/key.pem has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: DB folder /tmp has 'everyone' permission bits in the mode. This could be a security risk ...
```
Each warning highlights an issue that may exist in your environment. It is not possible for us to
prescribe an exact configuration that may secure your system. This is why we only present possible
risks.
### Should be Read-only to Running UID
Files, such as configuration files, should be read-only to the UID of the Kanidm daemon. If an
attacker is able to gain code execution, they are then unable to modify the configuration to write,
or to over-write files in other locations, or to tamper with the system's configuration.
This can be prevented by changing the file's ownership to another user, or removing "write" bits
from the group.
### 'everyone' Permission Bits in the Mode
@@ -57,10 +58,11 @@ configuration, and removing "everyone" bits from the files in question.
### Owned by the Current UID, Which May Allow File Permission Changes
File permissions in UNIX systems are a discretionary access control system, which means the named
UID owner is able to further modify the access of a file regardless of the current settings. For
example:
```bash
[william@amethyst 12:25] /tmp > touch test
[william@amethyst 12:25] /tmp > ls -al test
-rw-r--r--  1 william  wheel  0 29 Jul 12:25 test
```

@@ -70,65 +72,73 @@

```bash
[william@amethyst 12:25] /tmp > chmod 644 test
[william@amethyst 12:26] /tmp > ls -al test
-rw-r--r--  1 william  wheel  0 29 Jul 12:25 test
```
Notice that even though the file was set to "read only" to william, and no permission to any other
users, user "william" can change the bits to add write permissions back or permissions for other
users.
This can be prevented by making the file owner a different UID than the running process for Kanidm.
### A Secure Example
Between these three issues it can be hard to see a possible strategy to secure files, however one
way exists - group read permissions. The most effective method to secure resources for Kanidm is to
set configurations to:
```bash
[william@amethyst 12:26] /etc/kanidm > ls -al server.toml
-r--r-----  1 root  kanidm  212 28 Jul 16:53 server.toml
```
The Kanidm server should be run as "kanidm:kanidm" with the appropriate user and user private group
created on your system. This applies to unixd configuration as well.
For the database your data folder should be:
```bash
[root@amethyst 12:38] /data/kanidm > ls -al .
total 1064
drwxrwx---  3 root    kanidm      96 29 Jul 12:38 .
-rw-r-----  1 kanidm  kanidm  544768 29 Jul 12:38 kanidm.db
```
This means 770 root:kanidm. This allows Kanidm to create new files in the folder, but prevents
Kanidm from being able to change the permissions of the folder. Because the folder does not have
"everyone" mode bits, the content of the database is secure because other users can not cd into or
read from the directory.
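As a runnable sketch of the mode half of that layout (on a scratch directory, since the real
`/data/kanidm` path and the `kanidm` group are deployment-specific, and `stat -c` assumes GNU
coreutils):

```shell
# Reproduce the 770 pattern: owner and group have full access, while the
# "everyone" bits are cleared so other users cannot cd into or read it.
d=$(mktemp -d)
chmod 770 "$d"
stat -c '%a' "$d"    # -> 770
# In a real deployment you would also set ownership, e.g.:
#   chown root:kanidm /data/kanidm
```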
Configurations for clients, such as /etc/kanidm/config, should be secured with read-only permissions
and owned by root:
```bash
[william@amethyst 12:26] /etc/kanidm > ls -al config
-r--r--r--  1 root  root  38 10 Jul 10:10 config
```
This file should be "everyone"-readable, which is why the bits are defined as such.
> NOTE: Why do you use 440 or 444 modes?
>
> A bug exists in the implementation of readonly() in rust that checks this as "does a write bit
> exist for any user" vs "can the current UID write the file?". This distinction is subtle but it
> affects the check. We don't believe this is a significant issue though, because setting these to
> 440 and 444 helps to prevent accidental changes by an administrator anyway.
## Running as Non-root in docker

The commands provided in this book will run kanidmd as "root" in the container to make the
onboarding smoother. However, this is not recommended in production for security reasons.

You should allocate unique UID and GID numbers for the service to run as on your host system. In
this example we use `1000:1000`.

You will need to adjust the permissions on the `/data` volume to ensure that the process can manage
the files. Kanidm requires the ability to write to the `/data` directory to create the sqlite files.
This UID/GID number should match the above. You could consider the following changes to help isolate
these changes:
```bash
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
mkdir /data/db/
chown 1000:1000 /data/db/
@@ -136,14 +146,15 @@ changes to help isolate these changes:
sed -i -e "s/db_path.*/db_path = \"\/data\/db\/kanidm.db\"/g" /data/server.toml
chown root:root /data/server.toml
chmod 644 /data/server.toml
```
Note that the example commands all run inside the docker container.

You can then use this to run the Kanidm server in docker with a user:
```bash
docker run --rm -i -t -u 1000:1000 -v kanidmd:/data kanidm/server:latest /sbin/kanidmd ...
```
> **HINT** You need to use the UID or GID number with the `-u` argument, as the container can't
> resolve usernames from the host system.
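Because the container can't resolve names, look up the numeric IDs on the host first and pass those
to `-u`. A sketch (shown with `root`, which exists on every system; substitute the service account
you allocated):

```bash
# Resolve a host account name to the numeric UID/GID that docker needs.
# "root" is used purely as an always-present example account.
id -u root
id -g root
```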
@@ -2,49 +2,56 @@
### Configuring server.toml

You need a configuration file in the volume named `server.toml`. (Within the container it should be
`/data/server.toml`) Its contents should be as follows:
```
{{#rustdoc_include ../../examples/server_container.toml}}
```
This example is located in
[examples/server_container.toml](https://github.com/kanidm/kanidm/blob/master/examples/server_container.toml).
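For orientation, a minimal `server.toml` might look like the sketch below. The field names follow
the bundled example file; the values are purely illustrative, and you should adapt the real
`examples/server_container.toml` rather than copy this verbatim:

```toml
# Illustrative values only -- adjust every field for your deployment.
bindaddress = "[::]:8443"
db_path = "/data/kanidm.db"
tls_chain = "/data/chain.pem"
tls_key = "/data/key.pem"
# domain MUST align with origin.
domain = "idm.example.com"
origin = "https://idm.example.com"
```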
{{#template templates/kani-warning.md imagepath=images title=Warning! text=You MUST set the `domain`
name correctly, aligned with your `origin`, else the server may refuse to start or some features
(e.g. webauthn, oauth) may not work correctly! }}
### Check the configuration is valid.

You should test your configuration is valid before you proceed.
```bash
docker run --rm -i -t -v kanidmd:/data \
  kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
```
### Default Admin Account

Then you can set up the initial admin account and initialise the database into your volume. This
command will generate a new random password for the admin account.
```bash
docker run --rm -i -t -v kanidmd:/data \
  kanidm/server:latest /sbin/kanidmd recover_account -c /data/server.toml admin
# success - recover_account password for user admin: vv...
```
### Run the Server

Now we can run the server so that it can accept connections. This defaults to using
`-c /data/server.toml`.
```bash
docker run -p 443:8443 -v kanidmd:/data kanidm/server:latest
```
### Using the NET\_BIND\_SERVICE capability

If you plan to run without using docker port mapping or some other reverse proxy, and your
bindaddress or ldapbindaddress port is less than `1024` you will need the `NET_BIND_SERVICE` in
docker to allow these port binds. You can add this with `--cap-add` in your docker run command.
```bash
docker run --cap-add NET_BIND_SERVICE --network [host OR macvlan OR ipvlan] \
  -v kanidmd:/data kanidm/server:latest
```
@@ -2,18 +2,22 @@
### Preserving the Previous Image

You may wish to preserve the previous image before updating. This is useful if an issue is
encountered in upgrades.
```bash
docker tag kanidm/server:latest kanidm/server:<DATE>
docker tag kanidm/server:latest kanidm/server:2022-10-24
```
### Update your Image

Pull the latest version of Kanidm that matches your CPU profile.
```bash
docker pull kanidm/server:latest
docker pull kanidm/server:x86_64_latest
```
### Perform a backup

@@ -21,42 +25,50 @@ See [backup and restore](backup_restore.md)
### Update your Instance

{{#template templates/kani-warning.md imagepath=images title=WARNING text=It is not always
guaranteed that downgrades are possible. It is critical you know how to backup and restore before
you proceed with this step. }}
Docker updates by deleting and recreating the instance. All data that needs to be preserved is in
your storage volume.
```bash
docker stop <previous instance name>
```
You can test that your configuration is correct, and the server should correctly start.
```bash
docker run --rm -i -t -v kanidmd:/data \
  kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
```
You can then follow through with the upgrade.
```bash
docker run -p PORTS -v kanidmd:/data \
  OTHER_CUSTOM_OPTIONS \
  kanidm/server:latest
```
Once you confirm the upgrade is successful you can delete the previous instance.
```bash
docker rm <previous instance name>
```
If you encounter an issue you can revert to the previous version.
```bash
docker stop <new instance name>
docker start <previous instance name>
```
If you deleted the previous instance, you can recreate it from your preserved tag instead.
```bash
docker run -p ports -v volumes kanidm/server:<DATE>
```
In some cases the downgrade to the previous instance may not work. If the server from your previous
version fails to start, you may need to restore from backup.
@@ -1,102 +1,119 @@
# SSH Key Distribution

To support SSH authentication securely to a large set of hosts running SSH, we support distribution
of SSH public keys via the Kanidm server. Both persons and service accounts support SSH public keys
on their accounts.

## Configuring Accounts

To view the current SSH public keys on accounts, you can use:
```bash
kanidm person|service-account ssh list_publickeys --name <login user> <account to view>
kanidm person|service-account ssh list_publickeys --name idm_admin william
```
All users by default can self-manage their SSH public keys. To upload a key, a command like this is
the best way to do so:
```bash
kanidm person|service-account ssh add_publickey --name william william 'test-key' "`cat ~/.ssh/id_rsa.pub`"
```
To remove (revoke) an SSH public key, delete them by the tag name:
```bash
kanidm person|service-account ssh delete_publickey --name william william 'test-key'
```
## Security Notes

As a security feature, Kanidm validates _all_ public keys to ensure they are valid SSH public keys.
Uploading a private key or other data will be rejected. For example:
```bash
kanidm person|service-account ssh add_publickey --name william william 'test-key' "invalid"
Enter password:
... Some(SchemaViolation(InvalidAttributeSyntax)))' ...
```
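If you want to catch this client-side before uploading, `ssh-keygen -l` performs a similar validity
check: it prints a fingerprint for a valid public key and fails for anything else. A sketch using a
throwaway keypair (assumes the OpenSSH client tools are installed):

```bash
# Generate a throwaway keypair, then verify the public half parses.
t="$(mktemp -d)"
if ssh-keygen -q -t ed25519 -N '' -f "$t/demo" 2>/dev/null; then
  ssh-keygen -l -f "$t/demo.pub" && echo "valid public key"
fi
rm -rf "$t"
```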
## Server Configuration

### Public Key Caching Configuration

If you have kanidm_unixd running, you can use it to locally cache SSH public keys. This means you
can still SSH into your machines, even if your network is down, you move away from Kanidm, or some
other interruption occurs.

The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so should be
installed on the servers. It communicates with kanidm_unixd, so you should have a configured
PAM/nsswitch setup as well.

You can test this is configured correctly by running:
```bash
kanidm_ssh_authorizedkeys <account name>
```
If the account has SSH public keys you should see them listed, one per line.

To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the
lines:
```
PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys %u
AuthorizedKeysCommandUser nobody
```
Restart sshd, and then attempt to authenticate with the keys.

It's highly recommended you keep your client configuration and sshd_config in a configuration
management tool such as Salt or Ansible.

> **NOTICE:** With a working SSH key setup, you should also consider adding the following
> sshd_config options as hardening.
```
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no
KerberosAuthentication no
```
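Before restarting sshd with hardening options, it's worth validating the edited file; `sshd -t`
parses the config and reports errors without affecting the running daemon (assumes the OpenSSH
server is installed, and reading the host keys may require root):

```bash
# Syntax-check sshd_config without touching the running daemon.
if sshd -t -f /etc/ssh/sshd_config 2>/dev/null; then
  echo "config ok"
else
  echo "config check failed - fix before restarting sshd"
fi
```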
### Direct Communication Configuration

In this mode, the authorised keys commands will contact Kanidm directly.

> **NOTICE:** As Kanidm is contacted directly there is no SSH public key cache. Any network outage
> or communication loss may prevent you accessing your systems. You should only use this version if
> you have a requirement for it.

The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so should be
installed on the servers.

To configure the tool, you should edit /etc/kanidm/config, as documented in
[clients](./client_tools.md).
You can test this is configured correctly by running:
```bash
kanidm_ssh_authorizedkeys_direct -D anonymous <account name>
```
If the account has SSH public keys you should see them listed, one per line.

To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the
lines:
```
PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u
AuthorizedKeysCommandUser nobody
```
Restart sshd, and then attempt to authenticate with the keys.
@@ -9,13 +9,13 @@ Kanidm to work with these, it is possible to synchronise data between these IDM
Currently Kanidm can consume (import) data from another IDM system. There are two major use cases
for this:

- Running Kanidm in parallel with another IDM system
- Migrating from an existing IDM to Kanidm

An incoming IDM data source is bound to Kanidm by a sync account. All synchronised entries will have
a reference to the sync account that they came from defined by their "sync parent uuid". While an
entry is owned by a sync account we refer to the sync account as having authority over the content
of that entry.

The sync process is driven by a sync tool. This tool extracts the current state of the sync from
Kanidm, requests the set of changes (differences) from the IDM source, and then submits these
@@ -23,45 +23,50 @@ changes to Kanidm. Kanidm will update and apply these changes and commit the new
success.

In the event of a conflict or data import error, Kanidm will halt and rollback the synchronisation
to the last good state. The sync tool should be reconfigured to exclude the conflicting entry or to
remap its properties to resolve the conflict. The operation can then be retried.
This process can continue long term to allow Kanidm to operate in parallel to another IDM system. If
this is for a migration however, the sync account can be finalised. This terminates the sync account
and removes the sync parent uuid from all synchronised entries, moving authority of the entry into
Kanidm.

Alternatively, the sync account can be terminated which removes all synchronised content that was
submitted.
## Creating a Sync Account

Creating a sync account requires administration permissions. By default this is available to members
of the "system\_admins" group, of which "admin" is a member by default.
```bash
kanidm system sync create <sync account name>
kanidm system sync create ipasync
```
Once the sync account is created you can then generate the sync token which identifies the sync
tool.
```bash
kanidm system sync generate-token <sync account name> <token label>
kanidm system sync generate-token ipasync mylabel
token: eyJhbGci...
```
{{#template\
../templates/kani-warning.md imagepath=../images title=Warning! text=The sync account token has a
high level of privilege, able to create new accounts and groups. It should be treated carefully as a
result! }}
If you need to revoke the token, you can do so with:
```bash
kanidm system sync destroy-token <sync account name>
kanidm system sync destroy-token ipasync
```
Destroying the token does NOT affect the state of the sync account and its synchronised entries.
Creating a new token and providing that to the sync tool will continue the sync process.

## Operating the Sync Tool
@@ -84,16 +89,15 @@ If you are performing a migration from an external IDM to Kanidm, when that migr
you can nominate that Kanidm now owns all of the imported data. This is achieved by finalising the
sync account.
{{#template ../templates/kani-warning.md imagepath=../images title=Warning! text=You can not undo
this operation. Once you have finalised an agreement, Kanidm owns all of the synchronised data, and
you can not resume synchronisation. }}
```bash
kanidm system sync finalise <sync account name>
kanidm system sync finalise ipasync
# Do you want to continue? This operation can NOT be undone. [y/N]
```
Once finalised, imported accounts can now be fully managed by Kanidm.

@@ -102,16 +106,14 @@ Once finalised, imported accounts can now be fully managed by Kanidm.

If you decide to cease importing accounts or need to remove all imported accounts from a sync
account, you can choose to terminate the agreement removing all data that was imported.
{{#template ../templates/kani-warning.md imagepath=../images title=Warning! text=You can not undo
this operation. Once you have terminated an agreement, Kanidm deletes all of the synchronised data,
and you can not resume synchronisation. }}
```bash
kanidm system sync terminate <sync account name>
kanidm system sync terminate ipasync
# Do you want to continue? This operation can NOT be undone. [y/N]
```
Once terminated all imported data will be deleted by Kanidm.
@@ -19,62 +19,75 @@ to understand how to connect to Kanidm.

The sync tool specific components are configured in its own configuration file.
```rust
{{#rustdoc_include ../../../examples/kanidm-ipa-sync}}
```

This example is located in
[examples/kanidm-ipa-sync](https://github.com/kanidm/kanidm/blob/master/examples/kanidm-ipa-sync).
In addition to this, you must make some configuration changes to FreeIPA to enable synchronisation.

You can find the name of your 389 Directory Server instance with:
```bash
dsconf --list
```
Using this you can show the current status of the retro changelog plugin to see if you need to
change its configuration.
```bash
dsconf <instance name> plugin retro-changelog show
dsconf slapd-DEV-KANIDM-COM plugin retro-changelog show
```
You must modify the retro changelog plugin to include the full scope of the database suffix so that
the sync tool can view the changes to the database. Currently dsconf can not modify the
include-suffix so you must do this manually.

You need to change the `nsslapd-include-suffix` to match your FreeIPA baseDN here. You can access
the basedn with:
```bash
ldapsearch -H ldaps://<IPA SERVER HOSTNAME/IP> -x -b '' -s base namingContexts
# namingContexts: dc=ipa,dc=dev,dc=kanidm,dc=com
```

You should ignore `cn=changelog` and `o=ipaca` as these are system internal namingContexts. You can
then create an ldapmodify like the following.

```rust
{{#rustdoc_include ../../../iam_migrations/freeipa/00config-mod.ldif}}
```

And apply it with:
```bash
ldapmodify -f change.ldif -H ldaps://<IPA SERVER HOSTNAME/IP> -x -D 'cn=Directory Manager' -W
# Enter LDAP Password:
```
You must then reboot your FreeIPA server.

## Running the Sync Tool Manually

You can perform a dry run with the sync tool manually to check your configurations are correct and
that the tool can synchronise from FreeIPA.
```bash
kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync -n
kanidm-ipa-sync -i /etc/kanidm/ipa-sync -n
```
## Running the Sync Tool Automatically

The sync tool can be run on a schedule if you configure the `schedule` parameter, and provide the
option "--schedule" on the cli.
```bash
kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync --schedule
```
## Monitoring the Sync Tool

@@ -85,10 +98,11 @@ You can configure a status listener that can be monitored via tcp with the param

An example of monitoring this with netcat is:
```bash
# status_bind = "[::1]:12345"
# nc ::1 12345
Ok
```
It's important to note that no details are revealed via the status socket; it is purely for the Ok
or Err status of the last sync.
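This makes the socket easy to script. A hypothetical monitoring wrapper, assuming the
`status_bind` address from the example above and a `netcat` that supports `-w` (adjust both for
your environment):

```bash
# Poll the sync status socket; anything other than "Ok" means the last
# sync failed or the listener is unreachable.
status="$(nc -w 2 ::1 12345 2>/dev/null || true)"
if [ "$status" = "Ok" ]; then
  echo "last sync ok"
else
  echo "last sync failed or status socket unreachable"
fi
```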
@@ -4,18 +4,21 @@ Some things to try.

## Is the server started?

If you don't see "ready to rock! 🪨" in your logs, it's not started. Scroll back and look for
errors!

## Can you connect?

If the server's running on `idm.example.com:8443` then a simple connectivity test is done using
[curl](https://curl.se).

Run the following command:
```shell ```shell
curl -k https://idm.example.com:8443/status curl -k https://idm.example.com:8443/status
``` ```
This is similar to what you *should* see: This is similar to what you _should_ see:
```shell
{{#rustdoc_include troubleshooting/curl_connection_test.txt}}
```

If you see something like this:

```
curl: (7) Failed to connect to idm.example.com port 8443 after 5 ms: Connection refused
```
Then either your DNS is wrong (it's pointing at 10.0.0.1) or you can't connect to the server for
some reason.

If you get errors about certificates, try adding `-k` to skip certificate verification checking and
just test connectivity:
```
curl -vk https://idm.example.com:8443
```
## Server things to check

- Has the config file got `bindaddress = "127.0.0.1:8443"`? Change it to
  `bindaddress = "[::]:8443"`, so it listens on all interfaces.
- Is there a firewall on the server?
- If you're running in docker, did you expose the port? (`-p 8443:8443`)
## Client things to check

Try running commands with `RUST_LOG=debug` to get more information:
```
RUST_LOG=debug kanidm login --name anonymous
```
# Why TLS?

You may have noticed that Kanidm requires you to configure TLS in your container. We are a
secure-by-design rather than secure-by-installation system, so TLS for all connections is
considered mandatory.
## What are Secure Cookies?

`secure-cookies` is a flag set in cookies that asks a client to transmit them back to the origin
site if and only if HTTPS is present in the URL.

Certificate authority (CA) verification is _not_ checked - you can use invalid, out of date
certificates, or even certificates where the `subjectAltName` does not match, but the client must
see https:// as the destination else it _will not_ send the cookies.
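As a language-neutral illustration of the flag itself (nothing Kanidm-specific; the cookie name and
value are made up), Python's standard library shows how `Secure` is attached to a cookie:

```python
from http.cookies import SimpleCookie

# Build a cookie and mark it Secure: a compliant client will only send it
# back over https:// origins. Name and value here are made up for illustration.
cookie = SimpleCookie()
cookie["session-id"] = "abc123"
cookie["session-id"]["secure"] = True
cookie["session-id"]["httponly"] = True

header = cookie.output()  # a "Set-Cookie: session-id=abc123; ..." header line
```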
## How Does That Affect Kanidm?

Kanidm's authentication system is a stepped challenge-response design, where you initially request
an "intent" to authenticate. Once you establish this intent, the server sets up a session-id in a
cookie, and informs the client of what authentication methods can proceed.

If you do NOT have an HTTPS URL, the cookie with the session-id is not transmitted. The server
detects this as an invalid-state request in the authentication design, and immediately breaks the
connection, because it appears insecure.

Simply put, we are trying to use settings like `secure_cookies` to add constraints to the server so
that you _must_ perform and adhere to best practices - such as having TLS present on your
communication channels.
# Mozilla Public License Version 2.0

1. Definitions

---

1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation
of, or owns Covered Software.

1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.

1.3. "Contribution" means Covered Software of a particular Contributor.

1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source
Code Form, in each case including portions thereof.

1.5. "Incompatible With Secondary Licenses" means

(a) that the initial Contributor has attached the notice described in Exhibit B to the Covered
Software; or

(b) that the Covered Software was made available under the terms of version 1.1 or earlier of the
License, but not also under the terms of a Secondary License.

1.6. "Executable Form" means any form of the work other than Source Code Form.

1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.

1.8. "License" means this document.

1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by this License.

1.10. "Modifications" means any of the following:

(a) any file in Source Code Form that results from an addition to, deletion from, or modification
of the contents of Covered Software; or

(b) any new file in Source Code Form that contains any Covered Software.

1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation,
method, process, and apparatus claims, in any patent Licensable by such Contributor that would be
infringed, but for the grant of the License, by the making, using, selling, offering for sale,
having made, import, or transfer of either its Contributions or its Contributor Version.

1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any
later versions of those licenses.

1.13. "Source Code Form" means the form of the work preferred for making modifications.

1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License.
For legal entities, "You" includes any entity that controls, is controlled by, or is under common
control with You. For purposes of this definition, "control" means (a) the power, direct or
indirect, to cause the direction or management of such entity, whether by contract or otherwise, or
(b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of
such entity.

2. License Grants and Conditions

---

2.1. Grants

Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:

(a) under intellectual property rights (other than patent or trademark) Licensable by such
Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise
exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a
Larger Work; and

(b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import,
and otherwise transfer either its Contributions or its Contributor Version.

2.2. Effective Date

The licenses granted in Section 2.1 with respect to any Contribution become effective for each
Contribution on the date the Contributor first distributes such Contribution.

2.3. Limitations on Grant Scope

The licenses granted in this Section 2 are the only rights granted under this License. No
additional rights or licenses will be implied from the distribution or licensing of Covered
Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by
a Contributor:

(a) for any code that a Contributor has removed from Covered Software; or

(b) for infringements caused by: (i) Your and any other third party's modifications of Covered
Software, or (ii) the combination of its Contributions with other software (except as part of its
Contributor Version); or

(c) under Patent Claims infringed by Covered Software in the absence of its Contributions.

This License does not grant any rights in the trademarks, service marks, or logos of any
Contributor (except as may be necessary to comply with the notice requirements in Section 3.4).

2.4. Subsequent Licenses

No Contributor makes additional grants as a result of Your choice to distribute the Covered
Software under a subsequent version of this License (see Section 10.2) or under the terms of a
Secondary License (if permitted under the terms of Section 3.3).

2.5. Representation

Each Contributor represents that the Contributor believes its Contributions are its original
creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this
License.

2.6. Fair Use

This License is not intended to limit any rights You have under applicable copyright doctrines of
fair use, fair dealing, or other equivalents.

2.7. Conditions

Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.

3. Responsibilities

---

3.1. Distribution of Source Form

All distribution of Covered Software in Source Code Form, including any Modifications that You
create or to which You contribute, must be under the terms of this License. You must inform
recipients that the Source Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not attempt to alter or restrict
the recipients' rights in the Source Code Form.

3.2. Distribution of Executable Form

If You distribute Covered Software in Executable Form then:

(a) such Covered Software must also be made available in Source Code Form, as described in Section
3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such
Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of
distribution to the recipient; and

(b) You may distribute such Executable Form under the terms of this License, or sublicense it under
different terms, provided that the license for the Executable Form does not attempt to limit or
alter the recipients' rights in the Source Code Form under this License.

3.3. Distribution of a Larger Work

You may create and distribute a Larger Work under terms of Your choice, provided that You also
comply with the requirements of this License for the Covered Software. If the Larger Work is a
combination of Covered Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this License permits You to
additionally distribute such Covered Software under the terms of such Secondary License(s), so that
the recipient of the Larger Work may, at their option, further distribute the Covered Software
under the terms of either this License or such Secondary License(s).

3.4. Notices

You may not remove or alter the substance of any license notices (including copyright notices,
patent notices, disclaimers of warranty, or limitations of liability) contained within the Source
Code Form of the Covered Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.

3.5. Application of Additional Terms

You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability
obligations to one or more recipients of Covered Software. However, You may do so only on Your own
behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree
to indemnify every Contributor for any liability incurred by such Contributor as a result of
warranty, support, indemnity or liability terms You offer. You may include additional disclaimers
of warranty and limitations of liability specific to any jurisdiction.

4. Inability to Comply Due to Statute or Regulation

---

If it is impossible for You to comply with any of the terms of this License with respect to some or
all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply
with the terms of this License to the maximum extent possible; and (b) describe the limitations and
the code they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the extent prohibited by
statute or regulation, such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.

5. Termination

---

5.1. The rights granted under this License will terminate automatically if You fail to comply with
any of its terms. However, if You become compliant, then the rights granted under this License from
a particular Contributor are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor
fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this
is the first time You have received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt of the notice.

5.2. If You initiate litigation against any entity by asserting a patent infringement claim
(excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a
Contributor Version directly or indirectly infringes any patent, then the rights granted to You by
any and all Contributors for the Covered Software under Section 2.1 of this License shall
terminate.

5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements
(excluding distributors and resellers) which have been validly granted by You or Your distributors
under this License prior to termination shall survive termination.

6. Disclaimer of Warranty

---

Covered Software is provided under this License on an "as is" basis, without warranty of any kind,
either expressed, implied, or statutory, including, without limitation, warranties that the Covered
Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The
entire risk as to the quality and performance of the Covered Software is with You. Should any
Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any
necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under this License except under
this disclaimer.

7. Limitation of Liability

---

Under no circumstances and under no legal theory, whether tort (including negligence), contract, or
otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be
liable to You for any direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of goodwill, work stoppage,
computer failure or malfunction, or any and all other commercial damages or losses, even if such
party shall have been informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such party's negligence to
the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion
or limitation of incidental or consequential damages, so this exclusion and limitation may not
apply to You.

8. Litigation

---

Any litigation relating to this License may be brought only in the courts of a jurisdiction where
the defendant maintains its principal place of business and such litigation shall be governed by
laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this
Section shall prevent a party's ability to bring cross-claims or counter-claims.

9. Miscellaneous

---

This License represents the complete agreement concerning the subject matter hereof. If any
provision of this License is held to be unenforceable, such provision shall be reformed only to the
extent necessary to make it enforceable. Any law or regulation which provides that the language of
a contract shall be construed against the drafter shall not be used to construe this License
against a Contributor.

10. Versions of the License

---

10.1. New Versions

Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than
the license steward has the right to modify or publish new versions of this License. Each version
will be given a distinguishing version number.

10.2. Effect of New Versions

You may distribute the Covered Software under the terms of the version of the License under which
You originally received the Covered Software, or under the terms of any subsequent version
published by the license steward.

10.3. Modified Versions

If you create software not governed by this License, and you want to create a new license for such
software, you may create and use a modified version of this License if you rename the license and
remove any references to the name of the license steward (except to note that such modified license
differs from this License).

10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses

If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the
terms of this version of the License, the notice described in Exhibit B of this License must be
attached.

## Exhibit A - Source Code Form License Notice

This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of
the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular If it is not possible or desirable to put the notice in a particular file, then You may include the
file, then You may include the notice in a location (such as a LICENSE notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be
file in a relevant directory) where a recipient would be likely to look likely to look for such a notice.
for such a notice.
You may add additional accurate notices of copyright ownership. You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice ## Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public
License, v. 2.0.
@@ -6,30 +6,31 @@

## About

Kanidm is a simple and secure identity management platform, which provides services to allow other
systems and applications to authenticate against. The project aims for the highest levels of
reliability, security and ease of use.

The goal of this project is to be a complete identity management provider, covering the broadest
possible set of requirements and integrations. You should not need any other components (like
Keycloak) when you use Kanidm. We want to create a project that will be suitable for everything from
personal home deployments, to the largest enterprise needs.

To achieve this we rely heavily on strict defaults, simple configuration, and self-healing
components.

The project is still growing and some areas are developing at a fast pace. The core of the server
however is reliable and we make every effort to ensure upgrades will always work.

Kanidm supports:

- Oauth2/OIDC Authentication provider for web SSO
- Read only LDAPS gateway
- Linux/Unix integration (with offline authentication)
- SSH key distribution to Linux/Unix systems
- RADIUS for network authentication
- Passkeys / Webauthn for secure cryptographic authentication
- A self service web ui
- Complete CLI tooling for administration

If you want to host your own centralised authentication service, then Kanidm is for you!
@@ -40,7 +41,8 @@ If you want to deploy Kanidm to see what it can do, you should read the Kanidm b

- [Kanidm book (Latest stable)](https://kanidm.github.io/kanidm/stable/)
- [Kanidm book (Latest commit)](https://kanidm.github.io/kanidm/master/)

We also publish
[support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md)
for what the project will support.

## Code of Conduct / Ethics

@@ -54,8 +56,8 @@ See our documentation on [rights and ethics]

## Getting in Contact / Questions

We have a [gitter community channel] where we can talk. Firstyear is also happy to answer questions
via email, which can be found on their github profile.

[gitter community channel]: https://gitter.im/kanidm/community
@@ -63,29 +65,29 @@ answer questions via email, which can be found on their github profile.

### LLDAP

[LLDAP](https://github.com/nitnelave/lldap) is a similar project aiming for a small and easy to
administer LDAP server with a web administration portal. Both projects use the
[Kanidm LDAP bindings](https://github.com/kanidm/ldap3), and have many similar ideas.

The primary benefit of Kanidm over LLDAP is that Kanidm offers a broader set of "built in" features
like Oauth2 and OIDC. To use these from LLDAP you need an external portal like Keycloak, where in
Kanidm they are "built in". However, a strength of LLDAP is that it offers "less", which may make it
easier to administer and deploy for you.

If Kanidm is too complex for your needs, you should check out LLDAP as a smaller alternative. If you
want a project which has a broader feature set out of the box, then Kanidm might be a better fit.

### 389-ds / OpenLDAP

Both 389-ds and OpenLDAP are generic LDAP servers. This means they only provide LDAP and you need to
bring your own IDM configuration on top.

If you need the highest levels of customisation possible from your LDAP deployment, then these are
probably better alternatives. If you want a service that is easier to set up and focused on IDM,
then Kanidm is a better choice.

Kanidm was originally inspired by many elements of both 389-ds and OpenLDAP. Already Kanidm is as
fast as (or faster than) 389-ds for performance and scaling.

### FreeIPA

@@ -101,15 +103,14 @@ Kanidm is probably for you.

## Developer Getting Started

If you want to develop on the server, there is a getting started [guide for developers]. IDM is a
diverse topic and we encourage contributions of many kinds in the project, from people of all
backgrounds.

[guide for developers]: https://kanidm.github.io/kanidm/master/DEVELOPER_README.html

## What does Kanidm mean?

The original project name was rsidm while it was a thought experiment. Now that it's growing and
developing, we gave it a better project name. Kani is Japanese for "crab". Rust's mascot is a crab.
IDM is the common industry term for identity management services.
@@ -1,14 +1,12 @@

# Developer Principles

As a piece of software that stores the identities of people, the project becomes bound to social and
political matters. The decisions we make have consequences on many people - many who never have the
chance to choose what software is used to store their identities (think employees in a business).

This means we have a responsibility to not only be aware of our impact on our direct users
(developers, system administrators, dev ops, security and more) but also the impact on indirect
consumers - many of whom are unlikely to be in a position to contact us to ask for changes and help.

## Ethics / Rights

@@ -18,58 +16,52 @@ If you have not already, please see our documentation on [rights and ethics]

## Humans First

We must at all times make decisions that put humans first. We must respect all cultures, languages,
and identities and how they are represented.

This may mean we make technical choices that are difficult or more complex, or different to "how
things have always been done". But we do this to ensure that all people can have their identities
stored how they choose.

For example, any user may change their name, display name and legal name at any time. Many
applications will break when this occurs because they use the name as a primary key. But this is the
fault of the application. Name changes must be allowed. Our job as technical experts is to allow
that to happen.

We will never put a burden on the user to correct for poor designs on our part. For example, locking
an account if it logs in from a different country unless the user logs in beforehand to indicate
where they are going. This makes the user responsible for a burden (changing the allowed login
country) when the real problem is preventing bruteforce attacks - which can be technically solved in
better ways that don't put administrative load on humans.

## Correct and Simple

As a piece of security sensitive software we must always put correctness first. All code must have
tests. All developers must be able to run all tests on their machine and environment of choice.

This means that the following must always work:

```bash
git clone ...
cargo test
```

If a test or change would require extra requirements, dependencies, or preconfiguration, then we can
no longer provide the above. Testing must be easy and accessible, else we won't do it, and that
leads to poor software quality.

The project must be simple. Anyone should be able to understand how it works and why those decisions
were made.

## Languages

The core server will (for now) always be written in Rust. This is due to the strong type guarantees
it gives, and how that can help raise the quality of our project.

## Over-Configuration

Configuration will be allowed, but only if it does not impact the statements above. Having
configuration is good, but allowing too much (IE a scripting engine for security rules) can give
deployments the ability to violate human first principles, which reflects badly on us.

All configuration items must be constrained to fit within our principles so that every Kanidm
deployment will always provide a positive experience to all people.
@@ -2,24 +2,24 @@

Kanidm is released on a 3 month (quarterly) basis.

- February 1st
- May 1st
- August 1st
- November 1st

Releases will be tagged and branched in git.

1.2.0 will be released as the first supported version once the project believes it is in a
maintainable long-term state, without requiring backward breaking changes. There is no current
estimated date for 1.2.0.
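The quarterly cadence above can be sketched as a small date helper. This is illustrative only, not
project tooling; the function name is hypothetical:

```python
from datetime import date

# Release months under the quarterly cadence: the 1st of Feb, May, Aug, Nov.
RELEASE_MONTHS = (2, 5, 8, 11)


def next_release(today: date) -> date:
    """Return the first scheduled release date strictly after `today`."""
    for month in RELEASE_MONTHS:
        candidate = date(today.year, month, 1)
        if candidate > today:
            return candidate
    # Past November 1st: roll over to February of the next year.
    return date(today.year + 1, 2, 1)
```

For example, `next_release(date(2023, 3, 15))` yields `date(2023, 5, 1)`.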
## Support

Releases during alpha will receive limited fixes once released. Specifically we will resolve:

- Moderate security issues and above
- Flaws leading to data loss or corruption
- Other quality fixes at the discretion of the project team

These will be backported to the latest stable branch only.
@@ -27,23 +27,25 @@ These will be backported to the latest stable branch only.

There are a number of "surfaces" that can be considered as "API" in Kanidm.

- JSON HTTP endpoints of kanidmd
- Unix domain socket API of the `kanidm_unixd` resolver
- LDAP interface of kanidm
- CLI interface of the kanidm admin command
- Many other interaction surfaces

During the Alpha, there is no guarantee that _any_ of these APIs, named here or not, will remain
stable. Only elements from "the same release" are guaranteed to work with each other.

Once an official release is made, only the JSON API and LDAP interface will be declared stable. The
unix domain socket API is internal and will never be "stable".

The CLI is _not_ an API and can change in the interest of human interaction during any release.
## Python module

The python module will typically trail changes in functionality of the core Rust code, and will be
developed as we use it for our own needs - please feel free to add functionality or improvements, or
[ask for them in a Github issue](http://github.com/kanidm/kanidm/issues/new/choose)!

All code changes will include full type-casting wherever possible.
@@ -1,74 +1,74 @@

## Pre-Reqs

```bash
cargo install cargo-audit
cargo install cargo-outdated
```

## Check List

### Start a release

- [ ] git checkout -b YYYYMMDD-release
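The `YYYYMMDD-release` branch name can be derived from the current date rather than typed by hand;
a minimal sketch (the variable name is arbitrary):

```shell
# Derive the release branch name from today's date, e.g. 20221226-release.
branch="$(date +%Y%m%d)-release"
echo "$branch"
# Then start the release with: git checkout -b "$branch"
```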
### Cargo Tasks

- [ ] cargo outdated -R
- [ ] cargo audit
- [ ] cargo test

### Code Changes

- [ ] upgrade crypto policy values if required
- [ ] bump index version in constants
- [ ] check for breaking db entry changes.

### Administration

- [ ] update version in ./kanidmd\_web\_ui/Cargo.toml
- [ ] update version in ./Cargo.toml
- [ ] cargo test
- [ ] build wasm components with release profile
- [ ] Update `RELEASE_NOTES.md`
- [ ] git commit
- [ ] git rebase -i HEAD~X
- [ ] git push origin YYYYMMDD-release
- [ ] Merge PR

### Git Management

- [ ] git checkout master
- [ ] git branch 1.1.0-alpha.x (Note no v to prevent ref conflict)
- [ ] git checkout 1.1.0-alpha.x
- [ ] git tag v1.1.0-alpha.x
- [ ] Final inspect of the branch
- [ ] git push origin 1.1.0-alpha.x
- [ ] git push origin 1.1.0-alpha.x --tags

### Cargo publish

- [ ] publish `kanidm_proto`
- [ ] publish `kanidmd/kanidm`
- [ ] publish `kanidm_client`
- [ ] publish `kanidm_tools`

### Docker

- [ ] docker buildx use cluster
- [ ] `make buildx/kanidmd/x86_64_v3 buildx/kanidmd buildx/radiusd`
- [ ] Update the readme on docker https://hub.docker.com/repository/docker/kanidm/server

### Distro

- [ ] vendor and release to build.opensuse.org

### Follow up

- [ ] git checkout master
- [ ] git pull
- [ ] git branch YYYYMMDD-dev-version
- [ ] update version in ./kanidmd\_web\_ui/Cargo.toml
- [ ] update version in ./Cargo.toml
- [ ] build wasm components with debug profile
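The two "update version" items above must stay in lockstep. A hypothetical helper, not part of the
repo, that compares the `version` fields of two Cargo.toml texts:

```python
import re


def crate_version(cargo_toml_text: str) -> str:
    """Extract the first `version = "..."` value from Cargo.toml text."""
    match = re.search(r'^version\s*=\s*"([^"]+)"', cargo_toml_text, re.MULTILINE)
    if match is None:
        raise ValueError("no version field found")
    return match.group(1)


def versions_match(a: str, b: str) -> bool:
    """True when both manifests declare the same package version."""
    return crate_version(a) == crate_version(b)
```

A release script could read `./Cargo.toml` and `./kanidmd_web_ui/Cargo.toml` and fail the checklist
step when `versions_match` is false.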
@@ -2,28 +2,32 @@

A Python module for interacting with Kanidm.

Currently in very very very early beta, please
[log an issue](https://github.com/kanidm/kanidm/issues/new/choose) for feature requests and bugs.
## Installation

```bash
python -m pip install kanidm
```

## Documentation

Documentation can be generated by [cloning the repository](https://github.com/kanidm/kanidm) and
running `make docs/pykanidm/build`. The documentation will appear in `./pykanidm/site`. You'll need
make and the [poetry](https://pypi.org/project/poetry/) package installed.
## Testing

Set up your dev environment using `poetry` - `python -m pip install poetry && poetry install`.
Pytest is used for testing. If you don't have a live server to test against and config set up, use
`poetry run pytest -m 'not network'`.
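The `-m 'not network'` filter above implies that server-dependent tests carry a `network` marker. A
minimal sketch of how such a test might be marked (the test name and body are hypothetical):

```python
import pytest


@pytest.mark.network
def test_live_server_reachable():
    # Would talk to a live Kanidm server; deselected by `pytest -m 'not network'`.
    assert True
```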
## Changelog

| Version | Date       | Notes                                                 |
| ------- | ---------- | ----------------------------------------------------- |
| 0.0.1   | 2022-08-16 | Initial release                                       |
| 0.0.2   | 2022-08-16 | Updated license, including test code in package       |
| 0.0.3   | 2022-08-17 | Updated test suite to allow skipping of network tests |
@@ -1,4 +1,3 @@

# kanidm.KanidmClient

::: kanidm.KanidmClient
@@ -1,4 +1,3 @@

# kanidm.types.KanidmClientConfig

::: kanidm.types.KanidmClientConfig
@@ -1,4 +1,3 @@

# kanidm.types.RadiusClient

::: kanidm.types.RadiusClient
@@ -1,3 +1 @@

::: kanidm.tokens