docs: reformat book and introduce workflow to ensure it stays formatted (#1286)

Jan Christoph Ebersbach 2022-12-26 23:52:03 +01:00 committed by GitHub
parent 6207c3ff51
commit fd8afa065f
72 changed files with 3624 additions and 3432 deletions

.editorconfig Normal file

@ -0,0 +1,10 @@
# Documentation: https://editorconfig.org/
root = true
[*.md]
charset = utf-8
end_of_line = lf
indent_size = 2
max_line_length = 100
trim_trailing_whitespace = true


@ -8,13 +8,17 @@ assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear description of what the problem is. Ex. I'm confused by, or would like to know how to...
**Describe the solution you'd like**
A description of what you'd expect to happen.
**Describe alternatives you've considered**
Are there any alternative solutions or features you've considered?
**Additional context**
Add any other context or screenshots about the feature request here.


@ -7,7 +7,7 @@ assignees: ''
---
<!--
Please see the security policy in SECURITY.md, located in the root of the repository.


@ -24,10 +24,19 @@ jobs:
libsqlite3-dev libudev-dev \
libpam0g-dev
- name: Setup deno
uses: denoland/setup-deno@v1 # Documentation: https://github.com/denoland/setup-deno
with:
deno-version: v1.x
- name: Test document formatting
run: |
make test/doc/format
- name: Setup mdBook
uses: peaceiris/actions-mdbook@v1
with:
mdbook-version: "latest"
- uses: actions-rs/toolchain@v1
with:
@ -46,7 +55,7 @@ jobs:
- name: Install python 3.10
uses: actions/setup-python@v4
with:
python-version: "3.10"
- name: pykanidm docs
run: |
python -m pip install poetry


@ -1,6 +1,10 @@
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers
pledge to making participation in our project and our community a harassment-free experience for
everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity
and expression, level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
@ -17,29 +21,44 @@ Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit
  permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are
expected to take appropriate and fair corrective action in response to any instances of unacceptable
behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits,
code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or
to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is
representing the project or its community. Examples of representing a project or community include
using an official project e-mail address, posting via an official social media account, or acting as
an appointed representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting
the project team at:
- william at blackhats.net.au
- charcol at redhat.com
All complaints will be reviewed and investigated and will result in a response that is deemed
necessary and appropriate to the circumstances. The project team is obligated to maintain
confidentiality with regard to the reporter of an incident. Further details of specific enforcement
policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face
temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at
https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

FAQ.md

@ -1,35 +1,32 @@
## Frequently Asked Questions
This is a list of common questions that are generally raised by developers or technical users.
## Why don't you use library/project X?
A critical aspect of kanidm is the ability to test it. Generally, requests to add libraries or
projects come in different forms, so I'll answer a few of them:
## Is the library in Rust?
If it's not in Rust, it's not eligible for inclusion. There is a single exception today (rlm
python) but it's very likely this will also be removed in the future. Keeping a single language
helps with testing, but also makes the project more accessible and consistent to developers.
Additionally, features exist in Rust that help to improve quality of the project from development to
production.
## Is the project going to create a microservice-like architecture?
If the project (such as an external OAuth/OIDC gateway, or a different DB layer) would be used in a
tight-knit manner to Kanidm then it is no longer a microservice, but a monolith with multiple moving
parts. This creates production fragility and issues such as:
- Differences and difficulties in correlating log events
- Design choices of the project not being compatible with Kanidm's model
- Extra requirements for testing/production configuration
This last point is key. It is a critical part of kanidm that the following must work on all
machines, and run every single test in the suite.
```
git clone https://github.com/kanidm/kanidm.git
@ -37,80 +34,83 @@
cd kanidm
cargo test
```
Not only this, but it's very important for quality that running `cargo test` truly tests the entire
stack of the application - from the database, all the way to the client utilities and other daemons
communicating to a real server. Many developer choices have already been made to ensure that testing
is the most important aspect of the project to ensure that every feature is high quality and
reliable.
Addition of extra projects or dependencies would violate this principle and lead to a situation
where it would not be possible to effectively test for all developers.
## Why don't you use Raft/Etcd/MongoDB/Other to solve replication?
There are a number of reasons why these are generally not compatible. Generally these databases or
technologies do solve problems, but they are not the problems in Kanidm.
## CAP theorem
CAP theorem states that in a database you must choose only two of the three possible elements:
- Consistency - All servers in a topology see the same data at all times
- Availability - All servers in a topology can accept write operations at all times
- Partitioning - In the case of a network separation in the topology, all systems can continue to
  process read operations
Many protocols like Raft or Etcd are databases that provide PC guarantees. They guarantee that they
are always consistent, and can always be read in the face of partitioning, but to accept a write,
they must not be experiencing a partitioning event. Generally this is achieved by the fact that
these systems elect a single node to process all operations, and then re-elect a new node in the
case of partitioning events. The elections will fail if a quorum is not met, disallowing writes
throughout the topology.
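To make the quorum rule concrete, here is a minimal illustrative sketch in Rust (the project's
language). This is not code from Raft, Etcd, or Kanidm, just the arithmetic of a majority quorum:

```
/// Illustrative only: the strict-majority quorum rule used by Raft-style systems.
fn has_write_quorum(reachable_nodes: usize, total_nodes: usize) -> bool {
    // A strict majority of all voting nodes must be reachable;
    // an even split is not enough to elect a leader.
    reachable_nodes * 2 > total_nodes
}

fn main() {
    assert!(has_write_quorum(3, 5)); // majority partition: writes allowed
    assert!(!has_write_quorum(2, 5)); // minority partition: writes refused
}
```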
This doesn't work for authentication systems and global scale databases. As you introduce
non-negligible network latency, the processing of write operations will decrease in these systems.
This is why Google's Spanner is a PA system.
PA systems are also considered to be "eventually consistent". All nodes can provide reads and writes
at all times, but during a network partitioning or after a write there is a delay for all nodes to
arrive at a consistent database state. A key element is that the nodes perform a consistency
operation that uses application-aware rules to allow all servers to arrive at the same state
_without_ communication between the nodes.
## Update Resolution
Many databases do exist that are PA, such as CouchDB or MongoDB. However, they often do not have the
update-resolution properties that Kanidm requires.
An example of this is that CouchDB uses object-level resolution. This means that if two servers
update the same entry the "latest write wins". An example of where this won't work for Kanidm is if
one server locks the account as an admin is revoking the access of an account, but another server
updates the username. If the username update happened second, the lock event would be lost, creating
a security risk. There are certainly cases where this resolution method is valid, but Kanidm is not
one.
Another example is MongoDB. While it does attribute-level resolution, it does this without the
application awareness of Kanidm. For example, in Kanidm if we have an account lock based on time, we
can select the latest time value to overwrite the earlier one, or we could have a counter that can
correctly increment/advance between the servers. However, Mongo is not aware of these rules, and it
would not be able to give the experience we desire. Mongo is a very good database, it's just not the
right choice for Kanidm.
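To illustrate what attribute-level, application-aware resolution means, here is a hypothetical Rust
sketch. The `Entry`, `Value`, and csn types are invented for this example and are not Kanidm's real
replication code; the point is that each attribute resolves independently, and a security-sensitive
attribute such as an account lock uses a domain rule rather than "latest write wins":

```
use std::collections::BTreeMap;

/// Hypothetical attribute values, invented for this example.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Text(String),
    LockedUntil(u64), // unix timestamp when an account lock expires
}

/// Each attribute stores the change sequence number (csn) of its last write.
type Entry = BTreeMap<&'static str, (u64, Value)>;

/// Merge two replicas' copies of one entry without any node-to-node negotiation.
fn merge(a: &Entry, b: &Entry) -> Entry {
    let mut out = Entry::new();
    for key in a.keys().chain(b.keys()) {
        let winner = match (a.get(key), b.get(key)) {
            (Some(x), None) => x.clone(),
            (None, Some(y)) => y.clone(),
            (None, None) => unreachable!(),
            (Some(x), Some(y)) => match (&x.1, &y.1) {
                // Domain rule: the longer lock always survives, so a concurrent
                // write can never erase an admin's revocation.
                (Value::LockedUntil(lx), Value::LockedUntil(ly)) => {
                    if lx >= ly { x.clone() } else { y.clone() }
                }
                // Default rule: per-attribute latest write wins.
                _ => {
                    if x.0 >= y.0 { x.clone() } else { y.clone() }
                }
            },
        };
        out.insert(*key, winner);
    }
    out
}

fn main() {
    let mut server_a = Entry::new();
    server_a.insert("account_locked_until", (10, Value::LockedUntil(9999)));
    let mut server_b = Entry::new();
    server_b.insert("account_locked_until", (11, Value::LockedUntil(0)));
    server_b.insert("name", (11, Value::Text("newname".into())));
    let merged = merge(&server_a, &server_b);
    // The later (csn 11) unlock does NOT win; the lock is preserved.
    assert_eq!(merged["account_locked_until"], (10, Value::LockedUntil(9999)));
    assert_eq!(merged["name"], (11, Value::Text("newname".into())));
}
```

The lock rule here is exactly what a generic, application-unaware resolver cannot express: it
requires knowing that locks must never be shortened by a concurrent, unrelated write.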
Additionally, it's worth noting that most of these other databases would violate the previous
desires to keep the language as Rust, and may require external configuration or daemons which may
not be possible to test.
## How PAM/nsswitch Work
Linux and BSD clients can resolve identities from Kanidm into accounts via PAM and nsswitch.
Name Service Switch (NSS) is used for connecting computers to different data sources to resolve
name-service information. By adding the nsswitch libraries to /etc/nsswitch.conf, we are telling NSS
to look up password info and group identities in Kanidm:
```
passwd: compat kanidm
group: compat kanidm
```
When a service like sudo, sshd, su etc. wants to authenticate someone, it opens the pam.d config of
that service, then performs authentication according to the modules defined in the pam.d config. For
example, if you run `ls -al /etc/pam.d /usr/etc/pam.d` in SUSE, you can see the services and their
respective pam.d config.
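For a concrete picture, a pam.d service stack that includes Kanidm might look like the fragment
below. This is an illustrative sketch only: module order and options are distribution-specific, and
you should follow the Kanidm book for supported configurations.

```
# Hypothetical example - not a drop-in config for any distribution.
auth        sufficient    pam_unix.so try_first_pass
auth        sufficient    pam_kanidm.so ignore_unknown_user
auth        required      pam_deny.so
account     sufficient    pam_kanidm.so ignore_unknown_user
account     required      pam_unix.so
```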


@ -1,28 +1,22 @@
# Mozilla Public License Version 2.0
1. Definitions
---
1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation
of, or owns Covered Software.
1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution" means Covered Software of a particular Contributor.
1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source
Code Form, in each case including portions thereof.
1.5. "Incompatible With Secondary Licenses" means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
@ -31,23 +25,17 @@ Mozilla Public License Version 2.0
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.6. "Executable Form" means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. "License"
means this document.
1.8. "License" means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
1.10. "Modifications" means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
@ -56,319 +44,284 @@ Mozilla Public License Version 2.0
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation,
method, process, and apparatus claims, in any patent Licensable by such Contributor that would be
infringed, but for the grant of the License, by the making, using, selling, offering for sale,
having made, import, or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any
later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.13. "Source Code Form" means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License.
For legal entities, "You" includes any entity that controls, is controlled by, or is under common
control with You. For purposes of this definition, "control" means (a) the power, direct or
indirect, to cause the direction or management of such entity, whether by contract or otherwise, or
(b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of
such entity.
2. License Grants and Conditions
---
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
(a) under intellectual property rights (other than patent or trademark) Licensable by such
Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise
exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger
Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import,
and otherwise transfer either its Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become effective for each
Contribution on the date the Contributor first distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this License. No additional
rights or licenses will be implied from the distribution or licensing of Covered Software under this
License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:
(a) for any code that a Contributor has removed from Covered Software; or
(b) for infringements caused by: (i) Your and any other third party's modifications of Covered
Software, or (ii) the combination of its Contributions with other software (except as part of its
Contributor Version); or
(c) under Patent Claims infringed by Covered Software in the absence of its Contributions.
This License does not grant any rights in the trademarks, service marks, or logos of any Contributor
(except as may be necessary to comply with the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to distribute the Covered Software
under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary
License (if permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions are its original
creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this
License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable copyright doctrines of
fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.
3. Responsibilities
---
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any Modifications that You
create or to which You contribute, must be under the terms of this License. You must inform
recipients that the Source Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not attempt to alter or restrict
the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code Form, as described in Section
3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source
Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution
to the recipient; and
(b) You may distribute such Executable Form under the terms of this License, or sublicense it under
different terms, provided that the license for the Executable Form does not attempt to limit or
alter the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice, provided that You also
comply with the requirements of this License for the Covered Software. If the Larger Work is a
combination of Covered Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this License permits You to
additionally distribute such Covered Software under the terms of such Secondary License(s), so that
the recipient of the Larger Work may, at their option, further distribute the Covered Software under
the terms of either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including copyright notices,
patent notices, disclaimers of warranty, or limitations of liability) contained within the Source
Code Form of the Covered Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability
obligations to one or more recipients of Covered Software. However, You may do so only on Your own
behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree
to indemnify every Contributor for any liability incurred by such Contributor as a result of
warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of
warranty and limitations of liability specific to any jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---
If it is impossible for You to comply with any of the terms of this License with respect to some or
all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply
with the terms of this License to the maximum extent possible; and (b) describe the limitations and
the code they affect. Such description must be placed in a text file included with all distributions
of the Covered Software under this License. Except to the extent prohibited by statute or
regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be
able to understand it.
5. Termination
---
5.1. The rights granted under this License will terminate automatically if You fail to comply with
any of its terms. However, if You become compliant, then the rights granted under this License from
a particular Contributor are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor
fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this
is the first time You have received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent infringement claim
(excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a
Contributor Version directly or indirectly infringes any patent, then the rights granted to You by
any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements
(excluding distributors and resellers) which have been validly granted by You or Your distributors
under this License prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
---
Any litigation relating to this License may be brought only in the courts of a jurisdiction where
the defendant maintains its principal place of business and such litigation shall be governed by
laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this
Section shall prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
---
This License represents the complete agreement concerning the subject matter hereof. If any
provision of this License is held to be unenforceable, such provision shall be reformed only to the
extent necessary to make it enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe this License against a
Contributor.
10. Versions of the License
---
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the
license steward has the right to modify or publish new versions of this License. Each version will
be given a distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of the License under which
You originally received the Covered Software, or under the terms of any subsequent version published
by the license steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to create a new license for such
software, you may create and use a modified version of this License if you rename the license and
remove any references to the name of the license steward (except to note that such modified license
differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the
terms of this version of the License, the notice described in Exhibit B of this License must be
attached.
## Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of
the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then You may include the
notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be
likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
## Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public
License, v. 2.0.


@ -3,6 +3,7 @@ IMAGE_VERSION ?= devel
CONTAINER_TOOL_ARGS ?=
IMAGE_ARCH ?= "linux/amd64,linux/arm64"
CONTAINER_BUILD_ARGS ?=
MARKDOWN_FORMAT_ARGS ?= --options-line-width=100
# Example of using redis with sccache
# --build-arg "SCCACHE_REDIS=redis://redis.dev.blackhats.net.au:6379"
CONTAINER_TOOL ?= docker
@ -139,6 +140,10 @@ test/pykanidm/mypy:
test/pykanidm: ## run the test suite (mypy/pylint/pytest) for the kanidm python module
test/pykanidm: test/pykanidm/pytest test/pykanidm/mypy test/pykanidm/pylint
.PHONY: test/doc/format
test/doc/format: ## Check formatting of docs and the Kanidm book
find . -type f -name \*.md -exec deno fmt --check $(MARKDOWN_FORMAT_ARGS) "{}" +
########################################################################
.PHONY: doc
@ -146,6 +151,10 @@ doc: ## Build the rust documentation locally
doc:
cargo doc --document-private-items
.PHONY: doc/format
doc/format: ## Format docs and the Kanidm book
find . -type f -name \*.md -exec deno fmt $(MARKDOWN_FORMAT_ARGS) "{}" +
.PHONY: book
book: ## Build the Kanidm book
book:
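Assuming `deno` is installed locally, the same check the CI workflow runs can be reproduced with
these targets. The single-file invocation below is just an example of what the
`find ... -exec deno fmt` recipe expands to:

```
# check formatting the same way CI does
make test/doc/format
# rewrite the markdown files in place
make doc/format
# or format one file directly
deno fmt --options-line-width=100 README.md
```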


@ -6,30 +6,31 @@
## About
Kanidm is a simple and secure identity management platform, which provides services to allow other
systems and applications to authenticate against. The project aims for the highest levels of
reliability, security and ease of use.
The goal of this project is to be a complete identity management provider, covering the broadest
possible set of requirements and integrations. You should not need any other components (like
Keycloak) when you use Kanidm. We want to create a project that will be suitable for everything from
personal home deployments, to the largest enterprise needs.
To achieve this we rely heavily on strict defaults, simple configuration, and self-healing
components.
The project is still growing and some areas are developing at a fast pace. The core of the server
however, is reliable and we make every effort to ensure upgrades will always work.
Kanidm supports:
- Oauth2/OIDC Authentication provider for web SSO
- Read only LDAPS gateway
- Linux/Unix integration (with offline authentication)
- SSH key distribution to Linux/Unix systems
- RADIUS for network authentication
- Passkeys / Webauthn for secure cryptographic authentication
- A self service web ui
- Complete CLI tooling for administration
If you want to host your own centralised authentication service, then Kanidm is for you!
@ -40,7 +41,8 @@ If you want to deploy Kanidm to see what it can do, you should read the Kanidm b
- [Kanidm book (Latest stable)](https://kanidm.github.io/kanidm/stable/)
- [Kanidm book (Latest commit)](https://kanidm.github.io/kanidm/master/)
We also publish
[support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md)
for what the project will support.
## Code of Conduct / Ethics
@ -54,8 +56,8 @@ See our documentation on [rights and ethics]
## Getting in Contact / Questions
We have a [gitter community channel] where we can talk. Firstyear is also happy to answer questions
via email, which can be found on their github profile.
[gitter community channel]: https://gitter.im/kanidm/community
@ -63,29 +65,29 @@ answer questions via email, which can be found on their github profile.
### LLDAP
[LLDAP](https://github.com/nitnelave/lldap) is a similar project aiming for a small and easy to
administer LDAP server with a web administration portal. Both projects use the
[Kanidm LDAP bindings](https://github.com/kanidm/ldap3), and have many similar ideas.
The primary benefit of Kanidm over LLDAP is that Kanidm offers a broader set of "built in" features
like Oauth2 and OIDC. To use these from LLDAP you need an external portal like Keycloak, where in
Kanidm they are "built in". However, that is also a strength of LLDAP: it offers "less", which may
make it easier to administer and deploy for you.
If Kanidm is too complex for your needs, you should check out LLDAP as a smaller alternative. If you
want a project which has a broader feature set out of the box, then Kanidm might be a better fit.
### 389-ds / OpenLDAP
Both 389-ds and OpenLDAP are generic LDAP servers. This means they only provide LDAP and you need to
bring your own IDM configuration on top.
If you need the highest levels of customisation possible from your LDAP deployment, then these are
probably better alternatives. If you want a service that is easier to set up and focused on IDM, then
Kanidm is a better choice.
Kanidm was originally inspired by many elements of both 389-ds and OpenLDAP. Already Kanidm is as
fast as (or faster than) 389-ds for performance and scaling.
### FreeIPA
@ -101,15 +103,14 @@ Kanidm is probably for you.
## Developer Getting Started
If you want to develop on the server, there is a getting started [guide for developers]. IDM is a
diverse topic and we encourage contributions of many kinds in the project, from people of all
backgrounds.
[guide for developers]: https://kanidm.github.io/kanidm/master/DEVELOPER_README.html
## What does Kanidm mean?
The original project name was rsidm while it was a thought experiment. Now that it's growing and
developing, we gave it a better project name. Kani is Japanese for "crab". Rust's mascot is a crab.
IDM is the common industry term for identity management services.


@ -1,4 +1,3 @@
<p align="center">
<img src="https://raw.githubusercontent.com/kanidm/kanidm/master/artwork/logo-small.png" width="20%" height="auto" />
</p>
@ -9,247 +8,244 @@ To get started, see the [kanidm book]
# Feedback
We value your feedback! First, please see our [code of conduct]. If you have questions please join
our [gitter community channel] so that we can help. If you find a bug or issue, we'd love you to
report it to our [issue tracker].
# Release Notes
## 2022-11-01 - Kanidm 1.1.0-alpha10
This is the tenth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
The project is shaping up very nicely, and a beta will be coming soon!
### Upgrade Note!
This version will _require_ TLS on all servers, even if behind a load balancer or TLS terminating
proxy. You should be ready for this change when you upgrade to the latest version.
### Release Highlights
- Management and tracking of authenticated sessions
- Make upgrade migrations more robust when upgrading over multiple versions
- Add support for service account tokens via ldap for extended read permissions
- Unix password management in web ui for posix accounts
- Support internal dynamic group entries
- Allow selection of name/spn in oidc claims
- Admin UI wireframes and basic elements
- TLS enforced as a requirement for all servers
- Support API service account tokens
- Make name rules stricter due to issues found in production
- Improve Oauth2 PKCE testing
- Add support for new password import hashes
- Allow configuration of trusting x forward for headers
- Components for account permission elevation modes
- Make pam\_unix more robust in high latency environments
- Add proc macros for test cases
- Improve authentication requests with cookie/token separation
- Cleanup of expired authentication sessions
- Improved administration of password badlists
## 2022-08-02 - Kanidm 1.1.0-alpha9
This is the ninth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
The project is shaping up very nicely, and a beta will be coming soon!
### Release Highlights
- Inclusion of a Python3 API library
- Improve orca usability
- Improved content security hashes of js/wasm elements
- Performance improvements in builds
- Windows development and service support
- WebUI polish and improvements
- Consent is remembered in oauth2 improving access flows
- Replication changelog foundations
- Compression middleware for static assets to reduce load times
- User onboarding now possible with self service credential reset
- TOTP and Webauthn/Passkey support in self service credential reset
- CTAP2+ support in Webauthn via CLI
- Radius supports EAP TLS identities in addition to EAP PEAP
## 2022-05-01 - Kanidm 1.1.0-alpha8
This is the eighth alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
- Foundations for cryptographic trusted device authentication
- Foundations for new user onboarding and credential reset
- Improve acis for administration of radius secrets
- Simplify initial server setup related to domain naming
- Improve authentication performance during high load
- Developer documentation improvements
- Resolve issues with client tool outputs not being displayed
- Show more errors on api failures
- Extend the features of account person set
- Link pam with pkg-config allowing more portable builds
- Allow self-service email addresses to be delegated
- Highlight that the WebUI is in alpha to prevent confusion
- Remove sync only client paths
## 2022-01-01 - Kanidm 1.1.0-alpha7
This is the seventh alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
- Oauth2 scope to group mappings
- Webauthn subdomain support
- Oauth2 rfc7662 token introspection
- Basic OpenID Connect support
- Improve performance of domain rename
- Refactor of entry value internals to improve performance
- Addition of email address attributes
- Web UI improvements for Oauth2
## 2021-10-01 - Kanidm 1.1.0-alpha6
This is the sixth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
It's also a special release as Kanidm has just turned 3 years old! Thank you all for helping to
bring the project this far! 🎉 🦀
### Release Highlights
- Support backup codes as MFA in case of lost TOTP/Webauthn
- Dynamic menus on CLI for usernames when multiple sessions exist
- Dynamic menus on CLI for auth factors when choices exist
- Better handle missing resources for web ui elements at server startup
- Add WAL checkpointing to improve disk usage
- Oauth2 user interface flows for simple authorisation scenarios
- Improve entry memory usage based on valueset rewrite
- Allow online backups to be scheduled and taken
- Reliability improvements for unixd components with missing sockets
- Error message improvements for humans
- Improve client address logging for auditing
- Add strict HTTP resource headers for incoming/outgoing requests
- Replace rustls with openssl for HTTPS endpoint
- Remove auditscope in favour of the new tracing logging subsystem
- Reduce server memory usage with entry tracking improvements
- Improvements to performance with high cache sizes
- Session tokens persist over a session restart
## 2021-07-07 - Kanidm 1.1.0-alpha5
This is the fifth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
### Release Highlights
- Fix a major defect in how backup/restore worked
- Improve query performance by caching partial queries
- Clarity of error messages and user communication
- Password badlist caching
- Orca, a kanidm and ldap load testing system
- TOTP usability improvements
- Oauth2 foundations
- CLI tool session management improvements
- Default shell falls back if the requested shell is not found
- Optional backup codes in case of lost MFA device
- Statistical analysis of indexes to improve query optimisation
- Handle broken TOTP authenticator apps
## 2021-04-01 - Kanidm 1.1.0-alpha4
This is the fourth alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
- Performance Improvements
- TOTP CLI enrollment
- Jemalloc in main server instead of system allocator
- Command line completion
- TLS file handling improvements
- Webauthn authentication and enrollment on CLI
- Add db vacuum task
- Unix tasks daemon that automatically creates home directories
- Support for sk-ecdsa public ssh keys
- Badlist checked at login to determine account compromise
- Minor Fixes for attribute display
## 2021-01-01 - Kanidm 1.1.0-alpha3
This is the third alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
### Release Highlights
- Account "valid from" and "expiry" times.
- Rate limiting and softlocking of account credentials to prevent bruteforcing.
- Foundations of webauthn and multiple credential support.
- Rewrite of json authentication protocol components.
- Unixd will cache "non-existent" items to improve nss/pam latency.
## 2020-10-01 - Kanidm 1.1.0-alpha2
This is the second alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
- SIMD key lookups in container builds for datastructures
- Server and Client hardening warnings for running users and file permissions
- Search limits and denial of unindexed searches to prevent denial-of-service
- Dynamic Rounds for PBKDF2 based on CPU performance
- Radius module upgraded to python 3
- On-login PW upgrade, allowing weaker hashes to be re-computed to stronger variants on login.
- Replace actix with tide and async
- Reduction in memory footprint during searches
- Change authentication from cookies to auth-bearer tokens
## 2020-07-01 - Kanidm 1.1.0-alpha1
This is the first alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
It would not be possible to create a project like this without the contributions and help of many
people. I would especially like to thank:
- Pando85
- Alberto Planas (aplanas)
- Jake (slipperyBishop)
- Charelle (Charcol)
- Leigh (excitedleigh)
- Jamie (JJJollyjim)
- Triss Healy (NiryaAestus)
- Samuel Cabrero (scabrero)
- Jim McDonough
### Release Highlights
- A working identity management server, including database
- RADIUS authentication and docker images
- Pam and Nsswitch resolvers for Linux/Unix authentication
- SSH public key distribution
- LDAP server front end for legacy applications
- Password badlisting and quality checking
- Memberof and reverse group management with referential integrity
- Recycle Bin
- Performance analysis tools
[issue tracker]: https://github.com/kanidm/kanidm/issues
[gitter community channel]: https://gitter.im/kanidm/community
[code of conduct]: https://github.com/kanidm/kanidm/blob/master/CODE_OF_CONDUCT.md
[kanidm book]: https://kanidm.github.io/kanidm/stable/
@ -1,18 +1,21 @@
# Security Policy
Thanks for taking the time to engage with the project! We believe in the concept of
[coordinated disclosure](https://en.wikipedia.org/wiki/Coordinated_vulnerability_disclosure) and
currently expect a 60 day grace period for resolution of any outstanding issues.
## Supported Versions
As Kanidm is in early Alpha and under heavy development, only the most recent release and current
HEAD of the main branch will be supported.
## Reporting a Vulnerability
Please log a security-template issue on Github and directly contact one of the core team members via
email:
- [William](mailto:william@blackhats.net.au)
- [James](mailto:james+kanidm@terminaloutcomes.com)
We will endeavour to respond to your request as soon as possible. No bounties are available, but
acknowledgement will be made if you like.
@ -1,6 +1,4 @@
## About these artworks
The original artworks were commissioned and produced by Jesse Irwin (tw: @wizardfortress).
@ -9,4 +7,3 @@ The christmas logo was donated and produced by @ateneatla ( https://github.com/a
They are all very much appreciated!
All artworks are licensed as CC-BY-NC-ND.
@ -1,161 +1,143 @@
## Architectural Overview
Kanidm has a number of components and layers that make it up. As this project is continually
evolving, if you have questions or notice discrepancies with this document please contact William
(Firstyear) at any time.
## Tools
Kanidm Tools are a set of command line clients that are intended to help administrators deploy,
interact with, and support a Kanidm server installation. These tools may also be used for servers or
machines to authenticate and identify users. This is the "human interaction" part of the server from
a CLI perspective.
## Clients
The `kanidm` client is a reference implementation of the client library, that others may consume or
interact with to communicate with a Kanidm server instance. The tools above use this client library
for all of its actions. This library is intended to encapsulate some high level logic as an
abstraction over the REST API.
## Proto
The `kanidm` proto is a set of structures that are used by the REST and raw APIs for HTTP
communication. These are intended to be a reference implementation of the on-the-wire protocol, but
importantly these are also how the server represents its communication. This makes this the
authoritative source of protocol layouts with regard to REST or raw communication.
## Kanidmd (main server)
Kanidmd is intended to have minimal (thin) client tools, where the server itself contains most logic
for operations, transformations, and routing of requests to their relevant datatypes. As a result,
the `kanidmd` section is the largest component of the project as it implements nearly everything
required for IDM functionality to exist.
# Search
Search is the "hard worker" of the server, intended to be a fast path with minimal overhead so that
clients can acquire data as quickly as possible. The server follows the below pattern.
![Search flow diagram](diagrams/search-flow.png)
(1) All incoming requests are from a client on the left. These are either REST requests, or a
structured protocol request via the raw interface. It's interesting to note the raw request is
almost identical to the queryserver event types - whereas for REST requests we have to generate
request messages that can become events.
The frontend uses a webserver with a thread-pool to process and decode network I/O operations
concurrently. This then sends asynchronous messages to a worker (actor) pool for handling.
(2) These search messages in the actors are transformed into "events" - a self contained structure
containing all relevant data related to the operation at hand. This may be the event origin (a user
or internal), the requested filter (query), and perhaps even a list of attributes requested. These
events are designed to ensure correctness. When a search message is transformed to a search event,
it is checked by the schema to ensure that the request is valid and can be satisfied securely.
As these workers are in a thread pool, it's important that these are concurrent and do not lock or
block - this concurrency is key to high performance and safety. It's also worth noting that this is
the level where read transactions are created and committed - all operations are transactionally
protected from an early stage to guarantee consistency of the operations.
3. When the event is known to be consistent, it is then handed to the queryserver - the query server
begins a process of steps on the event to apply it and determine the results for the request.
This process involves further validation of the query, association of metadata to the query for
the backend, and then submission of the high-level query to the backend.
4. The backend takes the request and begins the low-level processing to actually determine a
candidate set. The first step is query optimisation, to ensure we apply the query in the most
efficient manner. Once optimised, we then use the query to query indexes and create a potential
candidate set of identifiers for matching entries (5.). Once we have this candidate id set, we
then retrieve the relevant entries as our result candidate set (6.) and return them (7.) to the
backend.
5. The backend now deserialises the database's candidate entries into a higher level and structured
(and strongly typed) format that the query server knows how to operate on. These are then sent
back to the query server.
6. The query server now applies access controls over what you can / can't see. This happens in two
phases. The first is to determine "which candidate entries you have the rights to query and view"
and the second is to determine "which attributes of each entry you have the right to perceive".
This separation exists so that other parts of the server can _impersonate_ users and conduct
searches on their behalf, but still internally operate on the full entry without access controls
limiting their scope of attributes we can view. (A sketch of these two phases follows this list.)
7. From the entries reduced set (ie access controls applied), we can then transform each entry into
its protocol forms - where we transform each strong type into a string representation for
simpler processing for clients. These protoentries are returned to the front end.
8. Finally, the protoentries are now sent to the client in response to their request.
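As a rough illustration of the two access-control phases in step 6, consider the sketch below. All
names here (`Entry`, `phase_one`, `phase_two`, the closures) are illustrative stand-ins, not
Kanidm's real types or API:

```rust
struct Entry {
    attrs: Vec<(String, String)>,
}

// Phase 1: which candidate entries do you have the right to query and view?
fn phase_one(candidates: Vec<Entry>, may_read_entry: impl Fn(&Entry) -> bool) -> Vec<Entry> {
    candidates.into_iter().filter(|e| may_read_entry(e)).collect()
}

// Phase 2: which attributes of each visible entry do you have the right to perceive?
fn phase_two(visible: Vec<Entry>, may_read_attr: impl Fn(&str) -> bool) -> Vec<Entry> {
    visible
        .into_iter()
        .map(|e| Entry {
            attrs: e.attrs.into_iter().filter(|(a, _)| may_read_attr(a)).collect(),
        })
        .collect()
}
```

Because the phases are separate, an internal impersonated search can apply the first phase on a
user's behalf while still operating on the full, unreduced entries internally.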
# Write
The write path is similar to the search path, but has some subtle differences that are worth paying
attention to.
![Write flow diagram](diagrams/write-flow.png)
(1), (2) Like search, all client operations come from the REST or raw apis, and are transformed or
generated into messages. These messages are sent to a single write worker. There is only a single
write worker due to the use of copy-on-write structures in the server, limiting us to a single
writer, but allowing search transactions to proceed without blocking in parallel.
(3) From the worker, the relevant event is created. This may be a "Create", "Modify" or "Delete"
event. The query server handles these slightly differently. In the create path, we take the set of
entries you wish to create as our candidate set. In modify or delete, we perform an impersonation
search, and use the set of entries within your read bounds to generate the candidate set. This
candidate set will now be used for the remainder of the writing operation.
It is at this point that we assert access controls over the candidate set and the changes you wish
to make. If you are not within rights to perform these operations the event returns an error.
(4) The entries are now sent to the pre-operation plugins for the relevant operation type. This
allows transformation of the candidate entries beyond the scope of your access controls, and to
maintain some elements of data consistency. For example, one plugin prevents creation of system
protected types where another ensures that uuid exists on every entry.
(5) These transformed entries are now returned to the query server.
(6) The backend is sent the list of entries for writing. Indexes are generated (7) as required based
on the new or modified entries, and the entries themselves are written (8) into the core db tables.
This operation returns a result (9) to the backend, which is then filtered up to the query server
(10).
(11) Provided all operations to this point have been successful, we now apply post write plugins
which may enforce or generate different properties in the transaction. This is similar to the pre
plugins, but allows different operations. For example, a post plugin ensures uuid reference types
are consistent and valid across the set of changes in the database. The most critical is memberof,
which generates reverse reference links from entries to their group memberships, enabling fast rbac
operations. These are done as post plugins because at this point internal searches can now yield and
see the modified entries that we have just added to the indexes and datatables, which is important
for consistency (and simplicity) especially when you consider batched operations.
(12) Finally the result is returned up (13) through (14) the layers (15) to the client to inform
them of the success (or failure) of the operation.
# IDM
TBD
## Radius
The radius components are intended to be minimal to support a common set of radius operations in a
container image that is simple to configure. If you require a custom configuration you should use
the python tools here and configure your own radius instance as required.
@ -1,6 +1,7 @@
# Domain Display Name
A human-facing string to use in places like web page titles, TOTP issuer codes, the Oauth
authorisation server name etc.
On system creation, or if it hasn't been set, it'll default to `format!("Kanidm {}", domain_name)`
so that you'll see `Kanidm idm.example.com` if your domain is `idm.example.com`.
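For instance, the default could be derived along these lines (a minimal sketch; the function name
is hypothetical):

```rust
fn default_domain_display_name(domain_name: &str) -> String {
    format!("Kanidm {}", domain_name)
}

fn main() {
    assert_eq!(
        default_domain_display_name("idm.example.com"),
        "Kanidm idm.example.com"
    );
}
```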
@ -1,29 +1,26 @@
## Indexing
Indexing is deeply tied to the concept of filtering. Indexes exist to make the application of a
search term (filter) faster.
## World without indexing
Almost all databases are built on top of a key-value storage engine of some nature. In our case we
are using (feb 2019) sqlite and hopefully SLED in the future.
Our entries, which contain sets of avas, are serialised into a byte format (feb 2019, json but soon
cbor) and stored in a table of "id: entry". For example:
| ID | data |
| -- | ------------------------------------------------------------------------- |
| 01 | `{ 'Entry': { 'name': ['name'], 'class': ['person'], 'uuid': ['...'] } }` |
| 02 | `{ 'Entry': { 'name': ['beth'], 'class': ['person'], 'uuid': ['...'] } }` |
| 03 | `{ 'Entry': { 'name': ['alan'], 'class': ['person'], 'uuid': ['...'] } }` |
| 04 | `{ 'Entry': { 'name': ['john'], 'class': ['person'], 'uuid': ['...'] } }` |
| 05 | `{ 'Entry': { 'name': ['kris'], 'class': ['person'], 'uuid': ['...'] } }` |
The ID column is _private_ to the backend implementation and is never revealed to the higher level
components. However the ID is very important to indexing :)
If we wanted to find `Eq(name, john)` here, what do we need to do? A full table scan is where we
perform:
@ -33,25 +30,24 @@ perform:
entry = deserialise(row)
entry.match_filter(...) // check Eq(name, john)
For a small database (maybe up to 20 objects), this is probably fine. But once you start to get much
larger this is really costly. We continually load, deserialise, check and free data that is not
relevant to the search. This is why full table scans of any database (sql, ldap, anything) are so
costly. It's really really scanning everything!
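To make that cost concrete, a full table scan in Rust might look like the sketch below. The types
and the `deserialise` helper are toy stand-ins for the real (json/cbor) machinery:

```rust
use std::collections::BTreeMap;

// Toy entry: attr -> values. The real Entry type is far richer.
type Entry = BTreeMap<String, Vec<String>>;

// Stand-in for the real deserialisation cost paid on every row.
fn deserialise(row: &str) -> Entry {
    row.split(';')
        .filter_map(|kv| kv.split_once('='))
        .map(|(k, v)| (k.to_string(), vec![v.to_string()]))
        .collect()
}

// Eq(attr, value) applied as a full table scan over id2entry.
fn full_table_scan(id2entry: &BTreeMap<u64, String>, attr: &str, value: &str) -> Vec<u64> {
    id2entry
        .iter()
        // Every row is loaded and deserialised, relevant or not.
        .map(|(id, row)| (*id, deserialise(row)))
        .filter(|(_, e)| e.get(attr).map_or(false, |vs| vs.iter().any(|v| v == value)))
        .map(|(id, _)| id)
        .collect()
}

fn main() {
    let mut id2entry = BTreeMap::new();
    id2entry.insert(4, "name=john;class=person".to_string());
    id2entry.insert(5, "name=kris;class=person".to_string());
    assert_eq!(full_table_scan(&id2entry, "name", "john"), vec![4]);
}
```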
## How does indexing work?
Indexing is a pre-computed lookup table of what you _might_ search in a specific format. Let's say
in our example we have an equality index on "name" as an attribute. Now in our backend we define an
extra table called "index_eq_name". Its contents would look like:
| index | idl (ID List) |
| ----- | ------------- |
| alan | [03, ] |
| beth | [02, ] |
| john | [04, ] |
| kris | [05, ] |
| name | [01, ] |
So when we perform our search for Eq(name, john) again, we see name is indexed. We then perform:
@ -68,44 +64,44 @@ We can now take this back to our id2entry table and perform:
data = sqlite.do(SELECT * from id2entry where ID = 04)
```
The key-value engine only gives us the entry for john, and we have a match! If id2entry had 1
million entries, a full table scan would be 1 million loads and compares - with the index, it was 2
loads and one compare. That's 30000x faster (potentially ;) )!
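The same lookup with the index is just two map reads (a minimal sketch; in the real backend the idl
is a compressed structure and id2entry holds serialised entries):

```rust
use std::collections::BTreeMap;

fn main() {
    // idx_eq_name: index key -> idl (list of entry ids).
    let idx_eq_name = BTreeMap::from([("john", vec![4u64]), ("kris", vec![5])]);
    // id2entry: id -> entry (serialised in reality, a plain string here).
    let id2entry = BTreeMap::from([(4u64, "name: john"), (5, "name: kris")]);

    // Eq(name, john): one index read, then one id2entry read per candidate.
    if let Some(idl) = idx_eq_name.get("john") {
        for id in idl {
            println!("{} -> {}", id, id2entry[id]);
        }
    }
}
```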
To improve on this, if we had a query like Or(Eq(name, john), Eq(name, kris)) we can use our indexes
to speed this up.
We would query index_eq_name again, and we would perform the search for both john and kris. Because
this is an OR we then union the two idls, and we would have:
```
[04, 05,]
```
Now we just have to get entries 04,05 from id2entry, and we have our matching query. This means
filters are often applied as idl set operations.
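As a sketch, the union for `Or` and the intersection for `And` are plain set operations over the
idls (using `BTreeSet` here purely for illustration):

```rust
use std::collections::BTreeSet;

fn main() {
    let john = BTreeSet::from([4u64]);
    let kris = BTreeSet::from([5u64]);

    // Or(Eq(name, john), Eq(name, kris)) unions the two idls...
    let or_idl: BTreeSet<u64> = john.union(&kris).copied().collect();
    assert_eq!(or_idl, BTreeSet::from([4, 5]));

    // ...while an And over the same terms would intersect them.
    let and_idl: BTreeSet<u64> = john.intersection(&kris).copied().collect();
    assert!(and_idl.is_empty());
}
```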
## Compressed ID lists
In order to make idl loading and the set operations faster, there is an idl library (developed by
me, firstyear) which will be used for this. To read more see:
https://github.com/Firstyear/idlset
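As a toy illustration of the idea only (the idlset crate's actual layout is more sophisticated):
ids can be grouped into ranges of 64 and stored as a base plus a bitmask, so dense id sets compress
to very few words, and set operations become cheap bitwise ops on aligned ranges:

```rust
// Compress a sorted list of ids into (range base, 64-bit mask) pairs.
fn compress(sorted_ids: &[u64]) -> Vec<(u64, u64)> {
    let mut ranges: Vec<(u64, u64)> = Vec::new();
    for id in sorted_ids {
        let base = id & !63; // start of this id's 64-wide range
        let bit = 1u64 << (id & 63);
        match ranges.last_mut() {
            Some((b, mask)) if *b == base => *mask |= bit,
            _ => ranges.push((base, bit)),
        }
    }
    ranges
}

fn main() {
    // 64 consecutive ids collapse into a single (base, mask) pair.
    let ids: Vec<u64> = (0..64).collect();
    assert_eq!(compress(&ids), vec![(0, u64::MAX)]);
}
```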
## Filter Optimisation
Filter optimisation begins to play an important role when we have indexes. If we indexed something
like `Pres(class)`, then the idl for that search is the set of all database entries. Similarly, if
our database of 1 million entries has 250,000 `class=person`, then `Eq(class, person)` will have an
idl containing 250,000 ids. Even with idl compression, this is still a lot of data!
There tend to be two types of searches against a directory like Kanidm.
- Broad searches
- Targeted single entry searches
For broad searches, filter optimising does little - we just have to load those large idls, and use
them. (Yes, loading the large idl and using it is still better than full table scan though!)
However, for targeted searches, filter optimisation really helps.
@ -121,13 +117,13 @@ In this case with our database of 250,000 persons, our idl's would have:
And( idl[250,000 ids], idl(1 id))
```
Which means the result will always be the _single_ id in the idl or _no_ value because it wasn't
present.
We add a single concept to the server called the "filter test threshold". This is the point at
which a partially-built candidate set is shortcut: we stop using the indexes and apply the filter
to the partial set in the manner of a full table scan, because that will be faster than the
remaining index loading and testing.
When we have this test threshold, there exist two possibilities for this filter.
@ -135,70 +131,66 @@ When we have this test threshold, there exists two possibilities for this filter
And( idl[250,000 ids], idl(1 id))
```
We load the 250,000 id idl and then perform the intersection with the idl of 1 value, resulting in
1 or 0.
```
And( idl(1 id), idl[250,000 ids])
```
We load the single idl value for name, and then as we are below the test-threshold we shortcut out
and apply the filter to entry ID 1 - yielding a match or no match.
Notice how in the second case, by promoting the "smaller" idl, we were able to save the work of the
idl load and intersection, as our first equality on "name" was more targeted.
Filter optimisation is about re-arranging these filters in the server using our insight into the
data to provide faster searches and avoid indexes that are costly unless they are needed.
In this case, we would _demote_ any filter where Eq(class, ...) to the _end_ of the And, because it
is highly likely to be less targeted than the other Eq types. Another example would be promotion of
Eq filters to the front of an And over a Sub term, where Sub indexes tend to be larger and have
longer IDLs.
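A sketch of this heuristic (illustrative only, not the server's real filter types): give each term
a weight by how targeted its index is expected to be, then sort the And's terms so cheap, targeted
terms run first:

```rust
#[derive(Debug, PartialEq)]
enum FilterComp {
    Eq(String, String),
    Sub(String, String),
    Pres(String),
}

fn weight(f: &FilterComp) -> u8 {
    match f {
        // Eq(class, ...) idls are usually huge, so demote them to the end...
        FilterComp::Eq(attr, _) if attr == "class" => 8,
        // ...while other Eq terms tend to be the most targeted.
        FilterComp::Eq(..) => 0,
        // Sub indexes tend to be larger with longer idls.
        FilterComp::Sub(..) => 4,
        // Pres can match the whole database.
        FilterComp::Pres(..) => 8,
    }
}

fn optimise_and(terms: &mut Vec<FilterComp>) {
    terms.sort_by_key(weight); // stable sort preserves ties
}

fn main() {
    let mut terms = vec![
        FilterComp::Eq("class".into(), "person".into()),
        FilterComp::Eq("name".into(), "claire".into()),
    ];
    optimise_and(&mut terms);
    assert_eq!(terms[0], FilterComp::Eq("name".into(), "claire".into()));
}
```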
## Implementation Details and Notes
Before we discuss the details of the states and update processes, we need to consider the index
types we require.
# Index types
The standard index is a key-value, where the key is the lookup, and the value is the idl set of the
candidates. The examples follow the above.
For us, we will format the table names as:
- idx_eq_<attrname>
- idx_sub_<attrname>
- idx_pres_<attrname>
These will be string, blob for SQL. The string is the pkey.
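For instance, with the sqlite backend an equality index table might be created along these lines (a
sketch using the rusqlite crate; only the table and column shapes follow the text above, everything
else is an assumption):

```rust
use rusqlite::{Connection, Result};

fn create_eq_index(conn: &Connection, attrname: &str) -> Result<()> {
    // One table per (index type, attribute): the string key is the pkey,
    // the blob holds the serialised idl of candidate ids.
    let sql = format!(
        "CREATE TABLE IF NOT EXISTS idx_eq_{} (key TEXT PRIMARY KEY, idl BLOB)",
        attrname
    );
    conn.execute(&sql, [])?;
    Ok(())
}
```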
We will have the Value's "to_index_str" emit the set of values. It's important to remember this is a
_set_ of possible index emissions, where we could have multiple values returned. This will be
important with claims for credentials so that the claims can be indexed correctly.
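Conceptually (with toy types, not the server's real Value), that could look like:

```rust
enum Value {
    Utf8(String),
    Credential { claims: Vec<String> },
}

impl Value {
    // A single value may emit a *set* of index strings - e.g. one per claim.
    fn to_index_str(&self) -> Vec<String> {
        match self {
            Value::Utf8(s) => vec![s.clone()],
            Value::Credential { claims } => claims.clone(),
        }
    }
}

fn main() {
    let v = Value::Credential { claims: vec!["mfa".into(), "passkey".into()] };
    assert_eq!(v.to_index_str().len(), 2);
}
```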
We also require a special name to uuid, and uuid to name index. These are to accelerate the
name2uuid and uuid2name functions which are common in resolving on search. These will be named in
the tables as:
- idx_name2uuid
- idx_uuid2name
They will be structured as string, string for both - where the uuid and name column matches the
correct direction, and is the primary key. We could use a single table, but if we change to sled we
need to split this, so we pre-empt this change and duplicate the data here.
# Indexing States
- Reindex
A reindex is the only time when we create the tables needed for indexing. In all other phases if we
do not have the table for the insertion, we log the error, and move on, instructing in the logs to
reindex asap.
Reindexing should be performed after we join a replication group, or when we "setup" the instance
for the first time. This means we need an "initial indexed" flag or similar.
@ -206,64 +198,67 @@ for the first time. This means we need an "initial indexed" flag or similar.
For all intents, a reindex is likely the same as "create" but just without replacing the entry. We
would just remove all the index tables beforehand.
- Write operation index metadata
At the start of a write transaction, the schema passes us a map of the current attribute index
states so that on filter application or modification we are aware of what attrs are indexed. It is
assumed that `name2uuid` and `uuid2name` are always indexed.
- Search Index Metadata
When filters are resolved they are tagged by their indexed state to allow optimisation to occur. We
then process each filter element and their tag to determine the indexes needed to build a candidate
set. Once we reach threshold we return the partial candidate set, and begin the `id2entry` process
and the `entry_match_no_index` routine.
`And` and `Or` terms have flags if they are partially or fully indexed, meaning we could have a
shortcut where if the outermost term is a fully indexed term, then we can avoid the
`entry_match_no_index` call.
- Create
This is one of the simplest steps. On create we iterate over the entry's avas and, referencing the
index metadata of the transaction, we create the indexes as needed from the values (before dbv
conversion).
- Delete
Given the Entry to delete, we remove the avas and ids from each set as needed. Generally this will
only be for tombstones, but we still should check the process works. Important to check will be
entries with and without names, ensuring the name2uuid/uuid2name is correctly changed, and removal
of all the other attributes.
- Modify
This is the truly scary and difficult situation. The simple method would be to "delete" all indexes
based on the pre-entry state, and then to create again. However the current design of Entry and
modification doesn't work like this as we only get the Entry to add.
Most likely we will need to change modify to take the set of (pre, post) candidates as a pair _OR_
we have the entry store its own pre-post internally. Given we already need to store the pre/post
entries in the txn, it's likely better to have a pairing of these, and that allows us to then index
replication metadata later as the entry will contain its own changelog internally.
Given the pair, we then assert that they are the same entry (id). We can then use the index metadata
to generate an indexing diff between them, containing a set of index items to remove (due to removal
of the attr or value), and what to add (due to addition).
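As a sketch of the diff itself (illustrative types): treat each entry's index emissions as a set of
(attr, index string) pairs, and the set differences between pre and post are exactly the index rows
to update:

```rust
use std::collections::BTreeSet;

type IndexKeys = BTreeSet<(String, String)>;

// Returns (keys to remove this id from, keys to add this id to).
fn index_diff(pre: &IndexKeys, post: &IndexKeys) -> (IndexKeys, IndexKeys) {
    let remove: IndexKeys = pre.difference(post).cloned().collect();
    let add: IndexKeys = post.difference(pre).cloned().collect();
    (remove, add)
}

fn main() {
    let pre = IndexKeys::from([("name".to_string(), "claire".to_string())]);
    let post = IndexKeys::from([("name".to_string(), "chris".to_string())]);
    let (remove, add) = index_diff(&pre, &post);
    assert!(remove.contains(&("name".to_string(), "claire".to_string())));
    assert!(add.contains(&("name".to_string(), "chris".to_string())));
}
```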
The major transformation cases for testing are:
- Add a multivalue (one)
- Add a multivalue (many)
- On a multivalue, add another value
- On multivalue, remove a value, but leave others
- Delete a multivalue
- Add a new single value
- Replace a single value
- Delete a single value
We also need to check that modification of name correctly changes name2uuid and uuid2name.
- Recycle to Tombstone (removal of name)
- Change of UUID (may happen in repl conflict scenario)
- Change of name
- Change of name and uuid
Of course, these should work as above too.
@ -1,20 +1,19 @@
# Examples of situations for consideration
## Ability to be forgotten
### Deletion is delete not flagging
When an account is deleted it must be truly deleted, not just flagged for future delete. Note that
for some functionality, like the recycle bin, we must keep the account details, but a recycle bin
purge does truly delete the account.
## Self determination and autonomy
### Self name change
People should be able to change their own name at any time. Consider divorce, leaving abusive
partners or other personal decisions around why a name change is relevant.
This is why names are self-service writeable at any time.
@ -22,15 +21,15 @@ This is why names are self-service writeable at any time.
### Cultural and Social awareness of name formats
All name fields should be case sensitive utf8 with no max or min length limit. This is
because names can take many forms, such as:
All name fields should be case sensitive utf8 with no max or min length limit. This is because names
can take many forms, such as:
* firstname middlename lastname
* firstname lastname
* firstname firstname lastname
* firstname lastname lastname
* firstname
* lastname firstname
- firstname middlename lastname
- firstname lastname
- firstname firstname lastname
- firstname lastname lastname
- firstname
- lastname firstname
And many, many more that are not listed here. This is why we store names in displayName as a
freetext UTF8 field, with case sensitivity and no limits.
@ -39,11 +38,7 @@ UTF8 field, with case sensitivity and no limits.
### Access to legalName field
legalName should only be on a "need to know" basis, and only collected if required. This is
to help people who may be stalked or harassed, or otherwise conscious of their privacy.
legalName should only be on a "need to know" basis, and only collected if required. This is to help
people who may be stalked or harassed, or otherwise conscious of their privacy.
## To use and access this software regardless of ability
View file
@ -1,18 +1,18 @@
# Statement of ethics and rights
Kanidm is a project that will store, process and present people's personal data. This means
we have a responsibility to respect the data of all people who could be using our system -
many of whom interact indirectly or do not have a choice in this platform.
Kanidm is a project that will store, process and present people's personal data. This means we have
a responsibility to respect the data of all people who could be using our system - many of whom
interact indirectly or do not have a choice in this platform.
## Rights of people
All people using this software should expect to have the right to:
* Self control over their data, including the ability to alter or delete at any time.
* Freedom from harmful discrimination of any kind
* Informed consent over control and privacy of their data, including the ability to access and understand data held and shared on their behalf
* To be able to use and access this software regardless of ability, culture or language.
- Self control over their data, including the ability to alter or delete at any time.
- Freedom from harmful discrimination of any kind
- Informed consent over control and privacy of their data, including the ability to access and
  understand data held and shared on their behalf
- To be able to use and access this software regardless of ability, culture or language.
## More?
View file
@ -1,7 +1,7 @@
# Apache OAuth config example
This example is here mainly for devs to come up with super complicated ways to test the changes they're
making which affect OAuth things.
This example is here mainly for devs to come up with super complicated ways to test the changes
they're making which affect OAuth things.
## Example of how to run it
@ -10,7 +10,7 @@ OAUTH_HOSTNAME=test-oauth2.example.com \
KANIDM_HOSTNAME=test-kanidm.example.com \
KANIDM_CLIENT_SECRET=1234Hq5d1J5GG9VNae3bRMFGDVFR3bUyyXg3RPRSefJLNhee \
KANIDM_PORT=443 \
make
make
```
This'll build and run the docker container.
View file
@ -2,6 +2,3 @@
This directory contains developer and integration resources to assist with migrations from other
identity and access management services.
View file
@ -12,7 +12,7 @@ cargo doc --document-private-items --open --no-deps
### Rust Documentation
A list of links to the library documentation is at
A list of links to the library documentation is at
[kanidm.com/documentation](https://kanidm.com/documentation/).
### Minimum Supported Rust Version
@ -21,14 +21,13 @@ The MSRV is specified in the package `Cargo.toml` files.
### Build Profiles
Setting different developer profiles while building is done by setting the environment
variable `KANIDM_BUILD_PROFILE` to the bare filename of one of the TOML files in
`/profiles`.
Setting different developer profiles while building is done by setting the environment variable
`KANIDM_BUILD_PROFILE` to the bare filename of one of the TOML files in `/profiles`.
For example, this will set the CPU flags to "none" and the location for the Web UI files to `/usr/share/kanidm/ui/pkg`:
For example, this will set the CPU flags to "none" and the location for the Web UI files to
`/usr/share/kanidm/ui/pkg`:
```shell
```bash
KANIDM_BUILD_PROFILE=release_suse_generic cargo build --release --bin kanidmd
```
@ -37,30 +36,37 @@ KANIDM_BUILD_PROFILE=release_suse_generic cargo build --release --bin kanidmd
#### MacOS
You will need [rustup](https://rustup.rs/) to install a Rust toolchain.
#### SUSE
You will need [rustup](https://rustup.rs/) to install a Rust toolchain. If
you're
using the Tumbleweed release, it's packaged in `zypper`.
You will need [rustup](https://rustup.rs/) to install a Rust toolchain. If you're using the
Tumbleweed release, it's packaged in `zypper`.
You will also need some system libraries to build this:
libudev-devel sqlite3-devel libopenssl-devel
```
libudev-devel sqlite3-devel libopenssl-devel
```
#### Fedora
You need to install the Rust toolchain packages:
rust cargo
```bash
rust cargo
```
You will also need some system libraries to build this:
systemd-devel sqlite-devel openssl-devel pam-devel
```
systemd-devel sqlite-devel openssl-devel pam-devel
```
Building the Web UI requires additional packages:
perl-FindBin perl-File-Compare rust-std-static-wasm32-unknown-unknown
```
perl-FindBin perl-File-Compare rust-std-static-wasm32-unknown-unknown
```
#### Ubuntu
@ -68,9 +74,9 @@ You need [rustup](https://rustup.rs/) to install a Rust toolchain.
You will also need some system libraries to build this, which can be installed by running:
```shell
```bash
sudo apt-get install libsqlite3-dev libudev-dev libssl-dev pkg-config libpam0g-dev
```
```
Tested with Ubuntu 20.04 and 22.04.
@ -78,68 +84,72 @@ Tested with Ubuntu 20.04 and 22.04.
You need [rustup](https://rustup.rs/) to install a Rust toolchain.
An easy way to grab the dependencies is to install [vcpkg](https://vcpkg.io/en/getting-started.html).
An easy way to grab the dependencies is to install
[vcpkg](https://vcpkg.io/en/getting-started.html).
This is how it works in the automated build:
1. Enable use of installed packages for the user system-wide:
```shell
```bash
vcpkg integrate install
```
2. Install the openssl dependency, which compiles it from source. This downloads all sorts of dependencies, including perl for the build.
```shell
2. Install the openssl dependency, which compiles it from source. This downloads all sorts of
dependencies, including perl for the build.
```bash
vcpkg install openssl:x64-windows-static-md
```
There's a powershell script in the root directory of the repository which, in concert with `openssl`, will generate a config file and certs for testing.
There's a powershell script in the root directory of the repository which, in concert with
`openssl`, will generate a config file and certs for testing.
### Get Involved
To get started, you'll need to fork or branch, and we'll merge based on pull
requests.
To get started, you'll need to fork or branch, and we'll merge based on pull requests.
If you are a contributor to the project, simply clone:
```shell
```bash
git clone git@github.com:kanidm/kanidm.git
```
If you are forking, then fork in GitHub and clone with:
```shell
```bash
git clone https://github.com/kanidm/kanidm.git
cd kanidm
git remote add myfork git@github.com:<YOUR USERNAME>/kanidm.git
```
Select an issue (always feel free to reach out to us for advice!), and create a
branch to start working:
Select an issue (always feel free to reach out to us for advice!), and create a branch to start
working:
```shell
```bash
git branch <feature-branch-name>
git checkout <feature-branch-name>
cargo test
```
When you are ready for review (even if the feature isn't complete and you just
want some advice):
When you are ready for review (even if the feature isn't complete and you just want some advice):
1. Run the test suite: `cargo test --workspace`
2. Ensure rust formatting standards are followed: `cargo fmt --check`
3. Try following the suggestions from clippy, after running `cargo clippy`.
This is not a blocker on us accepting your code!
3. Try following the suggestions from clippy, after running `cargo clippy`. This is not a blocker on
us accepting your code!
4. Then commit your changes:
```shell
```bash
git commit -m 'Commit message' change_file.rs ...
git push <myfork/origin> <feature-branch-name>
```
If you receive advice or make further changes, just keep committing and pushing to your
branch. When we are happy with the code, we'll merge in GitHub, meaning you can now clean up
your branch.
If you receive advice or make further changes, just keep committing and pushing to your branch.
When we are happy with the code, we'll merge in GitHub, meaning you can now clean up your branch.
```
```bash
git checkout master
git pull
git branch -D <feature-branch-name>
@ -149,94 +159,107 @@ git branch -D <feature-branch-name>
If you are asked to rebase your change, follow these steps:
```
```bash
git checkout master
git pull
git checkout <feature-branch-name>
git rebase master
```
Then be sure to fix any merge issues or other comments as they arise. If you
have issues, you can always stop and reset with:
Then be sure to fix any merge issues or other comments as they arise. If you have issues, you can
always stop and reset with:
```
```bash
git rebase --abort
```
### Development Server Quickstart for Interactive Testing
After getting the code, you will need a rust environment. Please investigate
After getting the code, you will need a rust environment. Please investigate
[rustup](https://rustup.rs) for your platform to establish this.
Once you have the source code, you need encryption certificates to use with the server,
because without certificates, authentication will fail.
Once you have the source code, you need encryption certificates to use with the server, because
without certificates, authentication will fail.
We recommend using [Let's Encrypt](https://letsencrypt.org), but if this is not
possible, please use our insecure certificate tool (`insecure_generate_tls.sh`).
We recommend using [Let's Encrypt](https://letsencrypt.org), but if this is not possible, please use
our insecure certificate tool (`insecure_generate_tls.sh`).
__NOTE:__ Windows developers can use `insecure_generate_tls.ps1`, which puts everything (including a templated config file) in `$TEMP\kanidm`. Please adjust paths below to suit.
**NOTE:** Windows developers can use `insecure_generate_tls.ps1`, which puts everything (including a
templated config file) in `$TEMP\kanidm`. Please adjust paths below to suit.
The insecure certificate tool creates `/tmp/kanidm` and puts some self-signed certificates there.
You can now build and run the server with the commands below. It will use a database
in `/tmp/kanidm.db`.
You can now build and run the server with the commands below. It will use a database in
`/tmp/kanidm.db`.
Create the initial database and reset the `admin` account password:
cargo run --bin kanidmd recover_account -c ./examples/insecure_server.toml admin
<snip>
Success - password reset to -> Et8QRJgQkMJu3v1AQxcbxRWW44qRUZPpr6BJ9fCGapAB9cT4
```bash
cargo run --bin kanidmd recover_account -c ./examples/insecure_server.toml admin
<snip>
Success - password reset to -> Et8QRJgQkMJu3v1AQxcbxRWW44qRUZPpr6BJ9fCGapAB9cT4
```
Record the password above, then run the server start command:
cd kanidmd/daemon
cargo run --bin kanidmd server -c ../../examples/insecure_server.toml
```bash
cd kanidmd/daemon
cargo run --bin kanidmd server -c ../../examples/insecure_server.toml
```
(The server start command is also a script in `kanidmd/daemon/run_insecure_dev_server.sh`)
In a new terminal, you can now build and run the client tools with:
cargo run --bin kanidm -- --help
cargo run --bin kanidm -- login -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- login -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
```bash
cargo run --bin kanidm -- --help
cargo run --bin kanidm -- login -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D anonymous -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- login -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
cargo run --bin kanidm -- self whoami -H https://localhost:8443 -D admin -C /tmp/kanidm/ca.pem
```
### Raw actions
The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers
of entries at once. Some examples are below, but generally we advise you to use the APIs or CLI tools. These are
very handy to "unbreak" something if you make a mistake however!
The server has a low-level stateful API you can use for more complex or advanced tasks on large
numbers of entries at once. Some examples are below, but generally we advise you to use the APIs or
CLI tools. These are very handy to "unbreak" something if you make a mistake however!
# Create from json (group or account)
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json
```bash
# Create from json (group or account)
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json
# Apply a json stateful modification to all entries matching a filter
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"or": [ {"eq": ["name", "idm_person_account_create_priv"]}, {"eq": ["name", "idm_service_account_create_priv"]}, {"eq": ["name", "idm_account_write_priv"]}, {"eq": ["name", "idm_group_write_priv"]}, {"eq": ["name", "idm_people_write_priv"]}, {"eq": ["name", "idm_group_create_priv"]} ]}' example.modify.idm_admin.json
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "idm_admins"]}' example.modify.idm_admin.json
# Apply a json stateful modification to all entries matching a filter
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"or": [ {"eq": ["name", "idm_person_account_create_priv"]}, {"eq": ["name", "idm_service_account_create_priv"]}, {"eq": ["name", "idm_account_write_priv"]}, {"eq": ["name", "idm_group_write_priv"]}, {"eq": ["name", "idm_people_write_priv"]}, {"eq": ["name", "idm_group_create_priv"]} ]}' example.modify.idm_admin.json
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "idm_admins"]}' example.modify.idm_admin.json
# Search and show the database representations
kanidm raw search -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"eq": ["name", "idm_admin"]}'
# Search and show the database representations
kanidm raw search -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"eq": ["name", "idm_admin"]}'
# Delete all entries matching a filter
kanidm raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "test_account_delete_me"]}'
# Delete all entries matching a filter
kanidm raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "test_account_delete_me"]}'
```
### Building the Web UI
__NOTE:__ There is a pre-packaged version of the Web UI at `/kanidmd_web_ui/pkg/`,
which can be used directly. This means you don't need to build the Web UI yourself.
**NOTE:** There is a pre-packaged version of the Web UI at `/kanidmd_web_ui/pkg/`, which can be used
directly. This means you don't need to build the Web UI yourself.
The Web UI uses Rust WebAssembly rather than Javascript. To build this you need
to set up the environment:
The Web UI uses Rust WebAssembly rather than Javascript. To build this you need to set up the
environment:
cargo install wasm-pack
```bash
cargo install wasm-pack
```
Then you are able to build the UI:
cd kanidmd_web_ui/
./build_wasm_dev.sh
```bash
cd kanidmd_web_ui/
./build_wasm_dev.sh
```
To build for release, run `build_wasm_release.sh`.
@ -246,49 +269,53 @@ The "developer" profile for kanidmd will automatically use the pkg output in thi
Build a container with the current branch using:
make <TARGET>
```bash
make <TARGET>
```
Check `make help` for a list of valid targets.
The following environment variables control the build:
|ENV variable|Definition|Default|
|-|-|-|
|`IMAGE_BASE`|Base location of the container image.|`kanidm`|
|`IMAGE_VERSION`|Determines the container's tag.|None|
|`CONTAINER_TOOL_ARGS`|Specify extra options for the container build tool.|None|
|`IMAGE_ARCH`|Passed to `--platforms` when the container is built.|`linux/amd64,linux/arm64`|
|`CONTAINER_BUILD_ARGS`|Override default ARG settings during the container build.|None|
|`CONTAINER_TOOL`|Use an alternative container build tool.|`docker`|
|`BOOK_VERSION`|Sets version used when building the documentation book.|`master`|
| ENV variable | Definition | Default |
| ---------------------- | --------------------------------------------------------- | ------------------------- |
| `IMAGE_BASE` | Base location of the container image. | `kanidm` |
| `IMAGE_VERSION` | Determines the container's tag. | None |
| `CONTAINER_TOOL_ARGS` | Specify extra options for the container build tool. | None |
| `IMAGE_ARCH` | Passed to `--platforms` when the container is built. | `linux/amd64,linux/arm64` |
| `CONTAINER_BUILD_ARGS` | Override default ARG settings during the container build. | None |
| `CONTAINER_TOOL` | Use an alternative container build tool. | `docker` |
| `BOOK_VERSION` | Sets version used when building the documentation book. | `master` |
#### Container Build Examples
Build a `kanidm` container using `podman`:
CONTAINER_TOOL=podman make build/kanidmd
```bash
CONTAINER_TOOL=podman make build/kanidmd
```
Build a `kanidm` container and use a redis build cache:
CONTAINER_BUILD_ARGS='--build-arg "SCCACHE_REDIS=redis://redis.dev.blackhats.net.au:6379"' make build/kanidmd
```bash
CONTAINER_BUILD_ARGS='--build-arg "SCCACHE_REDIS=redis://redis.dev.blackhats.net.au:6379"' make build/kanidmd
```
#### Automatically Built Containers
To speed up testing across platforms, we're leveraging GitHub actions to build
containers for test use.
To speed up testing across platforms, we're leveraging GitHub actions to build containers for test
use.
Whenever code is merged with the `master` branch of Kanidm, containers are automatically
built for `kanidmd` and `radius`. Sometimes they fail to build, but we'll try to
keep them available.
Whenever code is merged with the `master` branch of Kanidm, containers are automatically built for
`kanidmd` and `radius`. Sometimes they fail to build, but we'll try to keep them available.
To find information on the packages,
To find information on the packages,
[visit the Kanidm packages page](https://github.com/orgs/kanidm/packages?repo_name=kanidm).
An example command for pulling and running the radius container is below. You'll
need to
An example command for pulling and running the radius container is below. You'll need to
[authenticate with the GitHub container registry first](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry).
```shell
```bash
docker pull ghcr.io/kanidm/radius:devel
docker run --rm -it \
-v $(pwd)/kanidm:/data/kanidm \
@ -301,20 +328,20 @@ This assumes you have a `kanidm` client configuration file in the current workin
You'll need `mdbook` to build the book:
```shell
```bash
cargo install mdbook
```
To build it:
```shell
```bash
cd kanidm_book
mdbook build
```
Or to run a local webserver:
```shell
```bash
cd kanidm_book
mdbook serve
````
```
View file
@ -5,22 +5,22 @@ for these data. As a result, there are many concepts and important details to un
## Service Accounts vs Person Accounts
Kanidm separates accounts into two types. Person accounts (or persons) are intended for use by humans
that will access the system in an interactive way. Service accounts are intended for use by computers
or services that need to identify themselves to Kanidm. Generally, a person or group of persons will
be responsible for and will manage service accounts. Because of this distinction, these classes of
accounts have different properties and methods of authentication and management.
Kanidm separates accounts into two types. Person accounts (or persons) are intended for use by
humans that will access the system in an interactive way. Service accounts are intended for use by
computers or services that need to identify themselves to Kanidm. Generally, a person or group of
persons will be responsible for and will manage service accounts. Because of this distinction, these
classes of accounts have different properties and methods of authentication and management.
## Groups
Groups represent a collection of entities. This generally is a collection of persons or service accounts.
Groups are commonly used to assign privileges to the accounts that are members of a group. This allows
easier administration over larger systems where privileges can be assigned to groups in a logical
manner, and then only membership of the groups needs administration, rather than needing to assign
privileges to each entity directly and uniquely.
Groups represent a collection of entities. This generally is a collection of persons or service
accounts. Groups are commonly used to assign privileges to the accounts that are members of a group.
This allows easier administration over larger systems where privileges can be assigned to groups in
a logical manner, and then only membership of the groups needs administration, rather than needing
to assign privileges to each entity directly and uniquely.
Groups may also be nested, where a group can contain another group as a member. This allows
hierarchies to be created, again for easier administration.
Groups may also be nested, where a group can contain another group as a member. This allows
hierarchies to be created, again for easier administration.
## Default Accounts and Groups
@ -30,33 +30,29 @@ Identity Management (IDM) systems.
There are two builtin system administration accounts.
`admin` is the default service account which has privileges to configure and administer kanidm as a whole.
This account can manage access controls, schema, integrations and more. However, the `admin` can not
manage persons by default, to separate the privileges. As this is a service account, it is intended
for limited use.
`admin` is the default service account which has privileges to configure and administer kanidm as a
whole. This account can manage access controls, schema, integrations and more. However, the `admin`
can not manage persons by default, to separate the privileges. As this is a service account, it is
intended for limited use.
`idm_admin` is the default service account which has privileges to create persons and to manage these
accounts and groups. They can perform credential resets and more.
`idm_admin` is the default service account which has privileges to create persons and to manage
these accounts and groups. They can perform credential resets and more.
Both the `admin` and the `idm_admin` user should *NOT* be used for daily activities - they exist for initial
system configuration, and for disaster recovery scenarios. You should delegate permissions
Both the `admin` and the `idm_admin` user should _NOT_ be used for daily activities - they exist for
initial system configuration, and for disaster recovery scenarios. You should delegate permissions
as required to named user accounts instead.
The majority of the builtin groups are privilege groups that provide rights over Kanidm
administrative actions. These include groups for account management, person management (personal
and sensitive data), group management, and more.
administrative actions. These include groups for account management, person management (personal and
sensitive data), group management, and more.
## Recovering the Initial Admin Accounts
By default the `admin` and `idm_admin` accounts have no password, and can not be accessed. They need
to be "recovered" from the server that is running the kanidmd server.
{{#template
templates/kani-warning.md
imagepath=images
title=Warning!
text=The server must not be running at this point, as it requires exclusive access to the database.
}}
{{#template templates/kani-warning.md imagepath=images title=Warning! text=The server must not be
running at this point, as it requires exclusive access to the database. }}
```shell
kanidmd recover_account admin -c /etc/kanidm/server.toml
@ -66,9 +62,9 @@ kanidmd recover_account admin -c /etc/kanidm/server.toml
To do this with Docker, you'll need to stop the existing container and use the "command" argument to
access the kanidmd binary.
```shell
```bash
docker run --rm -it \
-v/tmp/kanidm:/data\
-v/tmp/kanidm:/data \
--name kanidmd \
--hostname kanidmd \
kanidm/server:latest \
@ -80,7 +76,7 @@ After the recovery is complete the server can be started again.
Once you have access to the admin account, it is able to reset the credentials of the `idm_admin`
account.
```shell
```bash
kanidm login -D admin
kanidm service-account credential generate -D admin idm_admin
# Success: wJX...
@ -90,10 +86,10 @@ These accounts will be used through the remainder of this document for managing
## Viewing Default Groups
You should take some time to inspect the default groups which are related to
default permissions. These can be viewed with:
You should take some time to inspect the default groups which are related to default permissions.
These can be viewed with:
```
```bash
kanidm group list
kanidm group get <name>
```
@ -102,7 +98,7 @@ kanidm group get <name>
By default `idm_admin` has the privileges to create new persons in the system.
```shell
```bash
kanidm login --name idm_admin
kanidm person create demo_user "Demonstration User" --name idm_admin
kanidm person get demo_user --name idm_admin
@ -115,37 +111,37 @@ kanidm group list_members demo_group --name idm_admin
You can also use anonymous to view accounts and groups - note that you won't see certain fields due
to the limits of the anonymous access control profile.
```
```bash
kanidm login --name anonymous
kanidm person get demo_user --name anonymous
```
Kanidm allows person accounts to include human-related attributes, such as their legal name and email address.
Kanidm allows person accounts to include human-related attributes, such as their legal name and
email address.
Initially, a person does not have these attributes. If desired, a person may be modified to have these attributes.
Initially, a person does not have these attributes. If desired, a person may be modified to have
these attributes.
```shell
```bash
# Note, both the --legalname and --mail flags may be omitted
kanidm person update demo_user --legalname "initial name" --mail "initial@email.address"
```
{{#template
templates/kani-warning.md
imagepath=images
title=Warning!
text=Persons may change their own displayname, name, and legal name at any time. You MUST NOT use these values as primary keys in external systems. You MUST use the `uuid` attribute present on all entries as an external primary key.
}}
{{#template templates/kani-warning.md imagepath=images title=Warning! text=Persons may change their
own displayname, name, and legal name at any time. You MUST NOT use these values as primary keys in
external systems. You MUST use the `uuid` attribute present on all entries as an external primary
key. }}
## Resetting Person Account Credentials
Members of the `idm_account_manage_priv` group have the rights to manage person and service
accounts' security and login aspects. This includes resetting account credentials.
Members of the `idm_account_manage_priv` group have the rights to manage person and service
accounts' security and login aspects. This includes resetting account credentials.
You can perform a password reset on the demo_user, for example as the idm_admin user, who is
a default member of this group. The lines below prefixed with `#` are the interactive credential
You can perform a password reset on the demo_user, for example as the idm_admin user, who is a
default member of this group. The lines below prefixed with `#` are the interactive credential
update interface.
```shell
```bash
kanidm person credential update demo_user --name idm_admin
# spn: demo_user@idm.example.com
# Name: Demonstration User
@ -172,39 +168,39 @@ kanidm self whoami --name demo_user
The `admin` service account can be used to create service accounts.
```shell
```bash
kanidm service-account create demo_service "Demonstration Service" --name admin
kanidm service-account get demo_service --name admin
```
## Using API Tokens with Service Accounts
Service accounts can have api tokens generated and associated with them. These tokens can be used for
identification of the service account, and for granting extended access rights where the service
Service accounts can have api tokens generated and associated with them. These tokens can be used
for identification of the service account, and for granting extended access rights where the service
account may not previously have had access. Additionally, service accounts can have expiry times
and other auditing information attached.
To show api tokens for a service account:
```shell
```bash
kanidm service-account api-token status --name admin ACCOUNT_ID
kanidm service-account api-token status --name admin demo_service
```
By default api tokens are issued to be "read only", so they are unable to make changes on behalf of the
service account they represent. To generate a new read only api token:
By default api tokens are issued to be "read only", so they are unable to make changes on behalf of
the service account they represent. To generate a new read only api token:
```shell
```bash
kanidm service-account api-token generate --name admin ACCOUNT_ID LABEL [EXPIRY]
kanidm service-account api-token generate --name admin demo_service "Test Token"
kanidm service-account api-token generate --name admin demo_service "Test Token" 2020-09-25T11:22:02+10:00
```
If you wish to issue a token that is able to make changes on behalf
of the service account, you must add the "--rw" flag during the generate command. It is recommended you
only add --rw when the api-token is performing writes to Kanidm.
If you wish to issue a token that is able to make changes on behalf of the service account, you must
add the "--rw" flag during the generate command. It is recommended you only add --rw when the
api-token is performing writes to Kanidm.
```shell
```bash
kanidm service-account api-token generate --name admin ACCOUNT_ID LABEL [EXPIRY] --rw
kanidm service-account api-token generate --name admin demo_service "Test Token" --rw
kanidm service-account api-token generate --name admin demo_service "Test Token" 2020-09-25T11:22:02+10:00 --rw
@ -213,7 +209,7 @@ kanidm service-account api-token generate --name admin demo_service "Test Token"
To destroy (revoke) an api token you will need its token id. This can be shown with the "status"
command.
```shell
```bash
kanidm service-account api-token destroy --name admin ACCOUNT_ID TOKEN_ID
kanidm service-account api-token destroy --name admin demo_service 4de2a4e9-e06a-4c5e-8a1b-33f4e7dd5dc7
```
@ -221,7 +217,7 @@ kanidm service-account api-token destroy --name admin demo_service 4de2a4e9-e06a
Api tokens can also be used to gain extended search permissions with LDAP. To do this you can bind
with a dn of `dn=token` and provide the api token in the password.
```shell
```bash
ldapwhoami -H ldaps://URL -x -D "dn=token" -w "TOKEN"
ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..."
# u: demo_service@idm.example.com
@ -229,18 +225,15 @@ ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..."
## Resetting Service Account Credentials (Deprecated)
{{#template
templates/kani-warning.md
imagepath=images
text=Api Tokens are a better method to manage credentials for service accounts, and passwords may be removed in the future!
}}
{{#template templates/kani-warning.md imagepath=images text=Api Tokens are a better method to manage
credentials for service accounts, and passwords may be removed in the future! }}
Service accounts can not have their credentials interactively updated in the same manner as
persons. Service accounts may only have server side generated high entropy passwords.
Service accounts can not have their credentials interactively updated in the same manner as persons.
Service accounts may only have server side generated high entropy passwords.
To re-generate this password for an account:
```shell
```bash
kanidm service-account credential generate demo_service --name admin
```
@ -253,7 +246,7 @@ Kanidm makes all group membership determinations by inspecting an entry's "membe
An example can be easily shown with:
```shell
```bash
kanidm group create group_1 --name idm_admin
kanidm group create group_2 --name idm_admin
kanidm person create nest_example "Nesting Account Example" --name idm_admin
@ -264,8 +257,8 @@ kanidm person get nest_example --name anonymous
## Account Validity
Kanidm supports accounts that are only able to authenticate between a pair of dates and times; the "valid
from" and "expires" timestamps define these points in time.
Kanidm supports accounts that are only able to authenticate between a pair of dates and times; the
"valid from" and "expires" timestamps define these points in time.
This can be displayed with:
@ -276,11 +269,12 @@ This can be displayed with:
These datetimes are stored in the server as UTC, but presented according to your local system time
to aid correct understanding of when the events will occur.
To set the values, an account with account management permission is required (for example, idm_admin).
To set the values, an account with account management permission is required (for example,
idm_admin).
You may set these time and date values in any timezone you wish (such as your local timezone), and the
server will transform these to UTC. These time values are in iso8601 format, and you should specify this
as:
You may set these time and date values in any timezone you wish (such as your local timezone), and
the server will transform these to UTC. These time values are in iso8601 format, and you should
specify this as:
```
YYYY-MM-DDThh:mm:ssZ+-hh:mm
@ -289,74 +283,77 @@ Year-Month-Day T hour:minutes:seconds Z +- timezone offset
Set the earliest time the account can start authenticating:
```shell
```bash
kanidm person validity begin_from demo_user '2020-09-25T11:22:04+00:00' --name idm_admin
```
Set the expiry or end date of the account:
```shell
```bash
kanidm person validity expire_at demo_user '2020-09-25T11:22:04+00:00' --name idm_admin
```
To unset or remove these values the following can be used, where `any|clear` means you may use either `any` or `clear`.
To unset or remove these values the following can be used, where `any|clear` means you may use
either `any` or `clear`.
```shell
```bash
kanidm person validity begin_from demo_user any|clear --name idm_admin
kanidm person validity expire_at demo_user never|clear --name idm_admin
```
To "lock" an account, you can set the expire_at value to the past, or unix epoch. Even in the situation
where the "valid from" is *after* the expire_at, the expire_at will be respected.
To "lock" an account, you can set the expire_at value to the past, or unix epoch. Even in the
situation where the "valid from" is _after_ the expire_at, the expire_at will be respected.
kanidm person validity expire_at demo_user 1970-01-01T00:00:00+00:00 --name idm_admin
```
kanidm person validity expire_at demo_user 1970-01-01T00:00:00+00:00 --name idm_admin
```
These validity settings impact all authentication functions of the account (kanidm, ldap, radius).
### Allowing people accounts to change their mail attribute
By default, Kanidm allows an account to change some attributes, but not their
mail address.
By default, Kanidm allows an account to change some attributes, but not their mail address.
Adding the user to the `idm_people_self_write_mail_priv` group, as shown
below, allows the user to edit their own mail.
Adding the user to the `idm_people_self_write_mail_priv` group, as shown below, allows the user to
edit their own mail.
kanidm group add_members idm_people_self_write_mail_priv demo_user --name idm_admin
```
kanidm group add_members idm_people_self_write_mail_priv demo_user --name idm_admin
```
## Why Can't I Change admin With idm_admin?
As a security mechanism there is a distinction between "accounts" and "high permission
accounts". This is to help prevent elevation attacks, where, say, a member of a
service desk could attempt to reset the password of idm_admin or admin, or a member of
HR or System Admin teams could attempt to move laterally.
As a security mechanism there is a distinction between "accounts" and "high permission accounts".
This is to help prevent elevation attacks, where, say, a member of a service desk could attempt to
reset the password of idm_admin or admin, or a member of HR or System Admin teams could attempt to
move laterally.
Generally, membership of a "privilege" group that ships with Kanidm, such as:
* idm_account_manage_priv
* idm_people_read_priv
* idm_schema_manage_priv
* many more ...
- idm_account_manage_priv
- idm_people_read_priv
- idm_schema_manage_priv
- many more ...
...indirectly grants you membership to "idm_high_privilege". If you are a member of
this group, the standard "account" and "people" rights groups are NOT able to
alter, read or manage these accounts. To manage these accounts, higher rights
are required, such as those held by the admin account.
...indirectly grants you membership to "idm_high_privilege". If you are a member of this group, the
standard "account" and "people" rights groups are NOT able to alter, read or manage these accounts.
To manage these accounts, higher rights are required, such as those held by the admin account.
Further, groups that are considered "idm_high_privilege" can NOT be managed
by the standard "idm_group_manage_priv" group.
Further, groups that are considered "idm_high_privilege" can NOT be managed by the standard
"idm_group_manage_priv" group.
Management of high privilege accounts and groups is granted through the
"hp" variants of all privileges. A non-exhaustive list:
Management of high privilege accounts and groups is granted through the "hp" variants of all
privileges. A non-exhaustive list:
* idm_hp_account_read_priv
* idm_hp_account_manage_priv
* idm_hp_account_write_priv
* idm_hp_group_manage_priv
* idm_hp_group_write_priv
- idm_hp_account_read_priv
- idm_hp_account_manage_priv
- idm_hp_account_write_priv
- idm_hp_group_manage_priv
- idm_hp_group_write_priv
Membership of any of these groups should be considered to be equivalent to
system administration rights in the directory, and by extension, over all network
resources that trust Kanidm.
Membership of any of these groups should be considered to be equivalent to system administration
rights in the directory, and by extension, over all network resources that trust Kanidm.
All groups that are flagged as "idm_high_privilege" should be audited and
monitored to ensure that they are not altered.
All groups that are flagged as "idm_high_privilege" should be audited and monitored to ensure that
they are not altered.
View file
@ -1,7 +1,5 @@
# Administration Tasks
This chapter describes some of the routine administration tasks for running
a Kanidm server, such as making backups and restoring from backups, testing
server configuration, reindexing, verifying data consistency, and renaming
your domain.
This chapter describes some of the routine administration tasks for running a Kanidm server, such as
making backups and restoring from backups, testing server configuration, reindexing, verifying data
consistency, and renaming your domain.
View file
@ -1,49 +1,52 @@
# Backup and Restore
With any Identity Management (IDM) software, it's important you have the capability to restore in
case of a disaster - be that physical damage or a mistake. Kanidm supports backup
and restore of the database with three methods.
case of a disaster - be that physical damage or a mistake. Kanidm supports backup and restore of the
database with three methods.
## Method 1 - Automatic Backup
Automatic backups can be generated online by a `kanidmd server` instance
by including the `[online_backup]` section in the `server.toml`.
This allows you to run regular backups, defined by a cron schedule, and maintain
the number of backup versions to keep. An example is located in
Automatic backups can be generated online by a `kanidmd server` instance by including the
`[online_backup]` section in the `server.toml`. This allows you to run regular backups, defined by a
cron schedule, and maintain the number of backup versions to keep. An example is located in
[examples/server.toml](https://github.com/kanidm/kanidm/blob/master/examples/server.toml).
## Method 2 - Manual Backup
This method uses the same process as the automatic backups, but is manually invoked. This can
be useful for pre-upgrade backups.
This method uses the same process as the automatic backups, but is manually invoked. This can be
useful for pre-upgrade backups.
To take the backup (assuming our docker environment) you first need to stop the instance:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd database backup -c /data/server.toml \
/backup/kanidm.backup.json
docker start <container name>
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd database backup -c /data/server.toml \
/backup/kanidm.backup.json
docker start <container name>
```
You can then restart your instance. DO NOT modify the backup.json as it may introduce
data errors into your instance.
You can then restart your instance. DO NOT modify the backup.json as it may introduce data errors
into your instance.
To restore from the backup:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd database restore -c /data/server.toml \
/backup/kanidm.backup.json
docker start <container name>
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
kanidm/server:latest /sbin/kanidmd database restore -c /data/server.toml \
/backup/kanidm.backup.json
docker start <container name>
```
## Method 3 - Manual Database Copy
This is a simple backup of the data volume.
docker stop <container name>
# Backup your docker's volume folder
docker start <container name>
```bash
docker stop <container name>
# Backup your docker's volume folder
docker start <container name>
```
Restoration is the reverse process.
View file
@ -1,15 +1,12 @@
# Choosing a Domain Name
Throughout this book, Kanidm will make reference to a "domain name". This is your
chosen DNS domain name that you intend to use for Kanidm. Choosing this domain name,
however, is not simple, as there are a number of considerations you need to be careful of.
Throughout this book, Kanidm will make reference to a "domain name". This is your chosen DNS domain
name that you intend to use for Kanidm. Choosing this domain name, however, is not simple, as there
are a number of considerations you need to be careful of.
{{#template
templates/kani-warning.md
imagepath=images/
title=Take note!
text=Incorrect choice of the domain name may have security impacts on your Kanidm instance, not limited to credential phishing, theft, session leaks and more. It is critical you follow the advice in this chapter.
}}
{{#template templates/kani-warning.md imagepath=images/ title=Take note! text=Incorrect choice of
the domain name may have security impacts on your Kanidm instance, not limited to credential
phishing, theft, session leaks and more. It is critical you follow the advice in this chapter. }}
## Considerations
@ -18,78 +15,79 @@ is not simple as there are a number of considerations you need to be careful of.
It is recommended you use a domain name within a domain that you own. While many examples list
`example.com` throughout this book, it is not recommended to use this outside of testing. Another
example of a risky domain to use is `local`. While it seems appealing to use these, because you do not
have unique ownership of these domains, if you move your machine to a foreign network, it is possible
you may leak credentials or other cookies to these domains. TLS in a majority of cases can and will
protect you from such leaks however, but it should not always be relied upon as a sole line of defence.
have unique ownership of these domains, if you move your machine to a foreign network, it is
possible you may leak credentials or other cookies to these domains. TLS in a majority of cases can
and will protect you from such leaks however, but it should not always be relied upon as a sole line
of defence.
Failure to use a unique domain you own may allow DNS hijacking or other credential leaks in some circumstances.
Failure to use a unique domain you own may allow DNS hijacking or other credential leaks in some
circumstances.
### Subdomains
Due to how web browsers and webauthn work, any matching domain name or subdomain of an effective domain
may have access to cookies within a browser session. An example is that `host.a.example.com` has access
to cookies from `a.example.com` and `example.com`.
Due to how web browsers and webauthn work, any matching domain name or subdomain of an effective
domain may have access to cookies within a browser session. An example is that `host.a.example.com`
has access to cookies from `a.example.com` and `example.com`.
For this reason your kanidm host (or hosts) should be on a unique subdomain, with no other services
registered under that subdomain. For example, consider `idm.example.com` as a subdomain for exclusive
use of kanidm. This is *inverse* to Active Directory which often has its domain name selected to be
the parent (toplevel) domain (`example.com`).
registered under that subdomain. For example, consider `idm.example.com` as a subdomain for
exclusive use of kanidm. This is _inverse_ to Active Directory which often has its domain name
selected to be the parent (toplevel) domain (`example.com`).
Failure to use a unique subdomain may allow cookies to leak to other entities within your domain, and
may allow webauthn to be used on entities you did not intend, which may lead to phishing
scenarios.
Failure to use a unique subdomain may allow cookies to leak to other entities within your domain,
and may allow webauthn to be used on entities you did not intend, which may lead to phishing
scenarios.
## Examples
### Good Domain Names
Consider we own `kanidm.com`. If we were to run geographical instances, and have testing environments,
the following domain and hostnames could be used.
Consider we own `kanidm.com`. If we were to run geographical instances, and have testing
environments, the following domain and hostnames could be used.
*production*
_production_
* origin: `https://idm.kanidm.com`
* domain name: `idm.kanidm.com`
* host names: `australia.idm.kanidm.com`, `newzealand.idm.kanidm.com`
- origin: `https://idm.kanidm.com`
- domain name: `idm.kanidm.com`
- host names: `australia.idm.kanidm.com`, `newzealand.idm.kanidm.com`
This allows us to have named geographical instances such as `https://australia.idm.kanidm.com` which
still works with webauthn and cookies which are transferable between instances.
It is critical no other hosts are registered under this domain name.
*testing*
_testing_
* origin: `https://idm.dev.kanidm.com`
* domain name: `idm.dev.kanidm.com`
* host names: `australia.idm.dev.kanidm.com`, `newzealand.idm.dev.kanidm.com`
- origin: `https://idm.dev.kanidm.com`
- domain name: `idm.dev.kanidm.com`
- host names: `australia.idm.dev.kanidm.com`, `newzealand.idm.dev.kanidm.com`
Note that due to the name being `idm.dev.kanidm.com` vs `idm.kanidm.com`, the testing instance is not
a subdomain of production, meaning the cookies and webauthn tokens can NOT be transferred between
them. This provides proper isolation between the instances.
Note that due to the name being `idm.dev.kanidm.com` vs `idm.kanidm.com`, the testing instance is
not a subdomain of production, meaning the cookies and webauthn tokens can NOT be transferred
between them. This provides proper isolation between the instances.
### Bad Domain Names
`idm.local` - This is a bad example as `.local` is an mDNS domain name suffix which means that client
machines, if they visit another network, *may* try to contact `idm.local` believing they are on their
usual network. If TLS verification were disabled, this would allow leaking of credentials.
`idm.local` - This is a bad example as `.local` is an mDNS domain name suffix which means that
client machines, if they visit another network, _may_ try to contact `idm.local` believing they are
on their usual network. If TLS verification were disabled, this would allow leaking of credentials.
`kanidm.com` - This is bad because the use of the top level domain means that any subdomain can
access the cookies issued by `kanidm.com`, effectively leaking them to all other hosts.
Second instance overlap:
*production*
_production_
* origin: `https://idm.kanidm.com`
* domain name: `idm.kanidm.com`
- origin: `https://idm.kanidm.com`
- domain name: `idm.kanidm.com`
*testing*
* origin: `https://dev.idm.kanidm.com`
* domain name: `dev.idm.kanidm.com`
While the production instance has a valid and well defined subdomain that doesn't conflict, because the
dev instance is a subdomain of production, it allows production cookies to leak to dev. Dev instances
may have weaker security controls in some cases which can then allow compromise of the production instance.
_testing_
- origin: `https://dev.idm.kanidm.com`
- domain name: `dev.idm.kanidm.com`
While the production instance has a valid and well defined subdomain that doesn't conflict, because
the dev instance is a subdomain of production, it allows production cookies to leak to dev. Dev
instances may have weaker security controls in some cases which can then allow compromise of the
production instance.
View file
@ -1,42 +1,55 @@
# Client tools
To interact with Kanidm as an administrator, you'll need to use our command
line tools. If you haven't installed them yet, [install them now](installing_client_tools.md).
To interact with Kanidm as an administrator, you'll need to use our command line tools. If you
haven't installed them yet, [install them now](installing_client_tools.md).
## Kanidm configuration
You can configure `kanidm` to help make commands simpler by modifying `~/.config/kanidm`
or `/etc/kanidm/config`.
You can configure `kanidm` to help make commands simpler by modifying `~/.config/kanidm` or
`/etc/kanidm/config`.
uri = "https://idm.example.com"
verify_ca = true|false
verify_hostnames = true|false
ca_path = "/path/to/ca.pem"
```toml
uri = "https://idm.example.com"
verify_ca = true|false
verify_hostnames = true|false
ca_path = "/path/to/ca.pem"
```
Once configured, you can test this with:
kanidm self whoami --name anonymous
```bash
kanidm self whoami --name anonymous
```
## Session Management
To authenticate as a user (for use with the command line), you need to use the `login` command
to establish a session token.
To authenticate as a user (for use with the command line), you need to use the `login` command to
establish a session token.
kanidm login --name USERNAME
kanidm login --name admin
```bash
kanidm login --name USERNAME
kanidm login --name admin
```
Once complete, you can use `kanidm` without re-authenticating for a period of time for administration.
Once complete, you can use `kanidm` without re-authenticating for a period of time for
administration.
You can list active sessions with:
kanidm session list
```bash
kanidm session list
```
Sessions will expire after a period of time (by default 1 hour). To remove these expired sessions
locally you can use:
kanidm session cleanup
```bash
kanidm session cleanup
```
To log out of a session:
kanidm logout --name USERNAME
kanidm logout --name admin
```bash
kanidm logout --name USERNAME
kanidm logout --name admin
```
View file
@ -2,51 +2,56 @@
## Reindexing
In some (rare) cases you may need to reindex.
Please note the server will sometimes reindex on startup as a result of the project
changing its internal schema definitions. This is normal and expected - you may never need
to start a reindex yourself as a result!
In some (rare) cases you may need to reindex. Please note the server will sometimes reindex on
startup as a result of the project changing its internal schema definitions. This is normal and
expected - you may never need to start a reindex yourself as a result!
You'll likely notice a need to reindex if you add indexes to schema and you see a message in
your logs such as:
You'll likely notice a need to reindex if you add indexes to schema and you see a message in your
logs such as:
Index EQUALITY name not found
Index {type} {attribute} not found
```
Index EQUALITY name not found
Index {type} {attribute} not found
```
This indicates that an index of type equality has been added for name, but the indexing process
has not been run. The server will continue to operate and the query execution code will correctly
process the query - however it will not be the optimal method of delivering the results as we need to
disregard this part of the query and act as though it's un-indexed.
This indicates that an index of type equality has been added for name, but the indexing process has
not been run. The server will continue to operate and the query execution code will correctly
process the query - however it will not be the optimal method of delivering the results as we need
to disregard this part of the query and act as though it's un-indexed.
Reindexing will resolve this by forcing all indexes to be recreated based on their schema
definitions (this works even though the schema is in the same database!)
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd reindex -c /data/server.toml
docker start <container name>
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd reindex -c /data/server.toml
docker start <container name>
```
Generally, reindexing is a rare action and should not normally be required.
## Vacuum
[Vacuuming](https://www.sqlite.org/lang_vacuum.html) is the process of reclaiming un-used pages
from the sqlite freelists, as well as performing some data reordering tasks that may make some
queries more efficient. It is recommended that you vacuum after a reindex is performed or
when you wish to reclaim space in the database file.
[Vacuuming](https://www.sqlite.org/lang_vacuum.html) is the process of reclaiming un-used pages from
the sqlite freelists, as well as performing some data reordering tasks that may make some queries
more efficient. It is recommended that you vacuum after a reindex is performed or when you wish to
reclaim space in the database file.
Vacuum is also able to change the pagesize of the database. After changing `db_fs_type` (which affects
pagesize) in server.toml, you must run a vacuum for this to take effect:
Vacuum is also able to change the pagesize of the database. After changing `db_fs_type` (which
affects pagesize) in server.toml, you must run a vacuum for this to take effect:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd vacuum -c /data/server.toml
docker start <container name>
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd vacuum -c /data/server.toml
docker start <container name>
```
## Verification
The server ships with a number of verification utilities to ensure that data is consistent, such
as referential integrity or memberof.
The server ships with a number of verification utilities to ensure that data is consistent, such as
referential integrity or memberof.
Note that verification really is a last resort - the server does _a lot_ to prevent and self-heal
from errors at run time, so you should rarely if ever require this utility. This utility was
@ -54,11 +59,11 @@ developed to guarantee consistency during development!
You can run a verification with:
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd verify -c /data/server.toml
docker start <container name>
```bash
docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd verify -c /data/server.toml
docker start <container name>
```
If you have errors, please contact the project to help support you to resolve these.
View file
@ -1,2 +1 @@
# Designs
View file
@ -1,22 +1,18 @@
# Access Profiles
Access Profiles
===============
Access Profiles (ACPs) are a way of expressing the set of actions which accounts are permitted to
perform on database records (`object`) in the system.
Access Profiles (ACPs) are a way of expressing the set of actions which accounts are
permitted to perform on database records (`object`) in the system.
As a result, there are specific requirements to what these can control and how they are expressed.
As a result, there are specific requirements to what these can control and how they are
expressed.
Access profiles define an action of `allow` or `deny`: `deny` has priority over `allow` and will
override even if applicable. They should only be created by system access profiles because certain
changes must be denied.
Access profiles define an action of `allow` or `deny`: `deny` has priority over `allow`
and will override even if applicable. They should only be created by system access profiles
because certain changes must be denied.
Access profiles are stored as entries and are dynamically loaded into a structure that is more
efficient for use at runtime. `Schema` and its transactions are a similar implementation.
Access profiles are stored as entries and are dynamically loaded into a structure that is
more efficient for use at runtime. `Schema` and its transactions are a similar implementation.
Search Requirements
-------------------
## Search Requirements
A search access profile must be able to limit:
@ -25,28 +21,28 @@ A search access profile must be able to limit:
An example:
> Alice should only be able to search for objects where the class is `person`
> and the object is a memberOf the group called "visible".
>
> Alice should only be able to see the attribute `displayName` for those
> users (not their `legalName`), and their public `email`.
> Alice should only be able to search for objects where the class is `person` and the object is a
> memberOf the group called "visible".
>
> Alice should only be able to see the attribute `displayName` for those users (not their
> `legalName`), and their public `email`.
Worded a bit differently. You need permission over the scope of entries, you need to be able
to read the attribute to filter on it, and you need to be able to read the attribute to receive
it in the result entry.
Worded a bit differently. You need permission over the scope of entries, you need to be able to read
the attribute to filter on it, and you need to be able to read the attribute to receive it in the
result entry.
If Alice searches for `(&(name=william)(secretdata=x))`, we should not allow this to
proceed because Alice doesn't have the rights to read secret data, so they should not be allowed
to filter on it. How does this work with two overlapping ACPs? For example: one that allows read
of name and description to class = group, and one that allows name to user. We don't want to
say `(&(name=x)(description=foo))` and it to be allowed, because we don't know the target class
of the filter. Do we "unmatch" all users because they have no access to the filter components? (Could
be done by inverting and putting in an AndNot of the non-matchable overlaps). Or do we just
filter out description from the users returned (but that implies they DID match, which is a disclosure).
If Alice searches for `(&(name=william)(secretdata=x))`, we should not allow this to proceed because
Alice doesn't have the rights to read secret data, so they should not be allowed to filter on it.
How does this work with two overlapping ACPs? For example: one that allows read of name and
description to class = group, and one that allows name to user. We don't want to say
`(&(name=x)(description=foo))` and it to be allowed, because we don't know the target class of the
filter. Do we "unmatch" all users because they have no access to the filter components? (Could be
done by inverting and putting in an AndNot of the non-matchable overlaps). Or do we just filter out
description from the users returned (but that implies they DID match, which is a disclosure).
More concretely:
```yaml
```
search {
action: allow
targetscope: Eq("class", "group")
@ -71,14 +67,14 @@ SearchRequest {
A potential defense is:
```yaml
```
acp class group: Pres(name) and Pres(desc) both in target attr, allow
acp class user: Pres(name) allow, Pres(desc) deny. Invert and Append
```
So the filter now is:
```yaml
```
And: {
AndNot: {
Eq("class", "user")
@ -94,7 +90,7 @@ This would now only allow access to the `name` and `description` of the class `g
If we extend this to a third, this would work. A more complex example:
```yaml
```
search {
action: allow
targetscope: Eq("class", "group")
@ -117,7 +113,7 @@ search {
Now we have a single user where we can read `description`. So the compiled filter above becomes:
```yaml
```
And: {
AndNot: {
Eq("class", "user")
@ -130,92 +126,92 @@ And: {
```
This would now be invalid, first, because we would see that `class=user` and `william` has no name
so that would be excluded also. We also may not even have "class=user" in the second ACP, so we can't
use subset filter matching to merge the two.
so that would be excluded also. We also may not even have "class=user" in the second ACP, so we
can't use subset filter matching to merge the two.
As a result, I think the only possible valid solution is to perform the initial filter, then determine
on the candidates if we *could* have valid access to filter on all required attributes. IE
this means even with an index look up, we still are required to perform some filter application
on the candidates.
As a result, I think the only possible valid solution is to perform the initial filter, then
determine on the candidates if we _could_ have valid access to filter on all required
attributes. IE this means even with an index look up, we still are required to perform some filter
application on the candidates.
I think this will mean on a possible candidate, we have to apply all ACP, then create a union of
the resulting targetattrs, and then compare that set against the set of attributes in the filter.
I think this will mean on a possible candidate, we have to apply all ACP, then create a union of the
resulting targetattrs, and then compare that set against the set of attributes in the filter.
This will be slow on large candidate sets (potentially), but could be sped up with parallelism, caching
or other methods. However, in the same step, we can also apply the step of extracting only the allowed
read target attrs, so this is a valuable exercise.
This will be slow on large candidate sets (potentially), but could be sped up with parallelism,
caching or other methods. However, in the same step, we can also apply the step of extracting only
the allowed read target attrs, so this is a valuable exercise.
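
A minimal sketch of that per-candidate check, under assumed type names rather than the real ACP
structures: union the target attributes of every profile that applies to the entry, then require
the attributes used in the filter to be a subset of that union:

```rust
use std::collections::HashSet;

// Assumed shape: an ACP that has already matched this entry's scope grants
// read over a set of target attributes.
struct MatchedAcp {
    target_attrs: HashSet<String>,
}

/// The filter may only proceed against this candidate if every attribute it
/// references is readable under the union of the applicable profiles.
fn filter_attrs_allowed(acps: &[MatchedAcp], filter_attrs: &HashSet<String>) -> bool {
    let allowed: HashSet<&String> = acps.iter().flat_map(|a| a.target_attrs.iter()).collect();
    filter_attrs.iter().all(|attr| allowed.contains(attr))
}
```

The same union can then be reused to extract only the allowed read target attrs, as noted above.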
Delete Requirements
-------------------
## Delete Requirements
A `delete` profile must contain the `content` and `scope` of a delete.
An example:
> Alice should only be able to delete objects where the `memberOf` is
> `purgeable`, and where they are not marked as `protected`.
> Alice should only be able to delete objects where the `memberOf` is `purgeable`, and where they
> are not marked as `protected`.
Create Requirements
-------------------
## Create Requirements
A `create` profile defines the following limits to what objects can be created, through the combination of filters and attributes.
A `create` profile defines the following limits to what objects can be created, through the
combination of filters and attributes.
An example:
> Alice should only be able to create objects where the `class` is `group`, and can
> only name the group, but they cannot add members to the group.
> Alice should only be able to create objects where the `class` is `group`, and can only name the
> group, but they cannot add members to the group.
An example of a content requirement could be something like "the value of an attribute must pass a regular expression filter".
This could limit a user to creating a group of any name, except where the group's name contains "admin".
This is a contrived example which is also possible with filtering, but more complex requirements are possible.
An example of a content requirement could be something like "the value of an attribute must pass a
regular expression filter". This could limit a user to creating a group of any name, except where
the group's name contains "admin". This is a contrived example which is also possible with filtering,
but more complex requirements are possible.
For example, we want to be able to limit the classes that someone *could* create on an object
For example, we want to be able to limit the classes that someone _could_ create on an object
because classes often are used in security rules.
Modify Requirements
-------------------
## Modify Requirements
A `modify` profile defines the following limits:
- a filter for which objects can be modified,
- a set of attributes which can be modified.
A `modify` profile defines a limit on the `modlist` actions.
For example: you may only be allowed to ensure `presence` of a value. (Modify allowing purge, not-present, and presence).
For example: you may only be allowed to ensure `presence` of a value. (Modify allowing purge,
not-present, and presence).
Content requirements (see [Create Requirements](#create-requirements)) are out of scope at the moment.
Content requirements (see [Create Requirements](#create-requirements)) are out of scope at the
moment.
An example:
> Alice should only be able to modify a user's password if that user is a member of the
> students group.
> Alice should only be able to modify a user's password if that user is a member of the students
> group.
**Note:** `modify` does not imply `read` of the attribute. Care should be taken that we don't disclose
the current value in any error messages if the operation fails.
**Note:** `modify` does not imply `read` of the attribute. Care should be taken that we don't
disclose the current value in any error messages if the operation fails.
Targeting Requirements
-----------------------
## Targeting Requirements
The `target` of an access profile should be a filter defining the objects that this applies to.
The filter limit for the profiles of what they are acting on requires a single special operation
which is the concept of "targeting self".
For example: we could define a rule that says "members of group X are allowed self-write to the `mobilePhoneNumber` attribute".
For example: we could define a rule that says "members of group X are allowed self-write to the
`mobilePhoneNumber` attribute".
An extension to the filter code could allow an extra filter enum of `self`, that would allow this
to operate correctly, and would consume the entry in the event as the target of "Self". This would
be best implemented as a compilation of `self -> eq(uuid, self.uuid)`.
An extension to the filter code could allow an extra filter enum of `self`, that would allow this to
operate correctly, and would consume the entry in the event as the target of "Self". This would be
best implemented as a compilation of `self -> eq(uuid, self.uuid)`.
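
A sketch of that compilation step with invented types (recursion into nested `And`/`Or` terms is
elided here):

```rust
// A tiny stand-in for the filter AST; not the real kanidm filter type.
enum Filter {
    SelfEntry,                // the `self` keyword as written in the profile
    Eq(&'static str, String), // attribute, value
}

/// Compile `self` away using the identity attached to the event, so the
/// rest of the filter pipeline never needs a special case for it.
fn resolve_self(f: Filter, event_uuid: &str) -> Filter {
    match f {
        Filter::SelfEntry => Filter::Eq("uuid", event_uuid.to_string()),
        other => other,
    }
}
```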
Implementation Details
----------------------
## Implementation Details
CHANGE: Receiver should be a group, and should be single value/multivalue? Can *only* be a group.
CHANGE: Receiver should be a group, and should be single value/multivalue? Can _only_ be a group.
Example profiles:
```yaml
```
search {
action: allow
receiver: Eq("memberof", "admins")
@ -307,45 +303,43 @@ modify {
}
```
Formalised Schema
-----------------
## Formalised Schema
A complete schema would be:
### Attributes
| Name | Single/Multi | Type | Description |
| --- | --- | --- | |
| acp_allow | single value | bool | |
| acp_enable | single value | bool | This ACP is enabled |
| acp_receiver | single value | filter | ??? |
| acp_targetscope | single value | filter | ??? |
| acp_search_attr | multi value | utf8 case insense | A list of attributes that can be searched. |
| acp_create_class | multi value | utf8 case insense | Object classes in which an object can be created. |
| acp_create_attr | multi value | utf8 case insense | Attribute Entries that can be created. |
| acp_modify_removedattr | multi value | utf8 case insense | Modify if removed? |
| acp_modify_presentattr | multi value | utf8 case insense | ??? |
| acp_modify_class | multi value | utf8 case insense | ??? |
| Name | Single/Multi | Type | Description |
| ---------------------- | ------------ | ----------------- | ------------------------------------------------- |
| acp_allow | single value | bool | |
| acp_enable | single value | bool | This ACP is enabled |
| acp_receiver | single value | filter | ??? |
| acp_targetscope | single value | filter | ??? |
| acp_search_attr | multi value | utf8 case insense | A list of attributes that can be searched. |
| acp_create_class | multi value | utf8 case insense | Object classes in which an object can be created. |
| acp_create_attr | multi value | utf8 case insense | Attribute Entries that can be created. |
| acp_modify_removedattr | multi value | utf8 case insense | Modify if removed? |
| acp_modify_presentattr | multi value | utf8 case insense | ??? |
| acp_modify_class | multi value | utf8 case insense | ??? |
### Classes
| Name | Must Have | May Have |
| --- | --- | --- |
| access_control_profile | `[acp_receiver, acp_targetscope]` | `[description, acp_allow]` |
| access_control_search | `[acp_search_attr]` | |
| access_control_delete | | |
| Name | Must Have | May Have |
| ---------------------- | --------------------------------- | -------------------------------------------------------------------- |
| access_control_profile | `[acp_receiver, acp_targetscope]` | `[description, acp_allow]` |
| access_control_search | `[acp_search_attr]` | |
| access_control_delete | | |
| access_control_modify | | `[acp_modify_removedattr, acp_modify_presentattr, acp_modify_class]` |
| access_control_create | | `[acp_create_class, acp_create_attr]` |
| access_control_create | | `[acp_create_class, acp_create_attr]` |
**Important**: empty sets really mean empty sets!
The ACP code will assert that both `access_control_profile` *and* one of the `search/delete/modify/create`
classes exists on an ACP. An important factor of this design is now the ability to *compose*
multiple ACP's into a single entry allowing a `create/delete/modify` to exist! However, each one must
still list their respective actions to allow proper granularity.
The ACP code will assert that both `access_control_profile` _and_ one of the
`search/delete/modify/create` classes exists on an ACP. An important factor of this design is now
the ability to _compose_ multiple ACP's into a single entry allowing a `create/delete/modify` to
exist! However, each one must still list their respective actions to allow proper granularity.
"Search" Application
------------------
## "Search" Application
The set of access controls is checked, and the set where receiver matches the current identified
user is collected. These then are added to the user's requested search as:
@ -359,8 +353,8 @@ required search profile filters, the outer `And` condition is nullified and no r
Once complete, in the translation of the entry -> proto_entry, each access control and its allowed
set of attrs has to be checked to determine what of that entry can be displayed. Consider there are
three entries, A, B, C. An ACI that allows read of "name" on A, B exists, and a read of "mail" on
B, C. The correct behaviour is then:
three entries, A, B, C. An ACI that allows read of "name" on A, B exists, and a read of "mail" on B,
C. The correct behaviour is then:
```
A: name
@ -369,11 +363,10 @@ C: mail
```
So this means that the `entry -> proto entry` part is likely the most expensive part of the access
control operation, but also one of the most important. It may be possible to compile to some kind
of faster method, but initially a simple version is needed.
control operation, but also one of the most important. It may be possible to compile to some kind of
faster method, but initially a simple version is needed.
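
Sketched with assumed types, that reduction is a per-entry set filter; with the A, B, C example
above, B is reduced against {name, mail} while A is reduced against {name} alone:

```rust
use std::collections::{BTreeMap, HashSet};

// Assumed representation: attribute name -> values.
type Entry = BTreeMap<String, Vec<String>>;

/// Keep only the attributes the union of matching ACPs lets the caller read.
/// This runs once per returned entry, which is why it dominates the cost.
fn reduce_to_proto(entry: &Entry, allowed: &HashSet<String>) -> Entry {
    entry
        .iter()
        .filter(|(attr, _)| allowed.contains(*attr))
        .map(|(attr, values)| (attr.clone(), values.clone()))
        .collect()
}
```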
"Delete" Application
------------------
## "Delete" Application
Delete is similar to search, however there is the risk that the user may say something like:
@ -381,8 +374,8 @@ Delete is similar to search, however there is the risk that the user may say som
Pres("class").
```
Were we to approach this like search, this would then have "every thing the identified user
is allowed to delete, is deleted". A consideration here is that `Pres("class")` would delete "all"
Were we to approach this like search, this would then have "every thing the identified user is
allowed to delete, is deleted". A consideration here is that `Pres("class")` would delete "all"
objects in the directory, but with the access control present, it would limit the deletion to the
set of allowed deletes.
@ -390,23 +383,24 @@ In a sense this is a correct behaviour - they were allowed to delete everything
delete. However, in another it's not valid: the request was broad and they were not allowed access
to delete everything they requested.
The possible abuse vector here is that an attacker could then use delete requests to enumerate the
existence of entries in the database that they do not have access to. This requires someone to have
the delete privilege which in itself is very high level of access, so this risk may be minimal.
So the choices are:
1. Treat it like search and allow the user to delete what they are allowed to delete,
but ignore other objects
2. Deny the request because their delete was too broad, and they must specify a valid deletion request.
1. Treat it like search and allow the user to delete what they are allowed to delete, but ignore
other objects
2. Deny the request because their delete was too broad, and they must specify a valid deletion
request.
Option #2 seems more correct because the `delete` request is an explicit request, not a request where
you want partial results. Imagine someone wants to delete users A and B at the same time, but only
has access to A. They want this request to fail so they KNOW B was not deleted, rather than it
Option #2 seems more correct because the `delete` request is an explicit request, not a request
where you want partial results. Imagine someone wants to delete users A and B at the same time, but
only has access to A. They want this request to fail so they KNOW B was not deleted, rather than it
succeed and have B still exist with a partial delete status.
However, a possible issue is that Option #2 means that a delete request of
`And(Eq(attr, allowed_attribute), Eq(attr, denied))`, which is rejected, may indicate presence of the
`denied` attribute. So option #1 may help in preventing a security risk of information disclosure.
<!-- TODO
@ -414,64 +408,60 @@ However, a possible issue is that Option #2 means that a delete request of
that would depend if the response was "invalid" in both cases, or "invalid" / "refused"
-->
This is also a concern for modification, where the modification attempt may or may not
fail depending on the entries and if you can/can't see them.
**IDEA:** You can only `delete`/`modify` within the read scope you have. If you can't
read it (based on the read rules of `search`), you can't `delete` it. This is in addition to the filter
rules of the `delete` applying as well. So performing a `delete` of `Pres(class)`, will only delete
in your `read` scope and will never disclose if you are denied access.
This is also a concern for modification, where the modification attempt may or may not fail
depending on the entries and if you can/can't see them.
**IDEA:** You can only `delete`/`modify` within the read scope you have. If you can't read it (based
on the read rules of `search`), you can't `delete` it. This is in addition to the filter rules of
the `delete` applying as well. So performing a `delete` of `Pres(class)`, will only delete in your
`read` scope and will never disclose if you are denied access.
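
Sketched with invented helpers, the idea is an intersection of the matched candidates with the
caller's read scope before the delete scope is even consulted:

```rust
/// A delete never acts on (or reveals) an entry the caller cannot read:
/// candidates failing the read check drop out exactly as if they had never
/// matched, so no denied entry's existence is disclosed.
fn delete_set(
    matched: Vec<u64>,
    can_read: impl Fn(u64) -> bool,
    can_delete: impl Fn(u64) -> bool,
) -> Vec<u64> {
    matched
        .into_iter()
        .filter(|id| can_read(*id) && can_delete(*id))
        .collect()
}
```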
<!-- TODO
@yaleman: This goes back to the commentary on Option #2 and feels icky like SQL's `DELETE FROM <table>` just deleting everything. It's more complex from the client - you have to search for a set of things to delete - then delete them.
Explicitly listing the objects you want to delete feels.... way less bad. This applies to modifies too. 😁
-->
"Create" Application
------------------
## "Create" Application
Create seems like the easiest to apply. Ensure that only the attributes in `createattr` are in the
`createevent`, ensure the classes only contain the set in `createclass`, then finally apply
`filter_no_index` to the entry. If all of this passes, the create is allowed.
A key point is that there is no union of `create` ACI's - the WHOLE ACI must pass, not parts of
multiple. This means if a control says "allows creating group with member" and "allows creating
user with name", creating a group with `name` is not allowed - despite your ability to create
an entry with `name`, its classes don't match. This way, the administrator of the service can define
create controls with specific intent for how they will be used without the risk of two
controls causing unintended effects (`users` that are also `groups`, or allowing invalid values).
multiple. This means if a control says "allows creating group with member" and "allows creating user
with name", creating a group with `name` is not allowed - despite your ability to create an entry
with `name`, its classes don't match. This way, the administrator of the service can define create
controls with specific intent for how they will be used without the risk of two controls causing
unintended effects (`users` that are also `groups`, or allowing invalid values).
An important consideration is how to handle overlapping ACI. If two ACI *could* match the create
An important consideration is how to handle overlapping ACI. If two ACI _could_ match the create
should we enforce both conditions are upheld? Or only a single upheld ACI allows the create?
In some cases it may not be possible to satisfy both, and that would block creates. The intent
of the access profile is that "something like this CAN" be created, so I believe that provided
only a single control passes, the create should be allowed.
In some cases it may not be possible to satisfy both, and that would block creates. The intent of
the access profile is that "something like this CAN" be created, so I believe that provided only a
single control passes, the create should be allowed.
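
A sketch of the whole-ACI rule with assumed field names: a create passes only when at least one
single profile accepts the attributes, the classes, and the target filter together, with no mixing
between profiles:

```rust
use std::collections::HashSet;

struct CreateAcp {
    create_attr: HashSet<String>,  // attributes this profile may create
    create_class: HashSet<String>, // classes this profile may create
}

fn create_allowed(
    acps: &[CreateAcp],
    entry_attrs: &HashSet<String>,
    entry_classes: &HashSet<String>,
    matches_targetscope: impl Fn(&CreateAcp) -> bool,
) -> bool {
    // `any`, not a union: the WHOLE profile must pass on its own.
    acps.iter().any(|acp| {
        entry_attrs.is_subset(&acp.create_attr)
            && entry_classes.is_subset(&acp.create_class)
            && matches_targetscope(acp)
    })
}
```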
"Modify" Application
------------------
## "Modify" Application
Modify is similar to Create, however we specifically filter on the `modlist` action of `present`,
`removed` or `purged`. The rules of create still apply; provided all requirements
of the modify are permitted, then it is allowed once at least one profile allows the change.
`removed` or `purged`. The rules of create still apply; provided all requirements of
the modify are permitted, then it is allowed once at least one profile allows the change.
A key difference is that if the modify ACP lists multiple `presentattr` types, the modify request
is valid if it is only modifying one attribute. IE we say `presentattr: name, email`, but we
only attempt to modify `email`.
A key difference is that if the modify ACP lists multiple `presentattr` types, the modify request is
valid if it is only modifying one attribute. IE we say `presentattr: name, email`, but we only
attempt to modify `email`.
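
Sketched with invented types (treating `purged` as governed by the removed-attribute set is an
assumption of this sketch, not a statement of the real schema):

```rust
use std::collections::HashSet;

enum Mod {
    Present(String, String), // attr, value
    Removed(String, String),
    Purged(String),
}

struct ModifyAcp {
    present_attr: HashSet<String>,
    removed_attr: HashSet<String>,
}

/// One profile must permit every action in the modlist; an action touching
/// an attribute the profile does not list causes that profile to fail.
fn modify_allowed(acps: &[ModifyAcp], modlist: &[Mod]) -> bool {
    acps.iter().any(|acp| {
        modlist.iter().all(|m| match m {
            Mod::Present(attr, _) => acp.present_attr.contains(attr),
            Mod::Removed(attr, _) | Mod::Purged(attr) => acp.removed_attr.contains(attr),
        })
    })
}
```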
Considerations
--------------
## Considerations
* When should access controls be applied? During an operation, we only validate schema after
pre* Plugin application, so likely it has to be "at that point", to ensure schema-based
validity of the entries that are allowed to be changed.
* Self filter keyword should compile to `eq("uuid", "....")`. When do we do this and how?
* `memberof` could take `name` or `uuid`, we need to be able to resolve this correctly, but this is
- When should access controls be applied? During an operation, we only validate schema after pre*
Plugin application, so likely it has to be "at that point", to ensure schema-based validity of the
entries that are allowed to be changed.
- Self filter keyword should compile to `eq("uuid", "....")`. When do we do this and how?
- `memberof` could take `name` or `uuid`, we need to be able to resolve this correctly, but this is
likely an issue in `memberof` which needs to be addressed, ie `memberof uuid` vs `memberof attr`.
* Content controls in `create` and `modify` will be important to get right to avoid the security issues
of LDAP access controls. Given that `class` has special importance, it's only right to give it extra
consideration in these controls.
* In the future when `recyclebin` is added, a `re-animation` access profile should be created allowing
revival of entries given certain conditions of the entry we are attempting to revive. A service-desk user
should not be able to revive a deleted high-privilege user.
- Content controls in `create` and `modify` will be important to get right to avoid the security
issues of LDAP access controls. Given that `class` has special importance, it's only right to give
it extra consideration in these controls.
- In the future when `recyclebin` is added, a `re-animation` access profile should be created
allowing revival of entries given certain conditions of the entry we are attempting to revive. A
service-desk user should not be able to revive a deleted high-privilege user.
View file
@ -1,4 +1,3 @@
# Access Profiles Rework 2022
Access controls are critical for a project like Kanidm to determine who can access what on other
@ -10,69 +9,71 @@ a complete and useful IDM.
The original design of the access control system was intended to satisfy our need for flexibility,
but we have begun to discover a number of limitations. The design incorporating filter queries makes
them hard to administer as we have not often publicly talked about the filter language and how it
internally works. Because of their use of filters it is hard to see on an entry "what" access controls
will apply to entries, making it hard to audit without actually calling the ACP subsystem. Currently
the access control system has a large impact on performance, accounting for nearly 35% of the time taken
in a search operation.
internally works. Because of their use of filters it is hard to see on an entry "what" access
controls will apply to entries, making it hard to audit without actually calling the ACP subsystem.
Currently the access control system has a large impact on performance, accounting for nearly 35% of
the time taken in a search operation.
Additionally, the default access controls that we supply have started to run into limits and rough cases
due to changes as we have improved features. Some of this was due to limited design with use cases
in mind during development.
Additionally, the default access controls that we supply have started to run into limits and rough
cases due to changes as we have improved features. Some of this was due to limited design with use
cases in mind during development.
To resolve this, a number of coordinating features need implementation to improve this situation. These
features will be documented *first*, and the use cases *second* with each use case linking to the
features that satisfy it.
To resolve this, a number of coordinating features need implementation to improve this situation.
These features will be documented _first_, and the use cases _second_ with each use case linking to
the features that satisfy it.
## Required Features to Satisfy
### Refactor of default access controls
The current default privileges will need to be refactored to improve separation of privilege
and improved delegation of finer access rights.
The current default privileges will need to be refactored to improve separation of privilege and
improved delegation of finer access rights.
### Access profiles target specifiers instead of filters
Access profiles should target a list of groups for who the access profile applies to, and who receives
the access it is granting.
Access profiles should target a list of groups for who the access profile applies to, and who
receives the access it is granting.
Alternatively an access profile could target "self" so that self-update rules can still be expressed.
An access profile could target an oauth2 definition for the purpose of allowing reads to members
of a set of scopes that can access the service.
An access profile could target an oauth2 definition for the purpose of allowing reads to members of
a set of scopes that can access the service.
The access profile receiver would be group based only. This allows specifying that "X group of members
can write self" meaning that any member of that group can write to themself and only themself.
The access profile receiver would be group based only. This allows specifying that "X group of
members can write self" meaning that any member of that group can write to themself and only
themself.
In the future we could also create different target/receiver specifiers to allow other extended management
and delegation scenarios. This improves the situation making things more flexible from the current
filter system. It also may allow filters to be simplified to remove the SELF uuid resolve step in some cases.
In the future we could also create different target/receiver specifiers to allow other extended
management and delegation scenarios. This improves the situation making things more flexible from
the current filter system. It also may allow filters to be simplified to remove the SELF uuid
resolve step in some cases.
### Filter based groups
These are groups whose members are dynamically allocated based on a filter query. This allows a similar
level of dynamic group management as we have currently with access profiles, but with the additional
ability for them to be used outside of the access control context. This is the "bridge" allowing us to
move from filter based access controls to "group" targeted.
These are groups whose members are dynamically allocated based on a filter query. This allows a
similar level of dynamic group management as we have currently with access profiles, but with the
additional ability for them to be used outside of the access control context. This is the "bridge"
allowing us to move from filter based access controls to "group" targeted.
A risk of filter based groups is "infinite churn" because of recursion. This can occur if you
had a rule such as "and not memberof = self" on a dynamic group. Because of this, filters on
dynamic groups may not use "memberof" unless they are internally provided by the kanidm project so
that we can vet these rules as correct and without creating infinite recursion scenarios.
A risk of filter based groups is "infinite churn" because of recursion. This can occur if you had a
rule such as "and not memberof = self" on a dynamic group. Because of this, filters on dynamic groups
may not use "memberof" unless they are internally provided by the kanidm project so that we can vet
these rules as correct and without creating infinite recursion scenarios.
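
A sketch of how such a vetting rule could look, with an invented filter type: walk the
user-supplied filter and reject it outright if any term references `memberof`:

```rust
// Stand-in filter AST for illustration only.
enum GroupFilter {
    Eq(String, String),
    And(Vec<GroupFilter>),
    AndNot(Box<GroupFilter>),
}

/// Returns true if the filter touches memberof anywhere, in which case a
/// user-supplied dynamic group definition would be refused.
fn references_memberof(f: &GroupFilter) -> bool {
    match f {
        GroupFilter::Eq(attr, _) => attr.eq_ignore_ascii_case("memberof"),
        GroupFilter::And(parts) => parts.iter().any(references_memberof),
        GroupFilter::AndNot(inner) => references_memberof(inner),
    }
}
```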
### Access rules extracted to ACI entries on targets
The access control profiles are an excellent way to administer access where you can specify who
has access to what, but it makes it harder for the reverse query which is "who has access to this
specific entity". Since this is needed for both search and auditing, by specifying our access profiles
in the current manner, but using them to generate ACE rules on the target entry will allow the search
and audit paths to answer the question of "who has access to this entity" much faster.
specific entity". Since this is needed for both search and auditing, by specifying our access
profiles in the current manner, but using them to generate ACE rules on the target entry will allow
the search and audit paths to answer the question of "who has access to this entity" much faster.
### Sudo Mode
A flag should exist on a session defining "sudo" mode which requires a special account policy membership
OR a re-authentication to enable. This sudo flag is a time window on a session token which can
allow/disallow certain behaviours. It would be necessary for all write paths to have access to this
value.
A flag should exist on a session defining "sudo" mode which requires a special account policy
membership OR a re-authentication to enable. This sudo flag is a time window on a session token
which can allow/disallow certain behaviours. It would be necessary for all write paths to have
access to this value.
### Account Policy
@ -84,13 +85,14 @@ mode and this enforces rules on session expiry.
### Default Roles / Separation of Privilege
By default we attempt to separate privileges so that "no single account" has complete authority
over the system.
By default we attempt to separate privileges so that "no single account" has complete authority over
the system.
Satisfied by:
* Refactor of default access controls
* Filter based groups
* Sudo Mode
- Refactor of default access controls
- Filter based groups
- Sudo Mode
#### System Admin
@ -99,39 +101,39 @@ users or accounts.
The "admins" role is responsible to manage:
* The name of the domain
* Configuration of the servers and replication
* Management of external integrations (oauth2)
- The name of the domain
- Configuration of the servers and replication
- Management of external integrations (oauth2)
#### Service Account Admin
The role would be called "sa\_admins" and would be responsible for top level management of service
accounts, and delegating authority for service account administration to managing users.
* Create service accounts
* Delegate service account management to owners groups
* Migrate service accounts to persons
- Create service accounts
- Delegate service account management to owners groups
- Migrate service accounts to persons
The service account admin is capable of migrating service accounts to persons as it is "yielding"
control of the entity, rather than an idm admin "taking" the entity which may have security impacts.
#### Service Desk
This role manages a subset of persons. The helpdesk roles are precluded from modification of
"higher privilege" roles like service account, identity and system admins. This is due to potential
This role manages a subset of persons. The helpdesk roles are precluded from modification of "higher
privilege" roles like service account, identity and system admins. This is due to potential
privilege escalation attacks.
* Can create credential reset links
* Can lock and unlock accounts and their expiry.
- Can create credential reset links
- Can lock and unlock accounts and their expiry.
#### Idm Admin
This role manages identities, or more specifically person accounts. In addition it is a
"high privilege" service desk role and can manage high privilege users as well.
This role manages identities, or more specifically person accounts. In addition it is a "high
privilege" service desk role and can manage high privilege users as well.
* Create persons
* Modify and manage persons
* All roles of service desk for all persons
- Create persons
- Modify and manage persons
- All roles of service desk for all persons
### Self Write / Write Privilege
@ -146,19 +148,19 @@ authentication sessions as a result of this.
Satisfied by:
* Access profiles target specifiers instead of filters
* Sudo Mode
- Access profiles target specifiers instead of filters
- Sudo Mode
### Oauth2 Service Read (Nice to Have)
For ux/ui integration, being able to list oauth2 applications that are accessible to the user
would be a good feature. To limit "who" can see the oauth2 applications that an account can access,
we need a way to "allow read" by proxy of the related users of the oauth2 service. This will require
access controls to be able to interpret the oauth2 config and provide rights based on that.
For ux/ui integration, being able to list oauth2 applications that are accessible to the user would
be a good feature. To limit "who" can see the oauth2 applications that an account can access, we
need a way to "allow read" by proxy of the related users of the oauth2 service. This will require
access controls to be able to interpret the oauth2 config and provide rights based on that.
Satisfied by:
* Access profiles target specifiers instead of filters
- Access profiles target specifiers instead of filters
### Administration
@ -166,9 +168,9 @@ Access controls should be easier to manage and administer, and should be group b
filter based. This will make it easier for administrators to create and define their own access
rules.
* Refactor of default access controls
* Access profiles target specifiers instead of filters
* Filter based groups
- Refactor of default access controls
- Access profiles target specifiers instead of filters
- Filter based groups
### Service Account Access
@ -176,17 +178,15 @@ Service accounts should be able to be "delegated" administration, where a group
a service account. This should not require administrators to create unique access controls for each
service account, but a method to allow mapping of the service account to "who manages it".
* Sudo Mode
* Account Policy
* Access profiles target specifiers instead of filters
* Refactor of default access controls
- Sudo Mode
- Account Policy
- Access profiles target specifiers instead of filters
- Refactor of default access controls
### Auditing of Access
It should be easier to audit who has access to what by inspecting the entry to view what can access
it.
* Access rules extracted to ACI entries on targets
* Access profiles target specifiers instead of filters
- Access rules extracted to ACI entries on targets
- Access profiles target specifiers instead of filters
View file
@ -1,69 +1,60 @@
# Oauth2 Application Listing
Oauth2 Application Listing
==========================
A feature of some other IDM systems is to also double as a portal to linked applications. This
allows a convenient access point for users to discover and access linked applications without having
to navigate to them manually. This naturally works quite well since it means that the user is
already authenticated, and the IDM becomes the single "gateway" to accessing other applications.
A feature of some other IDM systems is to also double as a portal to linked applications. This allows
a convenient access point for users to discover and access linked applications without having to
navigate to them manually. This naturally works quite well since it means that the user is already
authenticated, and the IDM becomes the single "gateway" to accessing other applications.
## How it should look
How it should look
------------------
- The user should ONLY see a list of applications they _can_ access
- The user should see a list of applications with "friendly" display names
- The list of applications _may_ have an icon/logo
- Clicking the application should take them to the location
* The user should ONLY see a list of applications they *can* access
* The user should see a list of applications with "friendly" display names
* The list of applications *may* have an icon/logo
* Clicking the application should take them to the location
## Access Control
The current design of the oauth2 resource servers (oauth2rs) is modeled around what the oauth2
protocol requires. This defines that in an oauth2 request, all of the requested scopes need to be
granted, else it cannot proceed. The current design is:
Access Control
--------------
The current design of the oauth2 resource servers (oauth2rs) is modeled around what
the oauth2 protocol requires. This defines that in an oauth2 request, all of the requested
scopes need to be granted, else it cannot proceed. The current design is:
* scope maps - a relation of groups to the set of scopes that they grant
* implicit scopes - a set of scopes granted to all persons
- scope maps - a relation of groups to the set of scopes that they grant
- implicit scopes - a set of scopes granted to all persons
While this works well for the oauth2 authorisation design, it doesn't work well from the kanidm side
for managing *our* knowledge of who is granted access to the application.
for managing _our_ knowledge of who is granted access to the application.
In order to limit who can see what applications we will need a new method to define who is allowed
access to the resource server on the kanidm side, while also preserving oauth2 semantics.
To fix this, the current definition of scopes on oauth2 resource servers needs to change.
* access scopes - a list of scopes (similar to implicit) that are used by the resource server for granting access to the resource.
* access members - a list of groups that are granted access
* supplementary scopes - definitions of scope maps that grant scopes which are not access related, but may provide extra details for the account using the resource
- access scopes - a list of scopes (similar to implicit) that are used by the resource server for
granting access to the resource.
- access members - a list of groups that are granted access
- supplementary scopes - definitions of scope maps that grant scopes which are not access related,
but may provide extra details for the account using the resource
By changing to this method this removes the arbitrary implicit scope/scope map rules, and clearly
defines the set of scopes that grant access to the application, while also allowing extended scopes
to be sent that can attenuate the application behaviour. This also allows the access members reference
defines the set of scopes that grant access to the application, while also allowing extended scopes to
be sent that can attenuate the application behaviour. This also allows the access members reference
to be used to generate knowledge on the kanidm side of "who can access this oauth2 resource". This
can be used to limit the listed applications to these oauth2 applications. In addition we can then
use these access members to create access controls to strictly limit who can see what oauth2 applications
to the admins of oauth2 applications, and the users of them.
use these access members to create access controls to strictly limit who can see what oauth2
applications to the admins of oauth2 applications, and the users of them.
To support this, we should allow dynamic groups to be created so that the 'implicit scope' behaviour
which allows all persons to access an application can be emulated by making all persons a member of access members.
which allows all persons to access an application can be emulated by making all persons a member of
access members.
Migration of the current scopes and implicit scopes is likely not possible with this change, so we
may have to delete these which will require admins to re-configure these permissions, but that is a better
option than allowing "too much" access.
may have to delete these which will require admins to re-configure these permissions, but that is a
better option than allowing "too much" access.
Display Names / Logos
---------------------
## Display Names / Logos
Display names already exist.
Logos will require upload and storage. A binary type exists in the db that can be used for storing
blobs, or we could store something like svg. I think it's too risky to "validate" images in these
uploads, so we could just store the blob and display it?
View file
@ -1,15 +1,13 @@
# REST Interface
{{#template
../../templates/kani-warning.md
imagepath=../../images/
title=Note!
text=Here begins some early notes on the REST interface - much better ones are in the repository's designs directory.
}}
{{#template\
../../templates/kani-warning.md imagepath=../../images/ title=Note! text=Here begins some early
notes on the REST interface - much better ones are in the repository's designs directory. }}
There's an endpoint at `/<api_version>/routemap` (for example, https://localhost/v1/routemap) which is based on the API routes as they get instantiated.
There's an endpoint at `/<api_version>/routemap` (for example, https://localhost/v1/routemap) which
is based on the API routes as they get instantiated.
It's *very, very, very* early work, and should not be considered stable at all.
It's _very, very, very_ early work, and should not be considered stable at all.
An example of some elements of the output is below:
@ -46,4 +44,4 @@ An example of some elements of the output is below:
}
]
}
```
```
View file
@ -1,24 +1,31 @@
# Scim and Migration Tooling
We need to be able to synchronise content from other directory or identity management systems.
To do this, we need the capability to have "pluggable" synchronisation drivers. This is because
not all deployments will be able to use our generic versions, or may have customisations they
wish to perform that are unique to them.
We need to be able to synchronise content from other directory or identity management systems. To do
this, we need the capability to have "pluggable" synchronisation drivers. This is because not all
deployments will be able to use our generic versions, or may have customisations they wish to
perform that are unique to them.
To achieve this we need a layer of separation - This effectively becomes an "extract, transform,
load" process. In addition this process must be *stateful* where it can be run multiple times
or even continuously and it will bring kanidm into synchronisation.
load" process. In addition this process must be _stateful_ where it can be run multiple times or
even continuously and it will bring kanidm into synchronisation.
We refer to a "synchronisation" as meaning a complete successful extract, transform and load cycle.
There are three expected methods of using the synchronisation tools for Kanidm:
* Kanidm as a "read only" portal allowing access to it's specific features and integrations. This is less of a migration, and more of a way to "feed" data into Kanidm without relying on it's internal administration features.
* "Big Bang" migration. This is where all the data from another IDM is synchronised in a single execution and applications are swapped to Kanidm. This is rare in larger deployments, but may be used in smaller sites.
* Gradual migration. This is where data is synchronised to Kanidm and then both the existing IDM and Kanidm co-exist. Applications gradually migrate to Kanidm. At some point a "final" synchronisation is performed where Kanidm 'gains authority' over all identity data and the existing IDM is disabled.
- Kanidm as a "read only" portal allowing access to it's specific features and integrations. This is
less of a migration, and more of a way to "feed" data into Kanidm without relying on it's internal
administration features.
- "Big Bang" migration. This is where all the data from another IDM is synchronised in a single
execution and applications are swapped to Kanidm. This is rare in larger deployments, but may be
used in smaller sites.
- Gradual migration. This is where data is synchronised to Kanidm and then both the existing IDM and
Kanidm co-exist. Applications gradually migrate to Kanidm. At some point a "final" synchronisation
is performed where Kanidm 'gains authority' over all identity data and the existing IDM is
disabled.
In these processes there may be a need to "reset" the synchronised data. The diagram below shows the possible work flows which account for the above.
In these processes there may be a need to "reset" the synchronised data. The diagram below shows the
possible work flows which account for the above.
┏━━━━━━━━━━━━━━━━━┓
┃ ┃
@ -45,9 +52,10 @@ In these processes there may be a need to "reset" the synchronsied data. The dia
Kanidm starts in a "detached" state from the external IDM source.
For Kanidm as a "read only" application source the Initial synchronisation is performed followed by periodic
active (partial) synchronisations. At anytime a full initial synchronisation can re-occur to reset the data of the
provider. The provider can be reset and removed by a purge which reset's Kanidm to a detached state.
For Kanidm as a "read only" application source the Initial synchronisation is performed followed by
periodic active (partial) synchronisations. At anytime a full initial synchronisation can re-occur
to reset the data of the provider. The provider can be reset and removed by a purge which reset's
Kanidm to a detached state.
For a gradual migration, this process is the same as the read only application. However when ready
to perform the final cut-over, a final synchronisation is performed, which retains the data of the
@ -61,43 +69,43 @@ step required, where all data is loaded and then immediately granted authority t
### Extract
First a user must be able to retrieve their data from their supplying IDM source. Initially
we will target LDAP and systems with LDAP interfaces, but in the future there is no barrier
to supporting other transports.
First a user must be able to retrieve their data from their supplying IDM source. Initially we will
target LDAP and systems with LDAP interfaces, but in the future there is no barrier to supporting
other transports.
To achieve this, we initially provide synchronisation primitives in the
[ldap3 crate](https://github.com/kanidm/ldap3).
### Transform
This process will be custom developed by the user, or may have a generic driver that we provide.
Our generic tools may provide attribute mapping abilities so that we can allow some limited
This process will be custom developed by the user, or may have a generic driver that we provide. Our
generic tools may provide attribute mapping abilities so that we can allow some limited
customisation.
### Load
Finally to load the data into Kanidm, we will make a SCIM interface available. SCIM is a
"spiritual successor" to LDAP, and aligns with Kani's design. SCIM allows structured data
to be uploaded (unlike LDAP which is simply strings). Because of this SCIM will allow us to
expose more complex types that previously we have not been able to provide.
Finally to load the data into Kanidm, we will make a SCIM interface available. SCIM is a "spiritual
successor" to LDAP, and aligns with Kani's design. SCIM allows structured data to be uploaded
(unlike LDAP which is simply strings). Because of this SCIM will allow us to expose more complex
types that previously we have not been able to provide.
The largest benefit to SCIM's model is its ability to perform "batched" operations, which work
with Kanidm's transactional model to ensure that during load events, content is always valid
and correct.
The largest benefit to SCIM's model is its ability to perform "batched" operations, which work with
Kanidm's transactional model to ensure that during load events, content is always valid and
correct.
## Configuring a Synchronisation Provider in Kanidm
Kanidm has a strict transactional model with full ACID compliance. Attempting to create an external
model that needs to interoperate with Kanidm's model and ensure both are compliant is fraught with
danger. As a result, Kanidm sync providers *should* be stateless, acting only as an ETL bridge.
danger. As a result, Kanidm sync providers _should_ be stateless, acting only as an ETL bridge.
Additionally, syncproviders need permissions to access and write to content in Kanidm, so it also
necessitates Kanidm being aware of the sync relationship.
For this reason a syncprovider is a derivative of a service account, which also allows storage of
the *state* of the synchronisation operation. An example of this is that LDAP syncrepl provides a
cookie defining the "state" of what has been "consumed up to" by the ETL bridge. During the
load phase the modified entries *and* the cookie are persisted. This means that if the operation fails
the _state_ of the synchronisation operation. An example of this is that LDAP syncrepl provides a
cookie defining the "state" of what has been "consumed up to" by the ETL bridge. During the load
phase the modified entries _and_ the cookie are persisted. This means that if the operation fails
the cookie also rolls back allowing a retry of the sync. If it succeeds the next sync knows that
kanidm is in the correct state. Graphically:
@ -119,16 +127,16 @@ kanidm is in the correct state. Graphically:
│ │ │ │◀─────Result───────│ │
└────────────┘ └────────────┘ └────────────┘
At any point the operation *may* fail, so by locking the state with the upload of entries this
At any point the operation _may_ fail, so by locking the state with the upload of entries this
guarantees correct upload has succeeded and persisted. A success really means it!
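
As a toy model of that guarantee (assumed names, nothing like the real kanidm storage API), the
entries and the cookie are only ever written by the same commit, so neither can run ahead of the
other:

```rust
struct SyncStore {
    entries: Vec<String>,    // stand-in for synchronised entries
    cookie: Option<Vec<u8>>, // the consumer state from the source IDM
}

impl SyncStore {
    /// Apply one batch. Any failure returns before state is touched, which
    /// models the transaction rolling back both the entries and the cookie;
    /// the next sync then retries from the previous cookie.
    fn apply_batch(&mut self, batch: Vec<String>, cookie: Vec<u8>) -> Result<(), String> {
        if batch.iter().any(|e| e.is_empty()) {
            return Err("invalid entry - whole batch and cookie rejected".into());
        }
        self.entries.extend(batch);
        self.cookie = Some(cookie); // persisted with the entries, never alone
        Ok(())
    }
}
```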
## SCIM
### Authentication to the endpoint
This will be based on Kanidm's existing authentication infrastructure, allowing service accounts
to use bearer tokens. These tokens will internally bind that changes from the account MUST contain
the associated state identifier (cookie).
This will be based on Kanidm's existing authentication infrastructure, allowing service accounts to
use bearer tokens. These tokens will internally bind that changes from the account MUST contain the
associated state identifier (cookie).
### Batch Operations
@ -153,26 +161,27 @@ source is the authority on the information.
## Internal Batch Update Operation Phases
We have to consider in our batch updates that there are multiple stages of the update. This is because
we need to consider that at any point the lifecycle of a presented entry may change within a single
batch. Because of this, we have to treat the operation differently within kanidm to ensure a consistent outcome.
We have to consider in our batch updates that there are multiple stages of the update. This is
because we need to consider that at any point the lifecycle of a presented entry may change within a
single batch. Because of this, we have to treat the operation differently within kanidm to ensure a
consistent outcome.
Additionally we have to "fail fast". This means that on any conflict the sync will abort and the administrator
must intervene.
Additionally we have to "fail fast". This means that on any conflict the sync will abort and the
administrator must intervene.
To understand why we chose this, we have to look at what happens in a "soft fail" condition.
In this example we have an account named X and a group named Y. The group contains X as a member.
When we submit this for an initial sync, or after the account X is created, if we had a "soft" fail
during the import of the account, we would reject it from being added to Kanidm but would then continue
with the synchronisation. Then the group Y would be imported. Since the member pointing to X would
not be valid, it would be silently removed.
during the import of the account, we would reject it from being added to Kanidm but would then
continue with the synchronisation. Then the group Y would be imported. Since the member pointing to
X would not be valid, it would be silently removed.
At this point we would have group Y imported, but it has no members and the account X would not
have been imported. The administrator may intervene and fix the account X to allow sync to proceed. However
this would not repair the missing group membership. To repair the group membership a change to group Y
would need to be triggered to also sync the group status.
At this point we would have group Y imported, but it has no members and the account X would not have
been imported. The administrator may intervene and fix the account X to allow sync to proceed.
However this would not repair the missing group membership. To repair the group membership a change
to group Y would need to be triggered to also sync the group status.
Since the admin may not be aware of this, it would silently mean the membership is missing.
@ -182,8 +191,8 @@ group Y would sync and the membership would be intact.
### Phase 1 - Validation of Update State
In this phase we need to assert that the batch operation can proceed and is consistent with the
expectations we have of the server's state.
Assert the token provided is valid, and contains the correct access requirements.
@ -199,31 +208,32 @@ Retrieve the sync\_authority value from the sync entry.
### Phase 2 - Entry Location, Creation and Authority
In this phase we are ensuring that all the entries within the operation are within the control of
this sync domain. We also ensure that entries we intend to act upon exist with our authority markers
such that the subsequent operations are all "modifications" rather than mixed create/modify
operations.
For each entry in the sync request, if an entry with that uuid exists, retrieve it.
- If an entry exists in the database, assert that its sync\_parent\_uuid is the same as our
agreements.
- If there is no sync\_parent\_uuid or the sync\_parent\_uuid does not match, reject the
operation.
- If no entry exists in the database, create a "stub" entry with our sync\_parent\_uuid
- Create the entry immediately, and then retrieve it.
### Phase 3 - Entry Assertion
Remove all attributes in the sync that are overlapped with our sync\_authority value.
For all uuids in the entry present set, assert their attributes match what was synced in, and
resolve types that need resolving (name2uuid, externalid2uuid).
Write all changes.
### Phase 4 - Entry Removal
For all uuids in the delete\_uuids set: if their sync\_parent\_uuid matches ours, assert they are
deleted (recycled).
### Phase 5 - Commit
@ -232,14 +242,3 @@ Write the updated "state" from the request to\_state to our current state of the
Write an updated "authority" value to the agreement of what attributes we can change.
Commit the txn.
View file
@ -15,11 +15,14 @@ TODO: a lot of things.
Setting up a dev environment can be a little complex because of the mono-repo.
1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and
allows you to set up virtual python environments more easily.
2. Build the base environment. From within the `pykanidm` directory, run `poetry install`. This'll
set up a virtual environment and install all the required packages (and development-related ones).
3. Start editing!
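Taken together, a first-time setup using only the commands above looks like:

```bash
# Install poetry, then build the pykanidm virtual environment.
python -m pip install poetry
cd pykanidm
poetry install
```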
Most IDEs will be happier if you open the kanidm_rlm_python or pykanidm directories as the base you
are working from, rather than the kanidm repository root, so they can auto-load integrations etc.
## Building the documentation
View file
@ -2,40 +2,48 @@
Setting up a dev environment has some extra complexity due to the mono-repo design.
1. Install poetry: `python -m pip install poetry`. This is what we use to manage the packages, and
allows you to set up virtual python environments more easily.
2. Build the base environment. From within the kanidm_rlm_python directory, run: `poetry install`
3. Install the `kanidm` python library: `poetry run python -m pip install ../pykanidm`
4. Start editing!
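As one sequence, the setup from the steps above is:

```bash
# Install poetry, build the environment, then add the kanidm python library.
python -m pip install poetry
cd kanidm_rlm_python
poetry install
poetry run python -m pip install ../pykanidm
```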
Most IDEs will be happier if you open the `kanidm_rlm_python` or `pykanidm` directories as the base
you are working from, rather than the `kanidm` repository root, so they can auto-load integrations
etc.
## Running a test RADIUS container
From the root directory of the Kanidm repository:
1. Build the container - this'll give you a container image called `kanidm/radius` with the tag
`devel`:
```bash
make build/radiusd
```
2. Once the process has completed, check the container exists in your docker environment:
```bash
➜ docker image ls kanidm/radius
REPOSITORY TAG IMAGE ID CREATED SIZE
kanidm/radius devel 5dabe894134c About a minute ago 622MB
```
_Note:_ If you're just looking to play with a pre-built container, images are also automatically
built based on the development branch and available at `ghcr.io/kanidm/radius:devel`
3. Generate some self-signed certificates by running the script - just hit enter on all the prompts
if you don't want to customise them. This'll put the files in `/tmp/kanidm`:
```bash
./insecure_generate_tls.sh
```
4. Run the container:
```bash
cd kanidm_rlm_python && ./run_radius_container.sh
```
@ -46,7 +54,7 @@ You can pass the following environment variables to `run_radius_container.sh` to
For example:
```bash
IMAGE=ghcr.io/kanidm/radius:devel \
CONFIG_FILE=~/.config/kanidm \
./run_radius_container.sh
```
@ -54,13 +62,14 @@ IMAGE=ghcr.io/kanidm/radius:devel \
## Testing authentication
Authentication can be tested through the client.localhost Network Access Server (NAS) configuration
with:
```bash
docker exec -i -t radiusd radtest \
<username> badpassword \
127.0.0.1 10 testing123
docker exec -i -t radiusd radtest \
<username> <radius show_secret value here> \
127.0.0.1 10 testing123
```
View file
@ -1,33 +1,39 @@
# Rename the domain
There are some cases where you may need to rename the domain. You should have configured this
initially in the setup, however you may have a situation where a business is changing name, merging,
or other needs which may prompt this needing to be changed.
> **WARNING:** This WILL break ALL u2f/webauthn tokens that have been enrolled, which MAY cause
> accounts to be locked out and unrecoverable until further action is taken. DO NOT CHANGE the
> domain name unless REQUIRED and have a plan on how to manage these issues.
> **WARNING:** This operation can take an extensive amount of time as ALL accounts and groups in the
> domain MUST have their Security Principal Names (SPNs) regenerated. This WILL also cause a large
> delay in replication once the system is restarted.
You should make a backup before proceeding with this operation.
When you have created a migration plan and strategy on handling the invalidation of webauthn, you
can then rename the domain.
First, stop the instance.
```bash
docker stop <container name>
```
Second, change `domain` and `origin` in `server.toml`.
Third, trigger the database domain rename process.
```bash
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd domain rename -c /data/server.toml
```
Finally, you can now start your instance again.
```bash
docker start <container name>
```
View file
@ -2,247 +2,259 @@
Guard your Kubernetes ingress with Kanidm authentication and authorization.
## Prerequisites
We recommend you have the following before continuing:
- [Kanidm](../installing_the_server.html)
- [Kubernetes v1.23 or above](https://docs.k0sproject.io/v1.23.6+k0s.2/install/)
- [Nginx Ingress](https://kubernetes.github.io/ingress-nginx/deploy/)
- A fully qualified domain name with an A record pointing to your k8s ingress.
- [CertManager with a Cluster Issuer installed.](https://cert-manager.io/docs/installation/)
## Instructions
1. Create a Kanidm account and group:
1. Create a Kanidm account. Please see the section
[Creating Accounts](../accounts_and_groups.md).
1. Give the account a password. Please see the section
[Resetting Account Credentials](../accounts_and_groups.md).
1. Make the account a person. Please see the section
[People Accounts](../accounts_and_groups.md).
1. Create a Kanidm group. Please see the section [Creating Accounts](../accounts_and_groups.md).
1. Add the account you created to the group you created. Please see the section
[Creating Accounts](../accounts_and_groups.md).
2. Create a Kanidm OAuth2 resource:
1. Create the OAuth2 resource for your domain. Please see the section
[Create the Kanidm Configuration](../integrations/oauth2.md).
2. Add a scope mapping from the resource you created to the group you created with the openid,
profile, and email scopes. Please see the section
[Create the Kanidm Configuration](../integrations/oauth2.md).
3. Create a `Cookie Secret` for the placeholder `<COOKIE_SECRET>` in step 4:
```shell
docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))).decode("utf-8"));'
```
4. Create a file called `k8s.kanidm-nginx-auth-example.yaml` with the block below. Replace every
`<string>` (drop the `<>`) with appropriate values:
1. `<FQDN>`: The fully qualified domain name with an A record pointing to your k8s ingress.
2. `<KANIDM_FQDN>`: The fully qualified domain name of your Kanidm deployment.
3. `<COOKIE_SECRET>`: The output from step 3.
4. `<OAUTH2_RS_NAME>`: Please see the output from step 2.1 or [get](../integrations/oauth2.md)
the OAuth2 resource you created from that step.
5. `<OAUTH2_RS_BASIC_SECRET>`: Please see the output from step 2.1 or
[get](../integrations/oauth2.md) the OAuth2 resource you created from that step.
This will deploy the following to your cluster:
- [modem7/docker-starwars](https://github.com/modem7/docker-starwars) - An example web site.
- [OAuth2 Proxy](https://oauth2-proxy.github.io/oauth2-proxy/) - An OAuth2 proxy is used as an
OAuth2 client with NGINX
[Authentication Based on Subrequest Result](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/).
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: kanidm-example
labels:
pod-security.kubernetes.io/enforce: restricted
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: kanidm-example
name: website
labels:
app: website
spec:
revisionHistoryLimit: 1
replicas: 1
selector:
matchLabels:
app: website
template:
metadata:
labels:
app: website
spec:
containers:
- name: website
image: modem7/docker-starwars
imagePullPolicy: Always
ports:
- containerPort: 8080
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
---
apiVersion: v1
kind: Service
metadata:
namespace: kanidm-example
name: website
spec:
selector:
app: website
ports:
- protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: lets-encrypt-cluster-issuer
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
name: website
namespace: kanidm-example
spec:
ingressClassName: nginx
tls:
- hosts:
- <FQDN>
secretName: <FQDN>-ingress-tls # replace . with - in the hostname
rules:
- host: <FQDN>
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: website
port:
number: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: kanidm-example
spec:
replicas: 1
selector:
matchLabels:
k8s-app: oauth2-proxy
template:
metadata:
labels:
k8s-app: oauth2-proxy
spec:
containers:
- args:
- --provider=oidc
- --email-domain=*
- --upstream=file:///dev/null
- --http-address=0.0.0.0:4182
- --oidc-issuer-url=https://<KANIDM_FQDN>/oauth2/openid/<OAUTH2_RS_NAME>
- --code-challenge-method=S256
env:
- name: OAUTH2_PROXY_CLIENT_ID
value: <OAUTH2_RS_NAME>
- name: OAUTH2_PROXY_CLIENT_SECRET
value: <OAUTH2_RS_BASIC_SECRET>
- name: OAUTH2_PROXY_COOKIE_SECRET
value: <COOKIE_SECRET>
image: quay.io/oauth2-proxy/oauth2-proxy:latest
imagePullPolicy: Always
name: oauth2-proxy
ports:
- containerPort: 4182
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: kanidm-example
spec:
ports:
- name: http
port: 4182
protocol: TCP
targetPort: 4182
selector:
k8s-app: oauth2-proxy
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: oauth2-proxy
namespace: kanidm-example
spec:
ingressClassName: nginx
rules:
- host: <FQDN>
http:
paths:
- path: /oauth2
pathType: Prefix
backend:
service:
name: oauth2-proxy
port:
number: 4182
tls:
- hosts:
- <FQDN>
secretName: <FQDN>-ingress-tls # replace . with - in the hostname
```
5. Apply the configuration by running the following command:
```bash
kubectl apply -f k8s.kanidm-nginx-auth-example.yaml
```
6. Check your deployment succeeded by running the following commands:
```bash
kubectl -n kanidm-example get all
kubectl -n kanidm-example get ingress
kubectl -n kanidm-example get Certificate
```
You may use kubectl's describe and log for troubleshooting. If there are ingress errors see the
Ingress NGINX documentation's
[troubleshooting page](https://kubernetes.github.io/ingress-nginx/troubleshooting/). If there are
certificate errors see the CertManager documentation's
[troubleshooting page](https://cert-manager.io/docs/faq/troubleshooting/).
Once it has finished deploying, you will be able to access it at `https://<FQDN>` which will
prompt you for authentication.
## Cleaning Up
1. Remove the resources created for this example from k8s:
```bash
kubectl delete namespace kanidm-example
```
2. Remove the objects created for this example from Kanidm:
1. Delete the account created in section Instructions step 1.
2. Delete the group created in section Instructions step 1.
3. Delete the OAuth2 resource created in section Instructions step 2.
## References
1. [NGINX Ingress Controller: External OAUTH Authentication](https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/)
View file
@ -1,11 +1,11 @@
# Frequently Asked Questions
... or ones we think people _might_ ask.
## Why disallow HTTP (without TLS) between my load balancer and Kanidm?
Because Kanidm is one of the keys to a secure network, and insecure connections to it are not best
practice.
Please refer to [Why TLS?](why_tls.md) for a longer explanation.
@ -15,11 +15,13 @@ It's [a rust thing](https://rustacean.net).
## Will you implement -insert protocol here-
Probably, on an infinite time-scale! As long as it's not Kerberos. Or involves SSL or STARTTLS.
Please log an issue and start the discussion!
## Why do the crabs have knives?
Don't [ask](https://www.youtube.com/watch?v=0QaAKi0NFkA). They just
[do](https://www.youtube.com/shorts/WizH5ae9ozw).
## Why won't you take this FAQ thing seriously?
View file
@ -1,33 +1,46 @@
# Glossary
This is a glossary of terms used throughout this book. While we make every effort to explain terms
and acronyms when they are used, this may be a useful reference if something feels unknown to you.
## Domain Names
- domain - This is the domain you "own". It is the highest level entity. An example would be
`example.com` (since you do not own `.com`).
- subdomain - A subdomain is a domain name space under the domain. Subdomains of `example.com` are
`a.example.com` and `b.example.com`. Each subdomain can have further subdomains.
- domain name - This is any named entity within your domain or its subdomains. This is the umbrella
term, referring to all entities in the domain. `example.com`, `a.example.com`, `host.example.com`
are all valid domain names with the domain `example.com`.
- origin - An origin defines a URL with a protocol scheme, optional port number and domain name
components. An example is `https://host.example.com`.
- effective domain - This is the extracted domain name from an origin excluding port and scheme.
## Accounts
- trust - A trust is when two Kanidm domains have a relationship to each other where accounts can be
used between the domains. The domains retain their administration boundaries, but allow cross
authentication.
- replication - This is the process where two or more Kanidm servers in a domain can synchronise
their database content.
- UAT - User Authentication Token. This is a token issued by Kanidm to an account after it has
authenticated.
- SPN - Security Principal Name. This is a name of an account comprising its name and domain name.
This allows distinction between accounts with identical names over a trust boundary.
## Internals
- entity, object, entry - Any item in the database. Generally these terms are interchangeable, but
internally they are referred to as Entry.
- account - An entry that may authenticate to the server, generally allowing extended permissions
and actions to be undertaken.
### Access Control
- privilege - An expression of what actions an account may perform if granted
- target - The entries that will be affected by a privilege
- receiver - The entries that will be able to use a privilege
- acp - an Access Control Profile which defines a set of privileges that are granted to receivers to
affect target entries.
- role - A term used to express a group that is the receiver of an access control profile allowing
its members to affect the target entries.
View file
@ -1,50 +1,56 @@
# Installing Client Tools
> **NOTE** As this project is in a rapid development phase, running different release versions will
> likely present incompatibilities. Ensure you're running matching release versions of client and
> server binaries. If you have any issues, check that you are running the latest software.
## From packages
Kanidm is currently packaged for the following systems:
- OpenSUSE Tumbleweed
- OpenSUSE Leap 15.3/15.4
- MacOS
- Arch Linux
- NixOS
- Fedora 36
- CentOS Stream 9
The `kanidm` client has been built and tested from Windows, but is not (yet) packaged routinely.
### OpenSUSE Tumbleweed
Kanidm has been part of OpenSUSE Tumbleweed since October 2020. You can install the clients with:
```bash
zypper ref
zypper in kanidm-clients
```
### OpenSUSE Leap 15.3/15.4
Using zypper you can add the Kanidm leap repository with:
```bash
zypper ar -f obs://network:idm network_idm
```
Then you need to refresh your metadata and install the clients.
```bash
zypper ref
zypper in kanidm-clients
```
### MacOS - Brew
[Homebrew](https://brew.sh/) allows addition of third party repositories for installing tools. On
MacOS you can use this to install the Kanidm tools.
```bash
brew tap kanidm/kanidm
brew install kanidm
```
### Arch Linux
@ -56,60 +62,69 @@ MacOS you can use this to install the Kanidm tools.
### Fedora / Centos Stream
{{#template templates/kani-warning.md imagepath=images title=Take Note! text=Kanidm frequently uses
new Rust versions and features, however Fedora and Centos frequently are behind in Rust releases. As
a result, they may not always have the latest Kanidm versions available. }}
Fedora has limited support through the development repository. You need to add the repository
metadata into the correct directory:
```bash
# Fedora
wget https://download.opensuse.org/repositories/network:/idm/Fedora_36/network:idm.repo
# Centos Stream 9
wget https://download.opensuse.org/repositories/network:/idm/CentOS_9_Stream/network:idm.repo
```
You can then install with:
```bash
dnf install kanidm-clients
```
## Cargo
The tools are available as a cargo download if you have a rust tool chain available. To install rust
you should follow the documentation for [rustup](https://rustup.rs/). These will be installed into
your home directory. To update these, re-run the install command with the new version.
```bash
cargo install --version 1.1.0-alpha.10 kanidm_tools
```
## Tools Container
In some cases if your distribution does not have native kanidm-client support, and you can't access
cargo for the install for some reason, you can use the cli tools from a docker container instead.
```bash
docker pull kanidm/tools:latest
docker run --rm -i -t \
-v /etc/kanidm/config:/etc/kanidm/config:ro \
-v ~/.config/kanidm:/home/kanidm/.config/kanidm:ro \
-v ~/.cache/kanidm_tokens:/home/kanidm/.cache/kanidm_tokens \
kanidm/tools:latest \
/sbin/kanidm --help
```
If you have a ca.pem you may need to bind mount this in as required.
> **TIP** You can alias the docker run command to make the tools easier to access such as:
```bash
alias kanidm="docker run ..."
```
## Checking that the tools work
Now you can check your instance is working. You may need to provide a CA certificate for
verification with the -C parameter:
```bash
kanidm login --name anonymous
kanidm self whoami -H https://localhost:8443 --name anonymous
kanidm self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous
```
Now you can take some time to look at what commands are available - please
[ask for help at any time](https://github.com/kanidm/kanidm#getting-in-contact--questions).
View file
@ -1,6 +1,3 @@
# Installing the Server
This chapter will describe how to plan, configure, deploy and update your Kanidm instances.
View file
@ -1,66 +1,58 @@
# LDAP
While many applications can support external authentication and identity services through OAuth2,
not all services can. Lightweight Directory Access Protocol (LDAP) has been the "lingua franca" of
authentication for many years, with almost every application in the world being able to search and
bind to LDAP. As many organisations still rely on LDAP, Kanidm can host a read-only LDAP interface
for these legacy applications.
{{#template\
../templates/kani-warning.md imagepath=../images title=Warning! text=The LDAP server in Kanidm is
not a fully RFC-compliant LDAP server. This is intentional, as Kanidm wants to cover the common use
cases - simple bind and search. }}
## What is LDAP
LDAP is a protocol to read data from a directory of information. It is not a server, but a way to
communicate to a server. There are many famous LDAP implementations such as Active Directory, 389
Directory Server, DSEE, FreeIPA, and many others. Because it is a standard, applications can use an
LDAP client library to authenticate users to LDAP, given "one account" for many applications - an
IDM just like Kanidm!
## Data Mapping
Kanidm cannot be mapped 100% to LDAP's objects. This is because LDAP types are simple key-values on
objects which are all UTF8 strings (or subsets thereof) based on validation (matching) rules. Kanidm
internally implements complex data types such as tagging on SSH keys, or multi-value credentials.
These can not be represented in LDAP.
Many of the structures in Kanidm do not correlate closely to LDAP. For example Kanidm only has a GID
number, where LDAP's schemas define both a UID number and a GID number.
Entries in the database also have a specific name in LDAP, related to their path in the directory
tree. Kanidm is a flat model, so we have to emulate some tree-like elements, and ignore others.
For this reason, when you search the LDAP interface, Kanidm will make some mapping decisions.
- The Kanidm domain name is used to generate the DN of the suffix.
- The domain\_info object becomes the suffix root.
- All other entries are direct subordinates of the domain\_info for DN purposes.
- Distinguished Names (DNs) are generated from the spn, name, or uuid attribute.
- Bind DNs can be remapped and rewritten, and may not even be a DN during bind.
- The '\*' and '+' operators can not be used in conjunction with attribute lists in searches.
These decisions were made to make the path as simple and effective as possible, relying more on the
Kanidm query and filter system than attempting to generate a tree-like representation of data. As
almost all clients can use filters for entry selection we don't believe this is a limitation for the
consuming applications.
## Security
### TLS
StartTLS is not supported due to security risks. LDAPS is the only secure method of communicating to
any LDAP server. Kanidm, when configured with certificates, will use them for LDAPS (and will not
listen on a plaintext LDAP port).
### Writes
@ -69,60 +61,67 @@ contains. As a result, writes are rejected for all users via the LDAP interface.
### Access Controls
LDAP only supports password authentication. As LDAP is used heavily in POSIX environments the LDAP
bind for any DN will use its configured posix password.
As the POSIX password is not equivalent in strength to the primary credentials of Kanidm (which may
be multi-factor authentication, MFA), the LDAP bind does not grant rights to elevated read
permissions. All binds have the permissions of "Anonymous" even if the anonymous account is locked.
The exception is service accounts which can use api-tokens during an LDAP bind for elevated read
permissions.
## Server Configuration
To configure Kanidm to provide LDAP, add the argument to the `server.toml` configuration:
```toml
ldapbindaddress = "127.0.0.1:3636"
```
You should configure TLS certificates and keys as usual - LDAP will re-use the Web server TLS
material.
## Showing LDAP Entries and Attribute Maps
By default Kanidm is limited in what attributes are generated or remapped into LDAP entries.
However, the server internally contains a map of extended attribute mappings for application
specific requests that must be satisfied.
An example is that some applications expect and require a 'CN' value, even though Kanidm does not
provide it. If the application is unable to be configured to accept "name" it may be necessary to
use Kanidm's mapping feature. Currently these are compiled into the server, so you may need to open
an issue with your requirements for attribute maps.
To show what attribute maps exist for an entry you can use the attribute search term '+'.
```bash
# To show Kanidm attributes
ldapsearch ... -x '(name=admin)' '*'
# To show all attribute maps
ldapsearch ... -x '(name=admin)' '+'
```
Attributes that are in the map can be requested explicitly, and this can be combined with requesting
Kanidm native attributes.
```bash
ldapsearch ... -x '(name=admin)' cn objectClass displayname memberof
```
## Service Accounts
If you have
[issued api tokens for a service account](../accounts_and_groups.html#using-api-tokens-with-service-accounts)
they can be used to gain extended read permissions for those service accounts.
Api tokens can also be used to gain extended search permissions with LDAP. To do this you can bind
with a dn of `dn=token` and provide the api token in the password.
> **NOTE** The `dn=token` keyword is guaranteed to not be used by any other entry, which is why it
> was chosen as the keyword to initiate api token binds.
```bash
ldapwhoami -H ldaps://URL -x -D "dn=token" -w "TOKEN"
ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..."
# u: demo_service@idm.example.com
```
@ -130,62 +129,72 @@ ldapwhoami -H ldaps://idm.example.com -x -D "dn=token" -w "..."
## Example
Given a default install with domain "example.com" the configured LDAP DN will be
"dc=example,dc=com".
```toml
# from server.toml
ldapbindaddress = "[::]:3636"
```
This can be queried with:
```bash
LDAPTLS_CACERT=ca.pem ldapsearch \
-H ldaps://127.0.0.1:3636 \
-b 'dc=example,dc=com' \
-x '(name=test1)'
# test1@example.com, example.com
dn: spn=test1@example.com,dc=example,dc=com
objectclass: account
objectclass: memberof
objectclass: object
objectclass: person
displayname: Test User
memberof: spn=group240@example.com,dc=example,dc=com
name: test1
spn: test1@example.com
entryuuid: 22a65b6c-80c8-4e1a-9b76-3f3afdff8400
```
It is recommended that client applications filter accounts that can login with `(class=account)` and
groups with `(class=group)`. If possible, group membership is defined in RFC2307bis or Active
Directory style. This means groups are determined from the "memberof" attribute which contains a DN
to a group.
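For instance, reusing the elided `ldapsearch` form from the example above, a client could enumerate
entries with these filters:

```bash
# List entries that can log in.
ldapsearch ... -x '(class=account)'
# List groups, whose DNs appear in each account's memberof attribute.
ldapsearch ... -x '(class=group)'
```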
LDAP binds can use any unique identifier of the account. The following are all valid bind DNs for
the object listed above (if it was a POSIX account, that is).
```bash
ldapwhoami ... -x -D 'name=test1'
ldapwhoami ... -x -D 'spn=test1@example.com'
ldapwhoami ... -x -D 'test1@example.com'
ldapwhoami ... -x -D 'test1'
ldapwhoami ... -x -D '22a65b6c-80c8-4e1a-9b76-3f3afdff8400'
ldapwhoami ... -x -D 'spn=test1@example.com,dc=example,dc=com'
ldapwhoami ... -x -D 'name=test1,dc=example,dc=com'
```
Most LDAP clients are very picky about TLS, and can be very hard to debug or display errors. For
example, these commands:
```bash
ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3389 -b 'dc=example,dc=com' -x '(name=test1)'
```
All give the same error:
```
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
```
This is despite the fact that:
- The first command is a certificate validation error.
- The second is a missing LDAPS on a TLS port.
- The third is an incorrect port.
To diagnose errors like this, you may need to add "-d 1" to your LDAP commands or client.
View file
@ -1,107 +1,106 @@
# OAuth2
OAuth is a web authorisation protocol that allows "single sign on". It's key to note OAuth only
provides authorisation, as the protocol in its default forms do not provide identity or
authentication information. All that OAuth2 provides is information that an entity is authorised for
the requested resources.
OAuth can tie into extensions allowing an identity provider to reveal information about authorised
sessions. This extends OAuth from an authorisation only system to a system capable of identity and
authorisation. Two primary methods of this exist today: RFC7662 token introspection, and OpenID
connect.
## How Does OAuth2 Work?
A user wishes to access a service (resource, resource server). The resource server does not have an
active session for the client, so it redirects to the authorisation server (Kanidm) to determine if
the client should be allowed to proceed, and has the appropriate permissions (scopes) for the
requested resources.
The authorisation server checks the current session of the user and may present a login flow if
required. Given the identity of the user known to the authorisation server, and the requested scopes,
the authorisation server makes a decision if it allows the authorisation to proceed. The user is
then prompted to consent to the authorisation from the authorisation server to the resource server
as some identity information may be revealed by granting this consent.
If successful and consent given, the user is redirected back to the resource server with an
authorisation code. The resource server then contacts the authorisation server directly with this
code and exchanges it for a valid token that may be provided to the user's browser.
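As a sketch of that back-channel exchange (with placeholder client credentials, code, and redirect
URI), the resource server's request to the token url listed later in this chapter could look like:

```bash
# Exchange the authorisation code for a token. HTTP basic auth carries the
# client credentials; the PKCE code_verifier proves this client began the
# flow. All values here are placeholders.
curl -s -u "<client id>:<client secret>" \
  -d "grant_type=authorization_code" \
  -d "code=<authorisation code>" \
  -d "redirect_uri=<the resource server callback url>" \
  -d "code_verifier=<pkce verifier>" \
  "https://idm.example.com/oauth2/token"
```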
The resource server may then optionally contact the token introspection endpoint of the
authorisation server about the provided OAuth token, which yields extra metadata about the identity
that holds the token from the authorisation. This metadata may include identity information, but
also may include extended metadata, sometimes referred to as "claims". Claims are information bound
to a token based on properties of the session that may allow the resource server to make extended
authorisation decisions without the need to contact the authorisation server to arbitrate.
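A minimal sketch of such an introspection call, again with placeholder credentials and token,
against the RFC 7662 introspection url listed later in this chapter:

```bash
# Ask the authorisation server whether a token is active and what metadata
# (claims) it carries. Values are placeholders.
curl -s -u "<client id>:<client secret>" \
  -d "token=<access token>" \
  "https://idm.example.com/oauth2/token/introspect"
```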
It's important to note that OAuth2 at its core is an authorisation system which has layered
identity-providing elements on top.
### Resource Server
This is the server that a user wants to access. Common examples could be Nextcloud, a wiki, or
something else. This is the system that "needs protecting" and wants to delegate authorisation
decisions to Kanidm.
It's important for you to know _how_ your resource server supports OAuth2. For example, does it
support RFC 7662 token introspection or does it rely on OpenID connect for identity information?
Does the resource server support PKCE S256?
In general Kanidm requires that your resource server supports:
- HTTP basic authentication to the authorisation server
- PKCE S256 code verification to prevent certain token attack classes
- OIDC only - JWT ES256 for token signatures
Kanidm will expose its OAuth2 APIs at the following URLs:
- user auth url: `https://idm.example.com/ui/oauth2`
- api auth url: `https://idm.example.com/oauth2/authorise`
- token url: `https://idm.example.com/oauth2/token`
- rfc7662 token introspection url: `https://idm.example.com/oauth2/token/introspect`
- rfc7009 token revoke url: `https://idm.example.com/oauth2/token/revoke`
OpenID Connect discovery - you need to substitute your OAuth2 client id in the following urls:
- OpenID connect issuer uri: `https://idm.example.com/oauth2/openid/:client\_id:/`
- OpenID connect discovery:
`https://idm.example.com/oauth2/openid/:client\_id:/.well-known/openid-configuration`
For manual OpenID configuration:
- OpenID connect userinfo: `https://idm.example.com/oauth2/openid/:client\_id:/userinfo`
- token signing public key: `https://idm.example.com/oauth2/openid/:client\_id:/public\_key.jwk`
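If your resource server consumes the discovery document, you can inspect what Kanidm advertises for
a client with a simple fetch (substituting your client id):

```bash
# Fetch the OpenID Connect discovery document for one OAuth2 client.
curl -s "https://idm.example.com/oauth2/openid/<client_id>/.well-known/openid-configuration"
```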
### Scope Relationships
For an authorisation to proceed, the resource server will request a list of scopes, which are unique
to that resource server. For example, when a user wishes to login to the admin panel of the resource
server, it may request the "admin" scope from Kanidm for authorisation. But when a user wants to
login, it may only request "access" as a scope from Kanidm.
As each resource server may have its own scopes and understanding of these, Kanidm isolates scopes
to each resource server connected to Kanidm. Kanidm has two methods of granting scopes to accounts
(users).
The first is scope mappings. These provide a set of scopes if a user is a member of a specific group
within Kanidm. This allows you to create a relationship between the scopes of a resource server, and
the groups/roles in Kanidm which can be specific to that resource server.
For an authorisation to proceed, all scopes requested by the resource server must be available in
the final scope set that is granted to the account.
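As an illustrative sketch, consider a resource server named `nextcloud` with two hypothetical
groups (the group and scope names here are examples only):

```bash
# Members of nextcloud_users are granted the "access" scope.
kanidm system oauth2 update_scope_map nextcloud nextcloud_users access
# Members of nextcloud_admins are additionally granted the "admin" scope.
kanidm system oauth2 update_scope_map nextcloud nextcloud_admins admin
```

A user who is only a member of `nextcloud_users` can satisfy an authorisation that requests
"access", but an authorisation that also requests "admin" will be denied.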
The second is supplemental scope mappings. These function the same as scope maps where membership of
a group provides a set of scopes to the account. However these scopes are NOT consulted during
authorisation decisions made by Kanidm. These scopes exist to allow optional properties to be
provided (such as personal information about a subset of accounts to be revealed) or so that the
resource server may make its own authorisation decisions based on the provided scopes.
This use of scopes is the primary means to control who can access what resources. These access
decisions can take place either on Kanidm or the resource server.
For example, if you have a resource server that always requests a scope of "read", then users with
scope maps that supply the read scope will be allowed by Kanidm to proceed to the resource server.
Kanidm can then provide the supplementary scopes into provided tokens, so that the resource server
can use these to choose if it wishes to display UI elements. If a user has a supplemental "admin"
scope, then that user may be able to access an administration panel of the resource server. In this
@@ -112,201 +111,232 @@ the resource server.
### Create the Kanidm Configuration
After you have understood your resource server requirements you first need to configure Kanidm. By
default members of "system\_admins" or "idm\_hp\_oauth2\_manage\_priv" are able to create or manage
OAuth2 resource server integrations.
You can create a new resource server with:
```bash
kanidm system oauth2 create <name> <displayname> <origin>
kanidm system oauth2 create nextcloud "Nextcloud Production" https://nextcloud.example.com
```
You can create a scope map with:
```bash
kanidm system oauth2 update_scope_map <name> <kanidm_group_name> [scopes]...
kanidm system oauth2 update_scope_map nextcloud nextcloud_admins admin
```
{{#template ../templates/kani-warning.md imagepath=../images title=WARNING text=If you are creating
an OpenID Connect (OIDC) resource server you
<b>MUST</b> provide a scope map named <code>openid</code>. Without this, OpenID clients <b>WILL NOT
WORK</b> }}
> **HINT** OpenID connect allows a number of scopes that affect the content of the resulting
> authorisation token. If one of the following scopes are requested by the OpenID client, then the
> associated claims may be added to the authorisation token. It is not guaranteed that all of the
> associated claims will be added.
>
> - profile - (name, family\_name, given\_name, middle\_name, nickname, preferred\_username,
> profile, picture, website, gender, birthdate, zoneinfo, locale, and updated\_at)
> - email - (email, email\_verified)
> - address - (address)
> - phone - (phone\_number, phone\_number\_verified)
You can create a supplemental scope map with:
```bash
kanidm system oauth2 update_sup_scope_map <name> <kanidm_group_name> [scopes]...
kanidm system oauth2 update_sup_scope_map nextcloud nextcloud_admins admin
```
Once created, you can view the details of the resource server.
```bash
kanidm system oauth2 get nextcloud
---
class: oauth2_resource_server
class: oauth2_resource_server_basic
class: object
displayname: Nextcloud Production
oauth2_rs_basic_secret: <secret>
oauth2_rs_name: nextcloud
oauth2_rs_origin: https://nextcloud.example.com
oauth2_rs_token_key: hidden
```
### Configure the Resource Server
On your resource server, you should configure the client ID as the "oauth2\_rs\_name" from Kanidm,
and the password to be the value shown in "oauth2\_rs\_basic\_secret". Ensure that the code
challenge/verification method is set to S256.
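As a rough sketch, the equivalent settings on a hypothetical resource server could look like the
following; the key names vary between products, so consult your resource server's documentation:

```toml
client_id = "nextcloud"                    # the oauth2_rs_name from Kanidm
client_secret = "<oauth2_rs_basic_secret>" # the secret shown by kanidm system oauth2 get
discovery_url = "https://idm.example.com/oauth2/openid/nextcloud/.well-known/openid-configuration"
code_challenge_method = "S256"             # PKCE verification method
```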
You should now be able to test authorisation.
## Resetting Resource Server Security Material
In the case of disclosure of the basic secret, or some other security event where you may wish to
invalidate a resource server's active sessions/tokens, you can reset the secret material of the
server with:
```bash
kanidm system oauth2 reset_secrets
```
Each resource server has unique signing keys and access secrets, so this is limited to each resource
server.
## Extended Options for Legacy Clients
Not all resource servers support modern standards like PKCE or ECDSA. In these situations it may be
necessary to disable these on a per-resource server basis. Disabling these on one resource server
will not affect others.
{{#template ../templates/kani-warning.md imagepath=../images title=WARNING text=Changing these
settings MAY have serious consequences on the security of your resource server. You should avoid
changing these if at all possible! }}
To disable PKCE for a resource server:
```bash
kanidm system oauth2 warning_insecure_client_disable_pkce <resource server name>
```
To enable legacy cryptography (RSA PKCS1-5 SHA256):
```bash
kanidm system oauth2 warning_enable_legacy_crypto <resource server name>
```
## Example Integrations
### Apache mod\_auth\_openidc
Add the following to a `mod_auth_openidc.conf`. It should be included in a `mods_enabled` folder or
with an appropriate include.
```
OIDCRedirectURI /protected/redirect_uri
OIDCCryptoPassphrase <random password here>
OIDCProviderMetadataURL https://kanidm.example.com/oauth2/openid/<resource server name>/.well-known/openid-configuration
OIDCScope "openid"
OIDCUserInfoTokenMethod authz_header
OIDCClientID <resource server name>
OIDCClientSecret <resource server password>
OIDCPKCEMethod S256
OIDCCookieSameSite On
# Set the `REMOTE_USER` field to the `preferred_username` instead of the UUID.
# Remember that the username can change, but this can help with systems like Nagios which use this as a display name.
# OIDCRemoteUserClaim preferred_username
```
Other scopes can be added as required to the `OIDCScope` line, eg:
`OIDCScope "openid scope2 scope3"`
In the virtual host, to protect a location:
```apache
<Location />
    AuthType openid-connect
    Require valid-user
</Location>
```
### Nextcloud
Install the module [from the Nextcloud marketplace](https://apps.nextcloud.com/apps/user_oidc) - it
can also be found in the Apps section of your deployment as "OpenID Connect user backend".
In Nextcloud's config.php you need to allow connection to remote servers:
```php
'allow_local_remote_servers' => true,
```
You may optionally choose to add:
```php
'allow_user_to_change_display_name' => false,
'lost_password_link' => 'disabled',
```
If you forget this, you may see the following error in logs:
```
Host 172.24.11.129 was not connected to because it violates local access rules
```
This module does not support PKCE or ES256. You will need to run:
```bash
kanidm system oauth2 warning_insecure_client_disable_pkce <resource server name>
kanidm system oauth2 warning_enable_legacy_crypto <resource server name>
```
In the settings menu, configure the discovery URL and client ID and secret.
You can choose to disable other login methods with:
```bash
php occ config:app:set --value=0 user_oidc allow_multiple_user_backends
```
You can log in directly by appending `?direct=1` to your login page. You can re-enable other
backends by setting the value to `1`.
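For example, with a hypothetical deployment at `nextcloud.example.com`, the direct login URL would
be:

```
https://nextcloud.example.com/login?direct=1
```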
### Velociraptor
Velociraptor supports OIDC. To configure it, select "Authenticate with SSO" then "OIDC" during the
interactive configuration generator. Alternately, you can set the following keys in
`server.config.yaml`:
```
GUI:
  authenticator:
    type: OIDC
    oidc_issuer: https://idm.example.com/oauth2/openid/:client_id:/
    oauth_client_id: <resource server name>
    oauth_client_secret: <resource server secret>
```
Velociraptor does not support PKCE. You will need to run the following:
```bash
kanidm system oauth2 warning_insecure_client_disable_pkce <resource server name>
```
Initial users are mapped via their email in the Velociraptor `server.config.yaml` config:
```
GUI:
  initial_users:
    - name: <email address>
```
Accounts require the `openid` and `email` scopes to be authenticated. It is recommended you limit
these to a group with a scope map due to Velociraptor's high impact.
```bash
# kanidm group create velociraptor_users
# kanidm group add_members velociraptor_users ...
kanidm system oauth2 create_scope_map <resource server name> velociraptor_users openid email
```
### Vouch Proxy
> **WARNING** Vouch proxy requires a unique identifier but does not use the proper scope, "sub". It
> uses the fields "username" or "email" as primary identifiers instead. As a result, this can cause
> user or deployment issues, at worst security bypasses. You should avoid Vouch Proxy if possible
> due to these issues.
>
> - <https://github.com/vouch/vouch-proxy/issues/309>
> - <https://github.com/vouch/vouch-proxy/issues/310>
Note: **You need to run at least version 0.37.0**
Vouch Proxy supports multiple OAuth and OIDC login providers. To configure it you need to pass:
```yaml
oauth:
@@ -324,4 +354,6 @@ oauth:
The `email` scope needs to be passed and thus the mail attribute needs to exist on the account:
```bash
kanidm person update <ID> --mail "YYYY@somedomain.com" --name idm_admin
```
View file
@@ -1,322 +1,357 @@
# PAM and nsswitch
[PAM](http://linux-pam.org) and [nsswitch](https://en.wikipedia.org/wiki/Name_Service_Switch) are
the core mechanisms used by Linux and BSD clients to resolve identities from an IDM service like
Kanidm into accounts that can be used on the machine for various interactive tasks.
## The UNIX Daemon
Kanidm provides a UNIX daemon that runs on any client that wants to use PAM and nsswitch
integration. The daemon can cache the accounts for users who have unreliable networks, or who leave
the site where Kanidm is hosted. The daemon is also able to cache missing-entry responses to reduce
network traffic and main server load.
Additionally, running the daemon means that the PAM and nsswitch integration libraries can be small,
helping to reduce the attack surface of the machine. Similarly, a tasks daemon is available that can
create home directories on first login and supports several features related to aliases and links to
these home directories.
We recommend you install the client daemon from your system package manager:
```bash
# OpenSUSE
zypper in kanidm-unixd-clients
# Fedora
dnf install kanidm-unixd-clients
```
You can check the daemon is running on your Linux system with:
```bash
systemctl status kanidm-unixd
```
You can check the privileged tasks daemon is running with:
```bash
systemctl status kanidm-unixd-tasks
```
> **NOTE** The `kanidm_unixd_tasks` daemon is not required for PAM and nsswitch functionality. If
> disabled, your system will function as usual. It is, however, recommended due to the features it
> provides supporting Kanidm's capabilities.
Both unixd daemons use the connection configuration from /etc/kanidm/config. This is covered in
[client_tools](./client_tools.md#kanidm-configuration).
You can also configure some unixd-specific options with the file /etc/kanidm/unixd:
```toml
pam_allowed_login_groups = ["posix_group"]
default_shell = "/bin/sh"
home_prefix = "/home/"
home_attr = "uuid"
home_alias = "spn"
use_etc_skel = false
uid_attr_map = "spn"
gid_attr_map = "spn"
```
`pam_allowed_login_groups` defines a set of POSIX groups; members of any of these groups will be
allowed to log in via PAM. All POSIX users and groups can be resolved by nss regardless of PAM
login status. This may be a group name, spn, or uuid.
`default_shell` is the default shell for users. Defaults to `/bin/sh`.
`home_prefix` is the prepended path to where home directories are stored. Must end with a trailing
`/`. Defaults to `/home/`.
`home_attr` is the default token attribute used for the home directory path. Valid choices are
`uuid`, `name`, `spn`. Defaults to `uuid`.
`home_alias` is the default token attribute used for generating symlinks pointing to the user's home
directory. If set, this will become the value of the home path to nss calls. It is recommended you
choose a "human friendly" attribute here. Valid choices are `none`, `uuid`, `name`, `spn`. Defaults
to `spn`.
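For example, a minimal sketch with the defaults above (`home_attr = "uuid"`, `home_alias = "spn"`)
and an entirely hypothetical UUID and spn, the resulting layout would be:

```bash
# Stable home directory named by the account UUID (hypothetical value):
/home/9f1c4f05-0c4c-4f0a-9c6e-2d3a5e6f7a8b/
# Human-friendly spn symlink that nss presents as the home path:
/home/demo_user@idm.example.com -> /home/9f1c4f05-0c4c-4f0a-9c6e-2d3a5e6f7a8b
```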
> **NOTICE:** All users in Kanidm can change their name (and their spn) at any time. If you change
> `home_attr` from `uuid` you _must_ have a plan on how to manage these directory renames in your
> system. We recommend that you have a stable ID (like the UUID), and symlinks from the name to the
> UUID folder. Automatic support is provided for this via the unixd tasks daemon, as documented
> here.
`use_etc_skel` controls if home directories should be prepopulated with the contents of `/etc/skel`
when first created. Defaults to false.
`uid_attr_map` chooses which attribute is used for domain local users in presentation. Defaults to
`spn`. Users from a trust will always use spn.
`gid_attr_map` chooses which attribute is used for domain local groups in presentation. Defaults to
`spn`. Groups from a trust will always use spn.
You can then check the communication status of the daemon:
```bash
kanidm_unixd_status
```
If the daemon is working, you should see:
```
[2020-02-14T05:58:37Z INFO kanidm_unixd_status] working!
```
If it is not working, you will see an error message:
```
[2020-02-14T05:58:10Z ERROR kanidm_unixd_status] Error ->
Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }
```
For more information, see the [Troubleshooting](./pam_and_nsswitch.md#troubleshooting) section.
## nsswitch
When the daemon is running, you can add the nsswitch libraries to /etc/nsswitch.conf:
```
passwd: compat kanidm
group: compat kanidm
```
You can [create a user](./accounts_and_groups.md#creating-accounts) then
[enable POSIX feature on the user](./posix_accounts.md#enabling-posix-attributes-on-accounts).
You can then test that the POSIX extended user is able to be resolved with:
```bash
getent passwd <account name>
getent passwd testunix
testunix:x:3524161420:3524161420:testunix:/home/testunix:/bin/sh
```
You can also do the same for groups.
```bash
getent group <group name>
getent group testgroup
testgroup:x:2439676479:testunix
```
> **HINT** Remember to also create a UNIX password with something like
> `kanidm account posix set_password --name idm_admin demo_user`. Otherwise there will be no
> credential for the account to authenticate.
## PAM
> **WARNING:** Modifications to PAM configuration _may_ leave your system in a state where you are
> unable to login or authenticate. You should always have a recovery shell open while making changes
> (for example, root), or have access to single-user mode at the machine's console.
Pluggable Authentication Modules (PAM) is the mechanism a UNIX-like system uses to authenticate
users and control access to some resources. This is configured through a stack of modules that are
executed in order to evaluate the request; each module may request or reuse authentication token
information.
### Before You Start
You _should_ backup your /etc/pam.d directory from its original state as you _may_ change the PAM
configuration in a way that will not allow you to authenticate to your machine.
```bash
cp -a /etc/pam.d /root/pam.d.backup
```
### SUSE / OpenSUSE
To configure PAM on SUSE you must modify four files, which control the various stages of
authentication:
```bash
/etc/pam.d/common-account
/etc/pam.d/common-auth
/etc/pam.d/common-password
/etc/pam.d/common-session
```
> **IMPORTANT** By default these files are symlinks to their corresponding `-pc` file, for example
> `common-account -> common-account-pc`. If you directly edit these you are updating the inner
> content of the `-pc` file and it WILL be reset on a future upgrade. To prevent this you must first
> copy the `-pc` files. You can then edit the files safely.
```bash
cp /etc/pam.d/common-account-pc /etc/pam.d/common-account
cp /etc/pam.d/common-auth-pc /etc/pam.d/common-auth
cp /etc/pam.d/common-password-pc /etc/pam.d/common-password
cp /etc/pam.d/common-session-pc /etc/pam.d/common-session
```
The content should look like:
```
# /etc/pam.d/common-auth-pc
# Controls authentication to this system (verification of credentials)
auth required pam_env.so
auth [default=1 ignore=ignore success=ok] pam_localuser.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth sufficient pam_kanidm.so ignore_unknown_user
auth required pam_deny.so
# /etc/pam.d/common-account-pc
# Controls authorisation to this system (who may login)
account [default=1 ignore=ignore success=ok] pam_localuser.so
account sufficient pam_unix.so
account [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail
account sufficient pam_kanidm.so ignore_unknown_user
account required pam_deny.so
# /etc/pam.d/common-password-pc
# Controls flow of what happens when a user invokes the passwd command. Currently does NOT
# interact with kanidm.
password [default=1 ignore=ignore success=ok] pam_localuser.so
password required pam_unix.so use_authtok nullok shadow try_first_pass
password [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail
password required pam_kanidm.so
# /etc/pam.d/common-session-pc
# Controls setup of the user session once a successful authentication and authorisation has
# occurred.
session optional pam_systemd.so
session required pam_limits.so
session optional pam_unix.so try_first_pass
session optional pam_umask.so
session [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet_success quiet_fail
session optional pam_kanidm.so
session optional pam_env.so
```
> **WARNING:** Ensure that `pam_mkhomedir` or `pam_oddjobd` are _not_ present in any stage of your
> PAM configuration, as they interfere with the correct operation of the Kanidm tasks daemon.
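A quick, illustrative way to confirm that neither module is referenced is to search your PAM
configuration; this should print no matches:

```bash
grep -r 'pam_mkhomedir\|pam_oddjob' /etc/pam.d/
```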
### Fedora / CentOS
> **WARNING:** Kanidm currently has no support for SELinux policy - this may mean you need to run
> the daemon with permissive mode for the unconfined_service_t daemon type. To do this run:
> `semanage permissive -a unconfined_service_t`. To undo this run
> `semanage permissive -d unconfined_service_t`.
>
> You may also need to run `audit2allow` for sshd and other types to be able to access the UNIX
> daemon sockets.
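For convenience, the commands mentioned above as a block:

```bash
# Run the unconfined_service_t type in permissive mode:
semanage permissive -a unconfined_service_t
# Undo the change:
semanage permissive -d unconfined_service_t
```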
These files are managed by authselect as symlinks. You can either work with authselect, or remove
the symlinks first.
#### Without authselect
If you just remove the symlinks, edit the content:
```
# /etc/pam.d/password-auth
auth required pam_env.so
auth required pam_faildelay.so delay=2000000
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth [default=1 ignore=ignore success=ok] pam_localuser.so
auth sufficient pam_unix.so nullok try_first_pass
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth sufficient pam_kanidm.so ignore_unknown_user
auth required pam_deny.so
account sufficient pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_usertype.so issystem
account sufficient pam_kanidm.so ignore_unknown_user
account required pam_permit.so
password requisite pam_pwquality.so try_first_pass local_users_only
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password sufficient pam_kanidm.so
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_kanidm.so
# /etc/pam.d/system-auth
auth required pam_env.so
auth required pam_faildelay.so delay=2000000
auth sufficient pam_fprintd.so
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth [default=1 ignore=ignore success=ok] pam_localuser.so
auth sufficient pam_unix.so nullok try_first_pass
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth sufficient pam_kanidm.so ignore_unknown_user
auth required pam_deny.so
account sufficient pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_usertype.so issystem
account sufficient pam_kanidm.so ignore_unknown_user
account required pam_permit.so
password requisite pam_pwquality.so try_first_pass local_users_only
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password sufficient pam_kanidm.so
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_kanidm.so
```
#### With authselect
To work with authselect:
You will need to
[create a new profile](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel#creating-and-deploying-your-own-authselect-profile_configuring-user-authentication-using-authselect).
<!--TODO this URL is too short -->
First run the following command:
```bash
authselect create-profile kanidm -b sssd
```
A new folder, /etc/authselect/custom/kanidm, should be created. Inside that folder, create or
overwrite the following three files: nsswitch.conf, password-auth, system-auth. password-auth and
system-auth should be the same as above. nsswitch should be modified for your use case. A working
example looks like this:
```
passwd: compat kanidm sss files systemd
group: compat kanidm sss files systemd
shadow: files
hosts: files dns myhostname
services: sss files
netgroup: sss files
automount: sss files
aliases: files
ethers: files
gshadow: files
networks: files dns
protocols: files
publickey: files
rpc: files
```
Then run:
```bash
authselect select custom/kanidm
```
to update your profile.
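Assuming standard authselect tooling, you can then confirm the active profile with:

```bash
authselect current
```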
@@ -324,33 +359,34 @@ to update your profile.
### Check POSIX-status of Group and Configuration
If authentication is failing via PAM, make sure that a list of groups is configured in
`/etc/kanidm/unixd`:
```toml
pam_allowed_login_groups = ["example_group"]
```
Check the status of the group with `kanidm group posix show example_group`. If you get something
similar to the following example:
```bash
> kanidm group posix show example_group
Using cached token for name idm_admin
Error -> Http(500, Some(InvalidAccountState("Missing class: account && posixaccount OR group && posixgroup")),
"b71f137e-39f3-4368-9e58-21d26671ae24")
```
POSIX-enable the group with `kanidm group posix set example_group`. You should get a result similar
to this when you search for your group name:
```bash
> kanidm group posix show example_group
[ spn: example_group@kanidm.example.com, gidnumber: 3443347205 name: example_group, uuid: b71f137e-39f3-4368-9e58-21d26671ae24 ]
```
Also, ensure the target user is in the group by running:
```bash
> kanidm group list_members example_group
```
@@ -358,12 +394,16 @@ Also, ensure the target user is in the group by running:
For the unixd daemon, you can increase the logging with:
```bash
systemctl edit kanidm-unixd.service
```
And add the lines:
```
[Service]
Environment="RUST_LOG=kanidm=debug"
```
Then restart the kanidm-unixd.service.
@@ -371,33 +411,39 @@ The same pattern is true for the kanidm-unixd-tasks.service daemon.
To debug the pam module interactions, add `debug` to the module arguments such as:
```
auth sufficient pam_kanidm.so debug
```
### Check the Socket Permissions
Check that `/var/run/kanidm-unixd/sock` has permissions mode 777, and that non-root readers can see
it with ls or other tools.
Ensure that `/var/run/kanidm-unixd/task_sock` has permissions mode 700, and that it is owned by the
kanidm unixd process user.
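An illustrative way to inspect both sockets at once:

```bash
ls -l /var/run/kanidm-unixd/sock /var/run/kanidm-unixd/task_sock
```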
### Verify that You Can Access the Kanidm Server
You can check this with the client tools:
```bash
kanidm self whoami --name anonymous
```
### Ensure the Libraries are Correct
You should have:
```bash
/usr/lib64/libnss_kanidm.so.2
/usr/lib64/security/pam_kanidm.so
```
The exact path _may_ change depending on your distribution; `pam_unix.so` should be co-located with
`pam_kanidm.so`. Look for it with the find command:
```bash
find /usr/ -name 'pam_unix.so'
```
@@ -405,36 +451,41 @@ For example, on a Debian machine, it's located in `/usr/lib/x86_64-linux-gnu/sec
### Increase Connection Timeout
In some high-latency environments, you may need to increase the connection timeout. We set this low
to improve response on LANs, but over the internet this may need to be increased. By increasing the
conn_timeout, you will be able to operate on higher latency links, but some operations may take
longer to complete causing a degree of latency.
By increasing the cache_timeout, you will need to refresh less often, but an account lockout or
group change may not take effect until the cache_timeout expires. Note that this has security
implications:
```toml
# /etc/kanidm/unixd
# Seconds
conn_timeout = 8
# Cache timeout
cache_timeout = 60
```
### Invalidate or Clear the Cache
You can invalidate the kanidm_unixd cache with:
```bash
kanidm_cache_invalidate
```
You can clear (wipe) the cache with:
```bash
kanidm_cache_clear
```
There is an important distinction between these two - invalidated cache items may still be yielded
to a client request if the communication to the main Kanidm server is not possible. For example, you
may have your laptop in a park without wifi.
Clearing the cache, however, completely wipes all local data about all accounts and groups. If you
are relying on this cached (but invalid) data, you may lose access to your accounts until other
communication issues have been resolved.
View file
@@ -1,14 +1,13 @@
# RADIUS
Remote Authentication Dial In User Service (RADIUS) is a network protocol that is commonly used to
authenticate Wi-Fi devices or Virtual Private Networks (VPNs). While it should not be a sole point
of trust/authentication to an identity, it's still an important control for protecting network
resources.
Kanidm has a philosophy that each account can have multiple credentials which are related to their
devices, and limited to specific resources. RADIUS is no exception and has a separate credential for
each account to use for RADIUS access.
## Disclaimer
@@ -16,106 +15,103 @@ It's worth noting some disclaimers about Kanidm's RADIUS integration.
### One Credential - One Account
Kanidm normally attempts to have credentials for each _device_ and _application_ rather than the
legacy model of one to one.
The RADIUS protocol is only able to attest a _single_ password based credential in an authentication
attempt, which limits us to storing a single RADIUS password credential per account. However,
despite this limitation, it still greatly improves the situation by isolating the RADIUS credential
from the primary or application credentials of the account. This solves many common security
concerns around credential loss or disclosure, and prevents rogue devices from locking out accounts
as they attempt to authenticate to Wi-Fi with expired credentials.
Alternately, Kanidm supports mapping users with special configuration of certificates allowing some
systems to use EAP-TLS for RADIUS authentication. This returns to the "per device" credential model.
### Cleartext Credential Storage
RADIUS offers many different types of tunnels and authentication mechanisms. However, most client
devices "out of the box" only attempt a single type when a WPA2-Enterprise network is selected:
MSCHAPv2 with PEAP. This is a challenge-response protocol that requires clear text or Windows NT LAN
Manager (NTLM) credentials.
As MSCHAPv2 with PEAP is the only practical, universal RADIUS-type supported on all devices with
minimal configuration, we consider it imperative that it MUST be supported as the default. Esoteric
RADIUS types can be used as well, but this is up to administrators to test and configure.
Due to this requirement, we must store the RADIUS material as clear text or NTLM hashes. It would be
silly to think that NTLM is secure as it relies on the obsolete and deprecated MD4 cryptographic
hash, providing only an illusion of security.
This means Kanidm stores RADIUS credentials in the database as clear text. We believe this is a
reasonable decision and that it is a low risk to security because:
- The access controls around RADIUS secrets by default are strong, limited to only self-account read
and RADIUS-server read.
- As RADIUS credentials are separate from the primary account credentials and have no other rights,
their disclosure is not going to lead to a full account compromise.
- Having the credentials in clear text allows a better user experience as clients can view the
credentials at any time to enroll further devices.
### Service Accounts Do Not Have Radius Access
Due to the design of service accounts, they do not have access to RADIUS for credential assignment.
If you require RADIUS usage with a service account you _may_ need to use EAP-TLS or some other
authentication method.
## Account Credential Configuration
For an account to use RADIUS they must first generate a RADIUS secret unique to that account. By
default, all accounts can self-create this secret.
```bash
kanidm person radius generate_secret --name william william
kanidm person radius show_secret --name william william
```
## Account Group Configuration
In Kanidm, accounts which can authenticate to RADIUS must be a member of an allowed group. This
allows you to define which users or groups may use a Wi-Fi or VPN infrastructure, and provides a
path for revoking access to the resources through group management. The key point of this is that
service accounts should not be part of this group:
```bash
kanidm group create --name idm_admin radius_access_allowed
kanidm group add_members --name idm_admin radius_access_allowed william
```
## RADIUS Server Service Account
To read these secrets, the RADIUS server requires an account with the correct privileges. This can
be created and assigned through the group "idm_radius_servers", which is provided by default.
First, create the service account and add it to the group:
```bash
kanidm service-account create --name admin radius_service_account "Radius Service Account"
kanidm group add_members --name admin idm_radius_servers radius_service_account
```
Now reset the account password, using the `admin` account:
```bash
kanidm service-account credential generate-pw --name admin radius_service_account
```
## Deploying a RADIUS Container
We provide a RADIUS container that has all the needed integrations. This container requires some
cryptographic material, with the following files placed in `/etc/raddb/certs` (the path is
modifiable in the configuration):
| filename | description |
| -------- | ------------------------------------------------------------- |
| ca.pem | The signing CA of the RADIUS certificate |
| dh.pem | The output of `openssl dhparam -in ca.pem -out ./dh.pem 2048` |
| cert.pem | The certificate for the RADIUS server |
| key.pem | The signing key for the RADIUS certificate |
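For example, the Diffie-Hellman parameters from the table above can be generated with:

```bash
openssl dhparam -in ca.pem -out ./dh.pem 2048
```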
The configuration file (`/data/kanidm`) has the following template:
@@ -156,12 +152,10 @@ radius_clients = [
# radius_dh_path = "/etc/raddb/certs/dh.pem"
# the CA certificate
# radius_ca_path = "/etc/raddb/certs/ca.pem"
```
## A fully configured example
```toml
url = "https://example.com"
@@ -186,6 +180,7 @@ radius_clients = [
{ name = "docker" , ipaddr = "172.17.0.0/16", secret = "testing123" },
]
```
## Moving to Production
To expose this to a Wi-Fi infrastructure, add your NAS in the configuration:
@@ -199,11 +194,10 @@ radius_clients = [
Then re-create/run your docker instance and expose the ports by adding
`-p 1812:1812 -p 1812:1812/udp` to the command.
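A minimal sketch of that command with the ports exposed; volume and environment flags from your
existing setup are omitted here:

```bash
docker run --name radiusd \
    -p 1812:1812 -p 1812:1812/udp \
    kanidm/radius:latest
```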
If you have any issues, check the logs from the RADIUS output, as they tend to indicate the cause of
the problem. To increase the logging level you can re-run your environment with debug enabled:
```bash
docker rm radiusd
docker run --name radiusd \
    -e DEBUG=True \
@@ -214,8 +208,7 @@ docker run --name radiusd \
    kanidm/radius:latest
```
Note: the RADIUS container _is_ configured to provide
[Tunnel-Private-Group-ID](https://freeradius.org/rfc/rfc2868.html#Tunnel-Private-Group-ID), so if
you wish to use Wi-Fi-assigned VLANs on your infrastructure, you can assign these by groups in the
configuration file as shown in the above examples.
@ -1,26 +1,32 @@
# Traefik
Traefik is a flexible HTTP reverse proxy webserver that can be integrated with Docker to allow
dynamic configuration and to automatically use LetsEncrypt to provide valid TLS certificates. We can
leverage this in the setup of Kanidm by specifying the configuration of Kanidm and Traefik in the
same [Docker Compose configuration](https://docs.docker.com/compose/).
## Example setup
Create a new directory and copy the following YAML file into it as `docker-compose.yml`. Edit the
YAML to update the LetsEncrypt account email for your domain and the FQDN where Kanidm will be made
available. Ensure you adjust this file or Kanidm's configuration to have a matching HTTPS port; the
line `traefik.http.services.kanidm.loadbalancer.server.port=8443` sets this on the Traefik side.
> **NOTE** You will need to generate self-signed certificates for Kanidm, and copy the configuration
> into the `kanidm_data` volume. Some instructions are available in the "Installing the Server"
> section of this book.
`docker-compose.yml`
```yaml
version: "3.4"
services:
traefik:
image: traefik:v2.6
container_name: traefik
command:
- "--certificatesresolvers.http.acme.email=admin@example.com"
- "--certificatesresolvers.http.acme.email=admin@example.com"
- "--certificatesresolvers.http.acme.storage=/letsencrypt/acme.json"
- "--certificatesresolvers.http.acme.tlschallenge=true"
- "--entrypoints.websecure.address=:443"
@ -37,7 +43,7 @@ services:
- "443:443"
kanidm:
container_name: kanidm
image: kanidm/server:devel
restart: unless-stopped
volumes:
- kanidm_data:/data
@ -53,4 +59,4 @@ volumes:
kanidm_data: {}
```
Finally, you may run `docker-compose up` to start both Kanidm and Traefik.
@ -1,37 +1,33 @@
# Introduction to Kanidm
Kanidm is an identity management server, acting as an authority on account information,
authentication and authorisation within a technical environment.
The intent of the Kanidm project is to:
- Provide a single truth source for accounts, groups and privileges.
- Enable integrations to systems and services so they can authenticate accounts.
- Make system, network, application and web authentication easy and accessible.
- Secure and reliable by default, aiming for the highest levels of quality.
{{#template templates/kani-warning.md imagepath=images title=NOTICE text=Kanidm is still a work in
progress. Many features will evolve and change over time which may not be suitable for all users. }}
## Why do I want Kanidm?
Whether you work in a business, a volunteer organisation, or are an enthusiast who manages their
personal services, you need methods of authenticating and identifying to your systems, and
subsequently, ways to determine what authorisation and privileges you have while accessing these
systems.
We've probably all been in workplaces where you end up with multiple accounts on various systems -
one for a workstation, different SSH keys for different tasks, maybe some shared account passwords.
Not only is it difficult for people to manage all these different credentials and what they have
access to, but it also means that sometimes these credentials have more access or privilege than
they require.
Kanidm acts as a central authority of accounts in your organisation and allows each account to
associate many devices and credentials with different privileges. An example of how this looks:
┌──────────────────┐
┌┴─────────────────┐│
@ -78,19 +74,20 @@ many devices and credentials with different privileges. An example of how this l
│ You │
└──────────┘
A key design goal is that you authenticate with your device in some manner, and then your device
will continue to authenticate you in the future. Each of these different types of credentials, from
SSH keys, application passwords, to RADIUS passwords and others, are "things your device knows".
Each password has limited capability, and can only access that exact service or resource.
This helps improve security; a compromise of the service or the network transmission does not grant
you unlimited access to your account and all its privileges. As the credentials are specific to a
device, if a device is compromised you can revoke its associated credentials. If a specific service
is compromised, only the credentials for that service need to be revoked.
Due to this model, and the design of Kanidm to centre the device and to have more per-service
credentials, workflows and automation are added or designed to reduce human handling.
## Library documentation
Looking for the `rustdoc` documentation for the libraries themselves?
[Click here!](https://kanidm.com/documentation/)
@ -1,17 +1,17 @@
# Monitoring the platform
The monitoring design of Kanidm is still very much in its infancy -
[take part in the discussion at github.com/kanidm/kanidm/issues/216](https://github.com/kanidm/kanidm/issues/216).
## kanidmd
kanidmd currently responds to HTTP GET requests at the `/status` endpoint with a JSON object of
either "true" or "false". `true` indicates that the platform is responding to requests.
| URL | `<hostname>/status` |
| ------------------ | ------------------------------------------------ |
| Example URL | `https://example.com/status` |
| Expected response | One of either `true` or `false` (without quotes) |
| Additional Headers | x-kanidm-opid |
| Content Type | application/json |
| Cookies | kanidm-session |
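As a minimal sketch, a scripted liveness probe against this endpoint could look like this (the
hostname is a placeholder):

```bash
# Query the status endpoint; a healthy server prints "true".
curl -s https://example.com/status
```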
@ -2,14 +2,14 @@
Packages are known to exist for the following distributions:
- [Arch Linux](https://aur.archlinux.org/packages?O=0&K=kanidm)
- [OpenSUSE](https://software.opensuse.org/search?baseproject=ALL&q=kanidm)
- [NixOS](https://search.nixos.org/packages?sort=relevance&type=packages&query=kanidm)
To ease packaging for your distribution, the `Makefile` has targets for sets of binary outputs.
| Target | Description |
| ---------------------- | --------------------------- |
| `release/kanidm` | Kanidm's CLI |
| `release/kanidmd` | The server daemon |
| `release/kanidm-ssh` | SSH-related utilities |
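For example, to build only the CLI from the table above:

```bash
# Produces the Kanidm CLI release binaries.
make release/kanidm
```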
@ -5,7 +5,8 @@
This happens in Docker currently, and here are some instructions for doing it for Ubuntu:
1. Start in the root directory of the repository.
2. Run `./platform/debian/ubuntu_docker_builder.sh`. This will start a container, mounting the
   repository in `~/kanidm/`.
3. Install the required dependencies by running `./platform/debian/install_deps.sh`.
4. Building packages uses make; get a list of targets by running `make -f ./platform/debian/Makefile help`
@ -23,12 +24,16 @@ debs/all:
build all the debs
```
5. So if you wanted to build the package for the Kanidm CLI, run
`make -f ./platform/debian/Makefile debs/kanidm`.
6. The package will be copied into the `target` directory of the repository on the docker host - not
just in the container.
## Adding a package
There's a set of default configuration files in `packaging/`; if you want to add a package
definition, add a folder with the package name and then files in there will be copied over the top
of the ones from `packaging/` on build.
You'll need two custom files at minimum:
@ -38,14 +43,14 @@ You'll need two custom files at minimum:
There are a lot of other files that can go into a .deb; some handy ones are:
| Filename | What it does |
| -------- | ------------------------------------------------------------------------ |
| preinst | Runs before installation occurs |
| postrm | Runs after removal happens |
| prerm | Runs before removal happens - handy to shut down services. |
| postinst | Runs after installation occurs - we're using that to show notes to users |
## Some Debian packaging links
- [DH reference](https://www.debian.org/doc/manuals/maint-guide/dreq.en.html) - Explains what needs
to be done for packaging (mostly).
- [Reference for what goes in control files](https://www.debian.org/doc/debian-policy/ch-controlfields)
@ -1,32 +1,33 @@
# Password Quality and Badlisting
Kanidm embeds a set of tools to help your users use and create strong passwords. This is important
as not all user types will require multi-factor authentication (MFA) for their roles, but
compromised accounts still pose a risk. There may also be deployment or other barriers to a site
rolling out sitewide MFA.
## Quality Checking
Kanidm enforces that all passwords are checked by the library
"[zxcvbn](https://github.com/dropbox/zxcvbn)". This has a large number of checks for password
quality. It also provides constructive feedback to users on how to improve their passwords if they
are rejected.
Some things that zxcvbn looks for are use of the account name or email in the password, common
passwords, low-entropy passwords, dates, reversed words, and more.
This library cannot be disabled - all passwords in Kanidm must pass this check.
## Password Badlisting
This is the process of configuring a list of passwords to exclude from being able to be used. This
is especially useful if a specific business has been notified of compromised accounts, allowing you
to maintain a list of customised excluded passwords.
The other value to this feature is being able to badlist common passwords that zxcvbn does not
detect, or from other large scale password compromises.
By default we ship with a preconfigured badlist that is updated over time as new password breach
lists are made available.
The password badlist by default is append only, meaning it can only grow, but will never remove
passwords previously considered breached.
@ -35,18 +36,21 @@ passwords previously considered breached.
You can display the current badlist with:
```bash
kanidm system pw-badlist show
```
You can update your own badlist with:
```bash
kanidm system pw-badlist upload "path/to/badlist" [...]
```
Multiple bad lists can be listed and uploaded at once. These are preprocessed to identify and remove
passwords that zxcvbn and our password rules would already have eliminated. That helps to make the
bad list more efficient to operate over at run time.
## Password Rotation
Kanidm will never support this "anti-feature". Password rotation encourages poor password hygiene
and is not shown to prevent any attacks.
@ -1,73 +1,69 @@
# POSIX Accounts and Groups
Kanidm has features that enable its accounts and groups to be consumed on POSIX-like machines, such
as Linux, FreeBSD, or others. Both service accounts and person accounts can be used on POSIX
systems.
## Notes on POSIX Features
Many design decisions have been made in the POSIX features of Kanidm that are intended to make
distributed systems easier to manage and client systems more secure.
### UID and GID Numbers
In Kanidm there is no difference between a UID and a GID number. On most UNIX systems a user will
create all files with a primary user and group. The primary group is effectively equivalent to the
permissions of the user. It is very easy to see scenarios where someone may change the account to
have a shared primary group (ie `allusers`), but without changing the umask on all client systems.
This can cause users' data to be compromised by any member of the same shared group.
To prevent this, many systems create a "user private group", or UPG. This group has the GID number
matching the UID of the user, and the user sets their primary group ID to the GID number of the UPG.
As there is now an equivalence between the UID and GID number of the user and the UPG, there is no
benefit in separating these values. As a result Kanidm accounts _only_ have a GID number, which is
also considered to be their UID number. This has the benefit of preventing the accidental creation
of a separate group that has an overlapping GID number (the `uniqueness` attribute of the schema
will block the creation).
### UPG Generation
Due to the requirement that a user have a UPG for security, many systems create these as two
independent items. For example in /etc/passwd and /etc/group:
```
# passwd
william:x:654401105:654401105::/home/william:/bin/zsh
# group
william:x:654401105:
```
Other systems like FreeIPA use a plugin that generates a UPG as a separate group entry on creation
of the account. This means there are two entries for an account, and they must be kept in lock-step.
Kanidm does neither of these. As the GID number of the user must be unique, and a user implies the
UPG must exist, we can generate UPGs on-demand from the account. This has a single side effect -
that you are unable to add any members to a UPG - given the nature of a user private group, this is
the point.
### GID Number Generation
Kanidm will have asynchronous replication as a feature between writable database servers. In this
case, we need to be able to allocate stable and reliable GID numbers to accounts on replicas that
may not be in continual communication.
To do this, we use the last 32 bits of the account or group's UUID to generate the GID number.
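As an illustrative sketch (the UUID below is made up), this is equivalent to reading the final 8
hex digits of the UUID as an integer:

```bash
# Hypothetical example of deriving a GID number from an entry UUID.
uuid="ac0dcfd2-6b7b-4c47-ae38-2700315f0b96"
printf '%d\n' "0x${uuid: -8}"
# -> 828312470
```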
A valid concern is the possibility of duplication in the lower 32 bits. Given the birthday problem,
if you have 77,000 groups and accounts, you have a 50% chance of duplication. With 50,000 you have a
20% chance, 9,300 you have a 1% chance and with 2900 you have a 0.1% chance.
We advise that if you have a site with >10,000 users you should use an external system to allocate
GID numbers serially or consistently to avoid potential duplication events.
This design decision is made as most small sites will benefit greatly from the auto-allocation
policy and the simplicity of its design, while larger enterprises will already have IDM or business
process applications for HR/People that are capable of supplying this kind of data in batch jobs.
## Enabling POSIX Attributes
@ -78,48 +74,58 @@ To enable POSIX account features and IDs on an account, you require the permissi
You can then use the following command to enable POSIX extensions on a person or service account.
```bash
kanidm [person OR service-account] posix set --name idm_admin <account_id> [--shell SHELL --gidnumber GID]
kanidm person posix set --name idm_admin demo_user
kanidm person posix set --name idm_admin demo_user --shell /bin/zsh
kanidm person posix set --name idm_admin demo_user --gidnumber 2001
kanidm service-account posix set --name idm_admin demo_account
kanidm service-account posix set --name idm_admin demo_account --shell /bin/zsh
kanidm service-account posix set --name idm_admin demo_account --gidnumber 2001
```
You can view the account's POSIX token details with:
```bash
kanidm person posix show --name anonymous demo_user
kanidm service-account posix show --name anonymous demo_account
```
### Enabling POSIX Attributes on Groups
To enable POSIX group features and IDs on an account, you require the permission
`idm_group_unix_extend_priv`. This is provided to `idm_admins` in the default database.
You can then use the following command to enable POSIX extensions:
```bash
kanidm group posix set --name idm_admin <group_id> [--gidnumber GID]
kanidm group posix set --name idm_admin demo_group
kanidm group posix set --name idm_admin demo_group --gidnumber 2001
```
You can view the group's POSIX token details with:
```bash
kanidm group posix show --name anonymous demo_group
```
POSIX-enabled groups will supply their members as POSIX members to clients. There is no special or
separate type of membership for POSIX members required.
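Assuming a client is running the Kanidm unixd resolver described elsewhere in this book, a
POSIX-enabled group and its members should then resolve through the usual NSS tooling:

```bash
# Resolve the group via NSS; demo_group follows the earlier examples.
getent group demo_group
```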
## Troubleshooting Common Issues
### subuid conflicts with Podman
Due to the way that Podman operates, in some cases using the Kanidm client inside non-root
containers with Kanidm accounts may fail with an error such as:
```
ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid
```
This is a fault in Podman and how it attempts to provide non-root containers when UID/GIDs are
greater than 65535. In this case you may manually allocate your user's GID number to be between
1000 and 65535, which may not trigger the fault.
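For example, reusing the `demo_user` account and the command form shown earlier in this chapter,
one possible workaround is:

```bash
# Reallocate the GID number into the 1000 - 65535 range; the value is illustrative.
kanidm person posix set --name idm_admin demo_user --gidnumber 60000
```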
@ -2,23 +2,30 @@
## Software Installation Method
> **NOTE** Our preferred deployment method is in containers, and this documentation assumes you're
> running in docker. Kanidm will alternately run as a daemon/service, and server builds are
> available for multiple platforms if you prefer this option.
We provide docker images for the server components. They can be found at:
- <https://hub.docker.com/r/kanidm/server>
- <https://hub.docker.com/r/kanidm/radius>
You can fetch these by running the commands:
```bash
docker pull kanidm/server:x86_64_latest
docker pull kanidm/radius:latest
```
If you do not meet the [system requirements](#system-requirements) for your CPU you should use:
```bash
docker pull kanidm/server:latest
```
You may need to adjust your example commands throughout this document to suit your desired server
type.
## Development Version
@ -34,8 +41,10 @@ report issues, we will make every effort to help resolve them.
If you are using the x86\_64 cpu-optimised version, you must have a CPU that is from 2013 or newer
(Haswell, Ryzen). The following instruction flags are used.
```
cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2,
bmi, bmi2, f16c, fma, lzcnt, movbe, xsave
```
Older or unsupported CPUs may raise a SIGILL (Illegal Instruction) on hardware that is not
supported by the project.
@ -45,96 +54,111 @@ In this case, you should use the standard server:latest image.
In the future we may apply a baseline of flags as a requirement for x86\_64 for the server:latest
image. These flags will be:
```
cmov, cx8, fxsr, mmx, sse, sse2
```
{{#template templates/kani-alert.md imagepath=images title=Tip text=You can check your cpu flags on
Linux with the command `lscpu` }}
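For instance, to look for a few of the flags listed above:

```bash
# Search the CPU feature list for some of the required instruction flags.
lscpu | grep -i -E 'avx2|bmi2|fma'
```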
#### Memory
Kanidm extensively uses memory caching, trading memory consumption to improve parallel throughput.
You should expect to see 64KB of RAM per entry in your database, depending on cache tuning and
settings.
#### Disk
You should expect to use up to 8KB of disk per entry you plan to store. As an estimate, a 10,000
entry database will consume 40MB, and a 100,000 entry database will consume 400MB.
For best performance, you should use non-volatile memory express (NVMe), or other flash storage
media.
## TLS
You'll need a volume where you can place configuration, certificates, and the database:
```bash
docker volume create kanidmd
```
You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring Transport
Layer Security (TLS, which replaces the deprecated Secure Sockets Layer, SSL) is explained in
[why tls](./why_tls.md). In summary, TLS is our root of trust between the server and clients, and a
critical element of ensuring a secure system.
The key.pem should be a single PEM private key, with no encryption. The file content should be
similar to:
```
-----BEGIN RSA PRIVATE KEY-----
MII...<base64>
-----END RSA PRIVATE KEY-----
```
The chain.pem is a series of PEM formatted certificates. The leaf certificate, or the certificate
that matches the private key should be the first certificate in the file. This should be followed by
the series of intermediates, and the final certificate should be the CA root. For example:
```
-----BEGIN CERTIFICATE-----
<leaf certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<intermediate certificate>
-----END CERTIFICATE-----
[ more intermediates if needed ]
-----BEGIN CERTIFICATE-----
<ca/root certificate>
-----END CERTIFICATE-----
```
> **HINT** If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are
> already correctly formatted as required for Kanidm.
You can validate that the leaf certificate matches the key with the command:
```bash
# ECDSA
openssl ec -in key.pem -pubout | openssl sha1
1c7e7bf6ef8f83841daeedf16093bda585fc5bb0
openssl x509 -in chain.pem -noout -pubkey | openssl sha1
1c7e7bf6ef8f83841daeedf16093bda585fc5bb0
# RSA
# openssl rsa -noout -modulus -in key.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
# openssl x509 -noout -modulus -in chain.pem | openssl sha1
d2188932f520e45f2e76153fbbaf13f81ea6c1ef
```
If your chain.pem contains the CA certificate, you can validate this file with the command:
```bash
openssl verify -CAfile chain.pem chain.pem
```
If your chain.pem does not contain the CA certificate (Let's Encrypt chains do not contain the CA,
for example) then you can validate with this command:
```bash
openssl verify -untrusted fullchain.pem fullchain.pem
```
> **NOTE** Here "-untrusted" flag means a list of further certificates in the chain to build up
> to the root is provided, but that the system CA root should be consulted. Verification is NOT bypassed
> or allowed to be invalid.
> **NOTE** Here "-untrusted" flag means a list of further certificates in the chain to build up to
> the root is provided, but that the system CA root should be consulted. Verification is NOT
> bypassed or allowed to be invalid.
If these verifications pass you can now use these certificates with Kanidm. To put the certificates
in place you can use a shell container that mounts the volume such as:
```bash
docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c "cp /work/* /data/"
```
OR for a shell into the volume:
```bash
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
```
@ -1,25 +1,22 @@
# Recycle Bin
The recycle bin is a storage of deleted entries from the server. This allows recovery from mistakes
for a period of time.
{{#template\
templates/kani-warning.md imagepath=images title=Warning! text=The recycle bin is a best effort -
when recovering in some cases not everything can be "put back" the way it was. Be sure to check your
entries are valid once they have been revived. }}
## Where is the Recycle Bin?
The recycle bin is stored as part of your main database - it is included in all backups and
restores, just like any other data. It is also replicated between all servers.
## How do Things Get Into the Recycle Bin?
Any delete operation of an entry will cause it to be sent to the recycle bin. No configuration or
specification is required.
## How Long Do Items Stay in the Recycle Bin?
@ -29,46 +26,56 @@ Currently they stay up to 1 week before they are removed.
You can display all items in the Recycle Bin with:
```bash
kanidm recycle-bin list --name admin
```
You can show a single item with:
```bash
kanidm recycle-bin get --name admin <uuid>
```
An entry can be revived with:
```bash
kanidm recycle-bin revive --name admin <uuid>
```
## Edge Cases
The recycle bin is a best effort to restore your data - there are some cases where the revived
entries may not be the same as they were when they were deleted. This generally revolves around
reference types such as group membership, or when the reference type includes supplemental map data
such as the oauth2 scope map type.
An example of this data loss is the following steps:
```
add user1
add group1
add user1 as member of group1
delete user1
delete group1
revive user1
revive group1
```
In this series of steps, due to the way that referential integrity is implemented, the membership of
user1 in group1 would be lost in this process. To explain why:
```
add user1
add group1
add user1 as member of group1 // refint between the two established, and memberof added
delete user1 // group1 removes member user1 from refint
delete group1 // user1 now removes memberof group1 from refint
revive user1 // re-add groups based on directmemberof (empty set)
revive group1 // no members
```
These issues could be looked at again in the future, but for now we think that deletes of groups
are rare - we expect the recycle bin to save you in "oops" moments, and in a majority of cases you
may delete a group or a user and then restore them. To handle this series of steps requires extra
code complexity in how we flag operations. For more, see
[This issue on github](https://github.com/kanidm/kanidm/issues/177).
@ -1,50 +1,51 @@
# Security Hardening
Kanidm ships with a secure-by-default configuration, however that is only as strong as the
environment that Kanidm operates in. This could be your container environment or your Unix-like
system.
This chapter will detail a number of warnings and security practices you should follow to ensure
that Kanidm operates in a secure environment.
The main server is a high-value target for a potential attack, as Kanidm serves as the authority on
identity and authorisation in a network. Compromise of the Kanidm server is equivalent to a
full-network takeover, also known as "game over".
The unixd resolver is also a high value target as it can be accessed to allow unauthorised access to
a server, to intercept communications to the server, or more. This also must be protected carefully.
For this reason, Kanidm's components must be protected carefully. Kanidm avoids many classic attacks
by being developed in a memory safe language, but risks still exist.
## Startup Warnings
At startup Kanidm will warn you if the environment it is running in is suspicious or has risks. For
example:
```bash
kanidmd server -c /tmp/server.toml
WARNING: permissions on /tmp/server.toml may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: /tmp/server.toml has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: /tmp/server.toml owned by the current uid, which may allow file permission changes. This could be a security risk ...
WARNING: permissions on ../insecure/ca.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/cert.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/key.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: ../insecure/key.pem has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: DB folder /tmp has 'everyone' permission bits in the mode. This could be a security risk ...
```
Each warning highlights an issue that may exist in your environment. It is not possible for us to
prescribe an exact configuration that may secure your system. This is why we only present possible
risks.
### Should be Read-only to Running UID
Files, such as configuration files, should be read-only to the UID of the Kanidm daemon. If an
attacker is able to gain code execution, they are then unable to modify the configuration, to
overwrite files in other locations, or to tamper with the system's configuration.
This can be prevented by changing the file's ownership to another user, or removing "write" bits
from the group.
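For instance, applying the ownership and mode used in the "A Secure Example" section below; the
path assumes a system install:

```bash
# Make the configuration readable only by root and the kanidm group.
chown root:kanidm /etc/kanidm/server.toml
chmod 440 /etc/kanidm/server.toml
```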
### 'everyone' Permission Bits in the Mode
@ -57,93 +58,103 @@ configuration, and removing "everyone" bits from the files in question.
### Owned by the Current UID, Which May Allow File Permission Changes
File permissions in UNIX systems are a discretionary access control system, which means the named
UID owner is able to further modify the access of a file regardless of the current settings. For
example:
```bash
[william@amethyst 12:25] /tmp > touch test
[william@amethyst 12:25] /tmp > ls -al test
-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp > chmod 400 test
[william@amethyst 12:25] /tmp > ls -al test
-r-------- 1 william wheel 0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp > chmod 644 test
[william@amethyst 12:26] /tmp > ls -al test
-rw-r--r-- 1 william wheel 0 29 Jul 12:25 test
```
Notice that even though the file was set to "read only" for william, with no permission granted to
any other users, user "william" can change the bits to add write permissions back or grant
permissions to other users.
This can be prevented by making the file owner a different UID than the running process for kanidm.
### A Secure Example
Between these three issues it can be hard to see a possible strategy to secure files, however one
way exists - group read permissions. The most effective method to secure resources for Kanidm is to
set configurations to:
```bash
[william@amethyst 12:26] /etc/kanidm > ls -al server.toml
-r--r----- 1 root kanidm 212 28 Jul 16:53 server.toml
```
The Kanidm server should be run as "kanidm:kanidm" with the appropriate user and user private group
created on your system. This applies to unixd configuration as well.
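A minimal sketch of creating that user and user private group on a typical Linux system (flags and
shell path may vary by distribution):

```bash
# Create a system user "kanidm" with a matching private group and no login shell.
groupadd --system kanidm
useradd --system --gid kanidm --no-create-home --shell /usr/sbin/nologin kanidm
```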
For the database your data folder should be:
```bash
[root@amethyst 12:38] /data/kanidm > ls -al .
total 1064
drwxrwx--- 3 root kanidm 96 29 Jul 12:38 .
-rw-r----- 1 kanidm kanidm 544768 29 Jul 12:38 kanidm.db
```
This means 770 root:kanidm. This allows Kanidm to create new files in the folder, but prevents
Kanidm from being able to change the permissions of the folder. Because the folder does not have
"everyone" mode bits, the content of the database is secure because users can now cd/read
from the directory.
"everyone" mode bits, the content of the database is secure because users can now cd/read from the
directory.
Configurations for clients, such as /etc/kanidm/config, should be secured with read-only permissions
and owned by root:
```bash
[william@amethyst 12:26] /etc/kanidm > ls -al config
-r--r--r-- 1 root root 38 10 Jul 10:10 config
```
This file should be "everyone"-readable, which is why the bits are defined as such.
> NOTE: Why do you use 440 or 444 modes?
>
> A bug exists in the implementation of readonly() in Rust that checks this as "does a write bit
> exist for any user" vs "can the current UID write the file?". This distinction is subtle but it
> affects the check. We don't believe this is a significant issue though, because setting these to
> 440 and 444 helps to prevent accidental changes by an administrator anyway.
## Running as Non-root in docker
The commands provided in this book will run kanidmd as "root" in the container to make the
onboarding smoother. However, this is not recommended in production for security reasons.
You should allocate unique UID and GID numbers for the service to run as on your host system. In
this example we use `1000:1000`.
You will need to adjust the permissions on the `/data` volume to ensure that the process can manage
the files. Kanidm requires the ability to write to the `/data` directory to create the sqlite files.
This UID/GID number should match the above. You could consider the following changes to help isolate
these changes:
```bash
docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
mkdir /data/db/
chown 1000:1000 /data/db/
chmod 750 /data/db/
sed -i -e "s/db_path.*/db_path = \"\/data\/db\/kanidm.db\"/g" /data/server.toml
chown root:root /data/server.toml
chmod 644 /data/server.toml
```
Note that the example commands all run inside the docker container.
You can then use this to run the Kanidm server in docker with a user:
```bash
docker run --rm -i -t -u 1000:1000 -v kanidmd:/data kanidm/server:latest /sbin/kanidmd ...
```
> **HINT** You need to use the UID or GID number with the `-u` argument, as the container can't
> resolve usernames from the host system.
@ -2,49 +2,56 @@
### Configuring server.toml
You need a configuration file in the volume named `server.toml` (within the container it should be
`/data/server.toml`). Its contents should be as follows:
```
{{#rustdoc_include ../../examples/server_container.toml}}
```
This example is located in
[examples/server_container.toml](https://github.com/kanidm/kanidm/blob/master/examples/server_container.toml).
{{#template templates/kani-warning.md imagepath=images title=Warning! text=You MUST set the `domain`
name correctly, aligned with your `origin`, else the server may refuse to start or some features
(e.g. webauthn, oauth) may not work correctly! }}
### Check the configuration is valid
You should test that your configuration is valid before you proceed.
```bash
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
```
### Default Admin Account
Then you can set up the initial admin account and initialise the database into your volume. This
command will generate a new random password for the admin account.
```bash
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd recover_account -c /data/server.toml admin
# success - recover_account password for user admin: vv...
```
### Run the Server
Now we can run the server so that it can accept connections. This defaults to using
`-c /data/server.toml`.
```bash
docker run -p 443:8443 -v kanidmd:/data kanidm/server:latest
```
### Using the NET\_BIND\_SERVICE capability
If you plan to run without using docker port mapping or some other reverse proxy, and your
bindaddress or ldapbindaddress port is less than `1024`, you will need the `NET_BIND_SERVICE`
capability in docker to allow these port binds. You can add this with `--cap-add` in your docker
run command.
```bash
docker run --cap-add NET_BIND_SERVICE --network [host OR macvlan OR ipvlan] \
-v kanidmd:/data kanidm/server:latest
```
@ -2,18 +2,22 @@
### Preserving the Previous Image
You may wish to preserve the previous image before updating. This is useful if an issue is
encountered during an upgrade.
docker tag kanidm/server:latest kanidm/server:<DATE>
docker tag kanidm/server:latest kanidm/server:2022-10-24
```bash
docker tag kanidm/server:latest kanidm/server:<DATE>
docker tag kanidm/server:latest kanidm/server:2022-10-24
```
### Update your Image
Pull the latest version of Kanidm that matches your CPU profile:
docker pull kanidm/server:latest
docker pull kanidm/server:x86_64_latest
```bash
docker pull kanidm/server:latest
docker pull kanidm/server:x86_64_latest
```
### Perform a backup
@ -21,42 +25,50 @@ See [backup and restore](backup_restore.md)
### Update your Instance
{{#template
templates/kani-warning.md
imagepath=images
title=WARNING
text=It is not always guaranteed that downgrades are possible. It is critical you know how to backup and restore before you proceed with this step.
}}
{{#template templates/kani-warning.md imagepath=images title=WARNING text=It is not always
guaranteed that downgrades are possible. It is critical you know how to backup and restore before
you proceed with this step. }}
Docker updates by deleting and recreating the instance. All data that needs to be preserved is in
your storage volume.
docker stop <previous instance name>
```bash
docker stop <previous instance name>
```
You can test that your configuration is correct, and that the server will correctly start.
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
```bash
docker run --rm -i -t -v kanidmd:/data \
kanidm/server:latest /sbin/kanidmd configtest -c /data/server.toml
```
You can then follow through with the upgrade:
docker run -p PORTS -v kanidmd:/data \
OTHER_CUSTOM_OPTIONS \
kanidm/server:latest
```bash
docker run -p PORTS -v kanidmd:/data \
OTHER_CUSTOM_OPTIONS \
kanidm/server:latest
```
Once you confirm the upgrade is successful, you can delete the previous instance:
docker rm <previous instance name>
```bash
docker rm <previous instance name>
```
If you encounter an issue, you can revert to the previous version:
docker stop <new instance name>
docker start <previous instance name>
```bash
docker stop <new instance name>
docker start <previous instance name>
```
If you deleted the previous instance, you can recreate it from your preserved tag instead.
docker run -p ports -v volumes kanidm/server:<DATE>
```bash
docker run -p ports -v volumes kanidm/server:<DATE>
```
In some cases the downgrade to the previous instance may not work. If the server from your previous
version fails to start, you may need to restore from backup.
View file
@ -1,102 +1,119 @@
# SSH Key Distribution
To support SSH authentication securely to a large set of hosts running SSH, we support
distribution of SSH public keys via the Kanidm server. Both persons and service accounts
support SSH public keys on their accounts.
To support SSH authentication securely to a large set of hosts running SSH, we support distribution
of SSH public keys via the Kanidm server. Both persons and service accounts support SSH public keys
on their accounts.
## Configuring Accounts
To view the current SSH public keys on accounts, you can use:
kanidm person|service-account ssh list_publickeys --name <login user> <account to view>
kanidm person|service-account ssh list_publickeys --name idm_admin william
```bash
kanidm person|service-account ssh list_publickeys --name <login user> <account to view>
kanidm person|service-account ssh list_publickeys --name idm_admin william
```
All users by default can self-manage their SSH public keys. To upload a key, a command like this
is the best way to do so:
By default, all users can self-manage their SSH public keys. To upload a key, use a command like
this:
kanidm person|service-account ssh add_publickey --name william william 'test-key' "`cat ~/.ssh/id_rsa.pub`"
```bash
kanidm person|service-account ssh add_publickey --name william william 'test-key' "`cat ~/.ssh/id_rsa.pub`"
```
To remove (revoke) an SSH public key, delete it by its tag name:
kanidm person|service-account ssh delete_publickey --name william william 'test-key'
```bash
kanidm person|service-account ssh delete_publickey --name william william 'test-key'
```
## Security Notes
As a security feature, Kanidm validates *all* public keys to ensure they are valid SSH public keys.
As a security feature, Kanidm validates _all_ public keys to ensure they are valid SSH public keys.
Uploading a private key or other invalid data will be rejected. For example:
kanidm person|service-account ssh add_publickey --name william william 'test-key' "invalid"
Enter password:
... Some(SchemaViolation(InvalidAttributeSyntax)))' ...
```bash
kanidm person|service-account ssh add_publickey --name william william 'test-key' "invalid"
Enter password:
... Some(SchemaViolation(InvalidAttributeSyntax)))' ...
```
## Server Configuration
### Public Key Caching Configuration
If you have kanidm_unixd running, you can use it to locally cache SSH public keys. This means you
can still SSH into your machines, even if your network is down, you move away from Kanidm, or
some other interruption occurs.
can still SSH into your machines, even if your network is down, you move away from Kanidm, or some
other interruption occurs.
The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so should be installed
on the servers. It communicates to kanidm_unixd, so you should have a configured PAM/nsswitch
setup as well.
The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so it should be
installed on the servers. It communicates with kanidm_unixd, so you should have a configured
PAM/nsswitch setup as well.
You can test this is configured correctly by running:
kanidm_ssh_authorizedkeys <account name>
```bash
kanidm_ssh_authorizedkeys <account name>
```
If the account has SSH public keys you should see them listed, one per line.
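For example, with the account name from the earlier examples, the output has the shape of standard
authorized_keys entries (key material below is truncated and hypothetical):

```bash
kanidm_ssh_authorizedkeys william
# ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... test-key
```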
To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to
contain the lines:
To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the
lines:
PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys %u
AuthorizedKeysCommandUser nobody
```
PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys %u
AuthorizedKeysCommandUser nobody
```
Restart sshd, and then attempt to authenticate with the keys.
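On a systemd-based distribution that sequence might look like the following sketch (host and user
are placeholders):

```bash
sudo systemctl restart sshd
# Force key-based auth so a fallback to passwords doesn't mask a broken setup.
ssh -o PasswordAuthentication=no william@server.example.com
```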
It's highly recommended you keep your client configuration and sshd_config in a configuration
management tool such as salt or ansible.
> **NOTICE:**
> With a working SSH key setup, you should also consider adding the following
> **NOTICE:** With a working SSH key setup, you should also consider adding the following
> sshd_config options as hardening.
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no
KerberosAuthentication no
```
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no
KerberosAuthentication no
```
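Before restarting with the hardened options in place, OpenSSH's test mode can confirm that the
file still parses:

```bash
# -t validates sshd_config and exits non-zero on errors, without starting the daemon.
sudo sshd -t
```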
### Direct Communication Configuration
In this mode, the authorised keys commands will contact Kanidm directly.
> **NOTICE:**
> As Kanidm is contacted directly there is no SSH public key cache. Any network
> outage or communication loss may prevent you accessing your systems. You should
> only use this version if you have a requirement for it.
> **NOTICE:** As Kanidm is contacted directly there is no SSH public key cache. Any network outage
> or communication loss may prevent you from accessing your systems. You should only use this
> version if you have a requirement for it.
The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so should be installed
on the servers.
The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so it should be
installed on the servers.
To configure the tool, you should edit /etc/kanidm/config, as documented in [clients](./client_tools.md)
To configure the tool, you should edit /etc/kanidm/config, as documented in
[clients](./client_tools.md).
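As a rough sketch, the file usually needs little more than the server URI; the option names here
are assumptions drawn from the client tools documentation, so verify them against your installed
version:

```bash
# Illustrative only: point the client tools at your Kanidm server.
cat <<'EOF' | sudo tee /etc/kanidm/config
uri = "https://idm.example.com:8443"
verify_ca = true
verify_hostnames = true
EOF
```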
You can test this is configured correctly by running:
kanidm_ssh_authorizedkeys_direct -D anonymous <account name>
```bash
kanidm_ssh_authorizedkeys_direct -D anonymous <account name>
```
If the account has SSH public keys you should see them listed, one per line.
To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to
contain the lines:
To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the
lines:
PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u
AuthorizedKeysCommandUser nobody
```
PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u
AuthorizedKeysCommandUser nobody
```
Restart sshd, and then attempt to authenticate with the keys.
View file
@ -9,13 +9,13 @@ Kanidm to work with these, it is possible to synchronised data between these IDM
Currently Kanidm can consume (import) data from another IDM system. There are two major use cases
for this:
* Running Kanidm in parallel with another IDM system
* Migrating from an existing IDM to Kanidm
- Running Kanidm in parallel with another IDM system
- Migrating from an existing IDM to Kanidm
An incoming IDM data source is bound to Kanidm by a sync account. All synchronised entries will
have a reference to the sync account that they came from defined by their "sync parent uuid".
While an entry is owned by a sync account we refer to the sync account as having authority over
the content of that entry.
An incoming IDM data source is bound to Kanidm by a sync account. All synchronised entries will have
a reference to the sync account that they came from defined by their "sync parent uuid". While an
entry is owned by a sync account we refer to the sync account as having authority over the content
of that entry.
The sync process is driven by a sync tool. This tool extracts the current state of the sync from
Kanidm, requests the set of changes (differences) from the IDM source, and then submits these
@ -23,45 +23,50 @@ changes to Kanidm. Kanidm will update and apply these changes and commit the new
success.
In the event of a conflict or data import error, Kanidm will halt and rollback the synchronisation
to the last good state. The sync tool should be reconfigured to exclude the conflicting entry or
to remap its properties to resolve the conflict. The operation can then be retried.
to the last good state. The sync tool should be reconfigured to exclude the conflicting entry or to
remap its properties to resolve the conflict. The operation can then be retried.
This process can continue long term to allow Kanidm to operate in parallel to another IDM system. If
this is for a migration, however, the sync account can be finalised. This terminates the sync account
and removes the sync parent uuid from all synchronised entries, moving authority of the entry into
Kanidm.
Alternately, the sync account can be terminated which removes all synchronised content that was submitted.
Alternately, the sync account can be terminated, which removes all synchronised content that was
submitted.
## Creating a Sync Account
Creating a sync account requires administration permissions. By default this is available to
members of the "system\_admins" group which "admin" is a memberof by default.
Creating a sync account requires administration permissions. By default this is available to
members of the "system\_admins" group, of which "admin" is a member by default.
kanidm system sync create <sync account name>
kanidm system sync create ipasync
```bash
kanidm system sync create <sync account name>
kanidm system sync create ipasync
```
Once the sync account is created you can then generate the sync token which identifies the
sync tool.
Once the sync account is created you can then generate the sync token which identifies the sync
tool.
kanidm system sync generate-token <sync account name> <token label>
kanidm system sync generate-token ipasync mylabel
token: eyJhbGci...
```bash
kanidm system sync generate-token <sync account name> <token label>
kanidm system sync generate-token ipasync mylabel
token: eyJhbGci...
```
{{#template
../templates/kani-warning.md
imagepath=../images
title=Warning!
text=The sync account token has a high level of privilege, able to create new accounts and groups. It should be treated carefully as a result!
}}
{{#template ../templates/kani-warning.md imagepath=../images title=Warning! text=The sync account
token has a high level of privilege, able to create new accounts and groups. It should be treated
carefully as a result! }}
If you need to revoke the token, you can do so with:
kanidm system sync destroy-token <sync account name>
kanidm system sync destroy-token ipasync
```bash
kanidm system sync destroy-token <sync account name>
kanidm system sync destroy-token ipasync
```
Destroying the token does NOT affect the state of the sync account and its synchronised entries. Creating
a new token and providing that to the sync tool will continue the sync process.
Destroying the token does NOT affect the state of the sync account and its synchronised entries.
Creating a new token and providing that to the sync tool will continue the sync process.
## Operating the Sync Tool
@ -84,16 +89,15 @@ If you are performing a migration from an external IDM to Kanidm, when that migr
you can nominate that Kanidm now owns all of the imported data. This is achieved by finalising the
sync account.
{{#template
../templates/kani-warning.md
imagepath=../images
title=Warning!
text=You can not undo this operation. Once you have finalised an agreement, Kanidm owns all of the synchronised data, and you can not resume synchronisation.
}}
{{#template ../templates/kani-warning.md imagepath=../images title=Warning! text=You cannot undo
this operation. Once you have finalised an agreement, Kanidm owns all of the synchronised data, and
you cannot resume synchronisation. }}
kanidm system sync finalise <sync account name>
kanidm system sync finalise ipasync
# Do you want to continue? This operation can NOT be undone. [y/N]
```bash
kanidm system sync finalise <sync account name>
kanidm system sync finalise ipasync
# Do you want to continue? This operation can NOT be undone. [y/N]
```
Once finalised, imported accounts can now be fully managed by Kanidm.
@ -102,16 +106,14 @@ Once finalised, imported accounts can now be fully managed by Kanidm.
If you decide to cease importing accounts or need to remove all imported accounts from a sync
account, you can choose to terminate the agreement removing all data that was imported.
{{#template
../templates/kani-warning.md
imagepath=../images
title=Warning!
text=You can not undo this operation. Once you have terminated an agreement, Kanidm deletes all of the synchronised data, and you can not resume synchronisation.
}}
{{#template ../templates/kani-warning.md imagepath=../images title=Warning! text=You cannot undo
this operation. Once you have terminated an agreement, Kanidm deletes all of the synchronised data,
and you cannot resume synchronisation. }}
kanidm system sync terminate <sync account name>
kanidm system sync terminate ipasync
# Do you want to continue? This operation can NOT be undone. [y/N]
```bash
kanidm system sync terminate <sync account name>
kanidm system sync terminate ipasync
# Do you want to continue? This operation can NOT be undone. [y/N]
```
Once terminated, all imported data will be deleted by Kanidm.
View file
@ -19,62 +19,75 @@ to understand how to connect to Kanidm.
The sync tool specific components are configured in its own configuration file.
```
```rust
{{#rustdoc_include ../../../examples/kanidm-ipa-sync}}
```
This example is located in [examples/kanidm-ipa-sync](https://github.com/kanidm/kanidm/blob/master/examples/kanidm-ipa-sync).
This example is located in
[examples/kanidm-ipa-sync](https://github.com/kanidm/kanidm/blob/master/examples/kanidm-ipa-sync).
In addition to this, you must make some configuration changes to FreeIPA to enable synchronisation.
You can find the name of your 389 Directory Server instance with:
dsconf --list
```bash
dsconf --list
```
Using this you can show the current status of the retro changelog plugin to see if you need
to change its configuration.
Using this you can show the current status of the retro changelog plugin to see if you need to
change its configuration.
dsconf <instance name> plugin retro-changelog show
dsconf slapd-DEV-KANIDM-COM plugin retro-changelog show
```bash
dsconf <instance name> plugin retro-changelog show
dsconf slapd-DEV-KANIDM-COM plugin retro-changelog show
```
You must modify the retro changelog plugin to include the full scope of the database suffix so that
the sync tool can view the changes to the database. Currently dsconf can not modify the include-suffix
so you must do this manually.
the sync tool can view the changes to the database. Currently dsconf cannot modify the
include-suffix, so you must do this manually.
You need to change the `nsslapd-include-suffix` to match your FreeIPA baseDN here. You can
access the basedn with:
ldapsearch -H ldaps://<IPA SERVER HOSTNAME/IP> -x -b '' -s base namingContexts
# namingContexts: dc=ipa,dc=dev,dc=kanidm,dc=com
You should ignore `cn=changelog` and `o=ipaca` as these are system internal namingContexts. You
can then create an ldapmodify like the following.
You need to change the `nsslapd-include-suffix` to match your FreeIPA baseDN here. You can access
the basedn with:
```bash
ldapsearch -H ldaps://<IPA SERVER HOSTNAME/IP> -x -b '' -s base namingContexts
# namingContexts: dc=ipa,dc=dev,dc=kanidm,dc=com
```
You should ignore `cn=changelog` and `o=ipaca` as these are system internal namingContexts. You can
then create an ldapmodify like the following.
```rust
{{#rustdoc_include ../../../iam_migrations/freeipa/00config-mod.ldif}}
```
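If you prefer to write `change.ldif` by hand rather than using the shipped file, its likely shape
is the following sketch; the dn is the standard 389 Directory Server retro changelog plugin entry,
and the suffix value must be replaced with the baseDN you found above:

```bash
# Sketch of change.ldif; compare against the shipped 00config-mod.ldif.
cat > change.ldif <<'EOF'
dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-include-suffix
nsslapd-include-suffix: dc=ipa,dc=dev,dc=kanidm,dc=com
EOF
```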
And apply it with:
ldapmodify -f change.ldif -H ldaps://<IPA SERVER HOSTNAME/IP> -x -D 'cn=Directory Manager' -W
# Enter LDAP Password:
```bash
ldapmodify -f change.ldif -H ldaps://<IPA SERVER HOSTNAME/IP> -x -D 'cn=Directory Manager' -W
# Enter LDAP Password:
```
You must then reboot your FreeIPA server.
## Running the Sync Tool Manually
You can perform a dry run with the sync tool manually to check your configurations are
correct and that the tool can synchronise from FreeIPA.
You can perform a dry run with the sync tool manually to check your configurations are correct and
that the tool can synchronise from FreeIPA.
kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync -n
kanidm-ipa-sync -i /etc/kanidm/ipa-sync -n
```bash
kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync -n
kanidm-ipa-sync -i /etc/kanidm/ipa-sync -n
```
## Running the Sync Tool Automatically
The sync tool can be run on a schedule if you configure the `schedule` parameter, and provide
the option "--schedule" on the cli
The sync tool can be run on a schedule if you configure the `schedule` parameter, and provide the
option `--schedule` on the CLI.
kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync --schedule
```bash
kanidm-ipa-sync [-c /path/to/kanidm/config] -i /path/to/kanidm-ipa-sync --schedule
```
## Monitoring the Sync Tool
@ -85,10 +98,11 @@ You can configure a status listener that can be monitored via tcp with the param
An example of monitoring this with netcat is:
# status_bind = "[::1]:12345"
# nc ::1 12345
Ok
It's important to note no details are revealed via the status socket, and is purely for Ok or Err status
of the last sync.
```bash
# status_bind = "[::1]:12345"
# nc ::1 12345
Ok
```
It's important to note that no details are revealed via the status socket; it is purely for the Ok
or Err status of the last sync.
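Because the response is a bare `Ok` or `Err`, simple automation is possible; for instance, a
hedged cron-style check could look like:

```bash
# Poll the status socket and log a warning on anything other than Ok.
# Assumes status_bind = "[::1]:12345" as in the example above.
status="$(nc ::1 12345)"
if [ "$status" != "Ok" ]; then
  echo "kanidm-ipa-sync: last sync reported ${status}" | logger -t kanidm-sync
fi
```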
View file
@ -6,4 +6,4 @@
<tr>
<td>[[#text]]</td>
</tr>
</table>
</table>
View file
@ -4,18 +4,21 @@ Some things to try.
## Is the server started?
If you don't see "ready to rock! 🪨" in your logs, it's not started. Scroll back and look for errors!dd
If you don't see "ready to rock! 🪨" in your logs, it's not started. Scroll back and look for
errors!dd
## Can you connect?
If the server's running on `idm.example.com:8443` then a simple connectivity test is done using [curl](https://curl.se).
If the server's running on `idm.example.com:8443` then a simple connectivity test can be done using
[curl](https://curl.se).
Run the following command:
```shell
curl -k https://idm.example.com:8443/status
```
This is similar to what you *should* see:
This is similar to what you _should_ see:
```shell
{{#rustdoc_include troubleshooting/curl_connection_test.txt}}
@ -38,9 +41,11 @@ If you see something like this:
curl: (7) Failed to connect to idm.example.com port 8443 after 5 ms: Connection refused
```
Then either your DNS is wrong (it's pointing at 10.0.0.1) or you can't connect to the server for some reason.
Then either your DNS is wrong (it's pointing at 10.0.0.1) or you can't connect to the server for
some reason.
If you get errors about certificates, try adding `-k` to skip certificate verification checking and just test connectivity:
If you get errors about certificates, try adding `-k` to skip certificate verification checking and
just test connectivity:
```
curl -vk https://idm.example.com:8443
@ -48,9 +53,10 @@ curl -vk https://idm.example.com:8443
## Server things to check
* Has the config file got `bindaddress = "127.0.0.1:8443"` ? Change it to `bindaddress = "[::]:8443"`, so it listens on all interfaces.
* Is there a firewall on the server?
* If you're running in docker, did you expose the port? (`-p 8443:8443`)
- Has the config file got `bindaddress = "127.0.0.1:8443"`? Change it to
  `bindaddress = "[::]:8443"`, so it listens on all interfaces.
- Is there a firewall on the server?
- If you're running in docker, did you expose the port? (`-p 8443:8443`)
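A quick way to work through these checks from the server's shell (tool names assumed present;
adjust for your distribution):

```bash
# Is anything listening on 8443, and on which addresses?
ss -tlnp | grep 8443
# If running in docker: is the port actually published?
docker ps --format '{{.Names}} -> {{.Ports}}'
```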
## Client things to check
@ -59,4 +65,3 @@ Try running commands with `RUST_LOG=debug` to get more information:
```
RUST_LOG=debug kanidm login --name anonymous
```
View file
@ -1,32 +1,29 @@
# Why TLS?
You may have noticed that Kanidm requires you to configure TLS in your container.
We are a secure-by-design rather than secure-by-installation system, so TLS for
all connections is considered mandatory.
We are a secure-by-design rather than secure-by-installation system, so TLS for all connections is
considered mandatory.
## What are Secure Cookies?
`secure-cookies` is a flag set in cookies that asks a client to transmit them
back to the origin site if and only if HTTPS is present in the URL.
`secure-cookies` is a flag set in cookies that asks a client to transmit them back to the origin
site if and only if HTTPS is present in the URL.
Certificate authority (CA) verification is *not* checked - you can use invalid,
out of date certificates, or even certificates where the `subjectAltName` does
not match, but the client must see https:// as the destination else it *will not*
send the cookies.
Certificate authority (CA) verification is _not_ checked - you can use invalid, out of date
certificates, or even certificates where the `subjectAltName` does not match, but the client must
see https:// as the destination else it _will not_ send the cookies.
## How Does That Affect Kanidm?
Kanidm's authentication system is a stepped challenge response design, where you
initially request an "intent" to authenticate. Once you establish this intent,
the server sets up a session-id into a cookie, and informs the client of
what authentication methods can proceed.
Kanidm's authentication system is a stepped challenge response design, where you initially request
an "intent" to authenticate. Once you establish this intent, the server sets up a session-id into a
cookie, and informs the client of what authentication methods can proceed.
If you do NOT have a HTTPS URL, the cookie with the session-id is not transmitted.
The server detects this as an invalid-state request in the authentication design,
and immediately breaks the connection, because it appears insecure.
If you do NOT have a HTTPS URL, the cookie with the session-id is not transmitted. The server
detects this as an invalid-state request in the authentication design, and immediately breaks the
connection, because it appears insecure.
Simply put, we are trying to use settings like `secure_cookies` to add constraints
to the server so that you *must* perform and adhere to best practices - such
as having TLS present on your communication channels.
Simply put, we are trying to use settings like `secure_cookies` to add constraints to the server so
that you _must_ perform and adhere to best practices - such as having TLS present on your
communication channels.
View file
@ -1,28 +1,22 @@
Mozilla Public License Version 2.0
==================================
# Mozilla Public License Version 2.0
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
---
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation
of, or owns Covered Software.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.3. "Contribution" means Covered Software of a particular Contributor.
1.5. "Incompatible With Secondary Licenses"
means
1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source
Code Form, in each case including portions thereof.
1.5. "Incompatible With Secondary Licenses" means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
@ -31,23 +25,17 @@ Mozilla Public License Version 2.0
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.6. "Executable Form" means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. "License"
means this document.
1.8. "License" means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
1.10. "Modifications" means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
@ -56,319 +44,284 @@ Mozilla Public License Version 2.0
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation,
method, process, and apparatus claims, in any patent Licensable by such Contributor that would be
infringed, but for the grant of the License, by the making, using, selling, offering for sale,
having made, import, or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any
later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.13. "Source Code Form" means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License.
For legal entities, "You" includes any entity that controls, is controlled by, or is under common
control with You. For purposes of this definition, "control" means (a) the power, direct or
indirect, to cause the direction or management of such entity, whether by contract or otherwise, or
(b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of
such entity.
2. License Grants and Conditions
--------------------------------
---
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(a) under intellectual property rights (other than patent or trademark) Licensable by such
Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise
exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger
Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
(b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import,
and otherwise transfer either its Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
The licenses granted in Section 2.1 with respect to any Contribution become effective for each
Contribution on the date the Contributor first distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
The licenses granted in this Section 2 are the only rights granted under this License. No additional
rights or licenses will be implied from the distribution or licensing of Covered Software under this
License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(a) for any code that a Contributor has removed from Covered Software; or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(b) for infringements caused by: (i) Your and any other third party's modifications of Covered
Software, or (ii) the combination of its Contributions with other software (except as part of its
Contributor Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
(c) under Patent Claims infringed by Covered Software in the absence of its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
This License does not grant any rights in the trademarks, service marks, or logos of any Contributor
(except as may be necessary to comply with the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
No Contributor makes additional grants as a result of Your choice to distribute the Covered Software
under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary
License (if permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
Each Contributor represents that the Contributor believes its Contributions are its original
creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this
License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
This License is not intended to limit any rights You have under applicable copyright doctrines of
fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.
3. Responsibilities
-------------------
---
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
All distribution of Covered Software in Source Code Form, including any Modifications that You
create or to which You contribute, must be under the terms of this License. You must inform
recipients that the Source Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not attempt to alter or restrict
the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(a) such Covered Software must also be made available in Source Code Form, as described in Section
3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source
Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution
to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
(b) You may distribute such Executable Form under the terms of this License, or sublicense it under
different terms, provided that the license for the Executable Form does not attempt to limit or
alter the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
You may create and distribute a Larger Work under terms of Your choice, provided that You also
comply with the requirements of this License for the Covered Software. If the Larger Work is a
combination of Covered Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this License permits You to
additionally distribute such Covered Software under the terms of such Secondary License(s), so that
the recipient of the Larger Work may, at their option, further distribute the Covered Software under
the terms of either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
You may not remove or alter the substance of any license notices (including copyright notices,
patent notices, disclaimers of warranty, or limitations of liability) contained within the Source
Code Form of the Covered Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability
obligations to one or more recipients of Covered Software. However, You may do so only on Your own
behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree
to indemnify every Contributor for any liability incurred by such Contributor as a result of
warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of
warranty and limitations of liability specific to any jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
---
If it is impossible for You to comply with any of the terms of this License with respect to some or
all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply
with the terms of this License to the maximum extent possible; and (b) describe the limitations and
the code they affect. Such description must be placed in a text file included with all distributions
of the Covered Software under this License. Except to the extent prohibited by statute or
regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be
able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
---
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.1. The rights granted under this License will terminate automatically if You fail to comply with
any of its terms. However, if You become compliant, then the rights granted under this License from
a particular Contributor are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor
fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this
is the first time You have received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt of the notice.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
5.2. If You initiate litigation against any entity by asserting a patent infringement claim
(excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a
Contributor Version directly or indirectly infringes any patent, then the rights granted to You by
any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements
(excluding distributors and resellers) which have been validly granted by You or Your distributors
under this License prior to termination shall survive termination.
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
6. Disclaimer of Warranty

---

Covered Software is provided under this License on an "as is" basis, without warranty of any kind,
either expressed, implied, or statutory, including, without limitation, warranties that the Covered
Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The
entire risk as to the quality and performance of the Covered Software is with You. Should any
Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any
necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under this License except under
this disclaimer.

7. Limitation of Liability

---

Under no circumstances and under no legal theory, whether tort (including negligence), contract, or
otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be
liable to You for any direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of goodwill, work stoppage,
computer failure or malfunction, or any and all other commercial damages or losses, even if such
party shall have been informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such party's negligence to
the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion
or limitation of incidental or consequential damages, so this exclusion and limitation may not
apply to You.
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
---
Any litigation relating to this License may be brought only in the courts of a jurisdiction where
the defendant maintains its principal place of business and such litigation shall be governed by
laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this
Section shall prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
---
This License represents the complete agreement concerning the subject matter hereof. If any
provision of this License is held to be unenforceable, such provision shall be reformed only to the
extent necessary to make it enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe this License against a
Contributor.
10. Versions of the License
---------------------------
---
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the
license steward has the right to modify or publish new versions of this License. Each version will
be given a distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
You may distribute the Covered Software under the terms of the version of the License under which
You originally received the Covered Software, or under the terms of any subsequent version published
by the license steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
If you create software not governed by this License, and you want to create a new license for such
software, you may create and use a modified version of this License if you rename the license and
remove any references to the name of the license steward (except to note that such modified license
differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the
terms of this version of the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
## Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of
the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
If it is not possible or desirable to put the notice in a particular file, then You may include the
notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be
likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
## Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public
License, v. 2.0.
View file
@ -6,30 +6,31 @@
## About
Kanidm is a simple and secure identity management platform, which provides services to allow
other systems and application to authenticate against. The project aims for the highest levels
of reliability, security and ease of use.
Kanidm is a simple and secure identity management platform, which provides services to allow other
systems and applications to authenticate against. The project aims for the highest levels of
reliability, security and ease of use.
The goal of this project is to be a complete identity management provider, covering the broadest
possible set of requirements and integrations. You should not need any other components (like Keycloak)
when you use Kanidm. We want to create a project that will be suitable for everything
from personal home deployments, to the largest enterprise needs.
possible set of requirements and integrations. You should not need any other components (like
Keycloak) when you use Kanidm. We want to create a project that will be suitable for everything from
personal home deployments, to the largest enterprise needs.
To achieve this we rely heavily on strict defaults, simple configuration, and self-healing components.
To achieve this we rely heavily on strict defaults, simple configuration, and self-healing
components.
The project is still growing and some areas are developing at a fast pace. The core of the server,
however, is reliable and we make every effort to ensure upgrades will always work.
Kanidm supports:
* Oauth2/OIDC Authentication provider for web SSO
* Read only LDAPS gateway
* Linux/Unix integration (with offline authentication)
* SSH key distribution to Linux/Unix systems
* RADIUS for network authentication
* Passkeys / Webauthn for secure cryptographic authentication
* A self service web ui
* Complete CLI tooling for administration
- Oauth2/OIDC Authentication provider for web SSO
- Read only LDAPS gateway
- Linux/Unix integration (with offline authentication)
- SSH key distribution to Linux/Unix systems
- RADIUS for network authentication
- Passkeys / Webauthn for secure cryptographic authentication
- A self service web ui
- Complete CLI tooling for administration
If you want to host your own centralised authentication service, then Kanidm is for you!
@ -40,7 +41,8 @@ If you want to deploy Kanidm to see what it can do, you should read the Kanidm b
- [Kanidm book (Latest stable)](https://kanidm.github.io/kanidm/stable/)
- [Kanidm book (Latest commit)](https://kanidm.github.io/kanidm/master/)
We also publish [support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md)
We also publish
[support guidelines](https://github.com/kanidm/kanidm/blob/master/project_docs/RELEASE_AND_SUPPORT.md)
for what the project will support.
## Code of Conduct / Ethics
@@ -54,8 +56,8 @@ See our documentation on [rights and ethics]
## Getting in Contact / Questions
We have a [gitter community channel] where we can talk. Firstyear is also happy to answer questions
via email, which can be found on their GitHub profile.
[gitter community channel]: https://gitter.im/kanidm/community
@@ -63,29 +65,29 @@ answer questions via email, which can be found on their github profile.
### LLDAP
[LLDAP](https://github.com/nitnelave/lldap) is a similar project aiming for a small and easy to
administer LDAP server with a web administration portal. Both projects use the
[Kanidm LDAP bindings](https://github.com/kanidm/ldap3), and have many similar ideas.
The primary benefit of Kanidm over LLDAP is that Kanidm offers a broader set of "built in" features
like Oauth2 and OIDC. To use these from LLDAP you need an external portal like Keycloak, where in
Kanidm they are "built in". However, that is also a strength of LLDAP: it offers "less", which may
make it easier to administer and deploy for you.
If Kanidm is too complex for your needs, you should check out LLDAP as a smaller alternative. If you
want a project which has a broader feature set out of the box, then Kanidm might be a better fit.
### 389-ds / OpenLDAP
Both 389-ds and OpenLDAP are generic LDAP servers. This means they only provide LDAP and you need to
bring your own IDM configuration on top.
If you need the highest levels of customisation possible from your LDAP deployment, then these are
probably better alternatives. If you want a service that is easier to set up and focused on IDM, then
Kanidm is a better choice.
Kanidm was originally inspired by many elements of both 389-ds and OpenLDAP. Already Kanidm is as
fast as (or faster than) 389-ds for performance and scaling.
### FreeIPA
@@ -101,15 +103,14 @@ Kanidm is probably for you.
## Developer Getting Started
If you want to develop on the server, there is a getting started [guide for developers]. IDM is a
diverse topic and we encourage contributions of many kinds in the project, from people of all
backgrounds.
[guide for developers]: https://kanidm.github.io/kanidm/master/DEVELOPER_README.html
## What does Kanidm mean?
The original project name was rsidm while it was a thought experiment. Now that it's growing and
developing, we gave it a better project name. Kani is Japanese for "crab". Rust's mascot is a crab.
IDM is the common industry term for identity management services.
@@ -1,14 +1,12 @@
# Developer Principles
As a piece of software that stores the identities of people, the project becomes bound to social and
political matters. The decisions we make have consequences on many people - many who never have the
chance to choose what software is used to store their identities (think employees in a business).
This means we have a responsibility to not only be aware of our impact on our direct users
(developers, system administrators, dev ops, security and more) but also the impact on indirect
consumers - many of whom are unlikely to be in a position to contact us to ask for changes and help.
## Ethics / Rights
@@ -18,58 +16,52 @@ If you have not already, please see our documentation on [rights and ethics]
## Humans First
We must at all times make decisions that put humans first. We must respect all cultures, languages,
and identities and how they are represented.
This may mean we make technical choices that are difficult or more complex, or different to "how
things have always been done". But we do this to ensure that all people can have their identities
stored how they choose.
For example, any user may change their name, display name and legal name at any time. Many
applications will break when this occurs, as they use the name as a primary key. But this is the
fault of the application. Name changes must be allowed. Our job as technical experts is to allow
that to happen.
We will never put a burden on the user to correct for poor designs on our part. For example, locking
an account if it logs in from a different country unless the user logs in beforehand to indicate
where they are going. This makes the user responsible for a burden (changing the allowed login
country) when the real problem is preventing bruteforce attacks - which can be technically solved in
better ways that don't put administrative load on humans.
## Correct and Simple
As a piece of security sensitive software we must always put correctness first. All code must have
tests. All developers must be able to run all tests on their machine and environment of choice.
This means that the following must always work:
```bash
git clone ...
cargo test
```
If a test or change would require extra requirements, dependencies, or preconfiguration, then we can
no longer provide the above. Testing must be easy and accessible, else we won't do it, and that leads
to poor software quality.
The project must be simple. Anyone should be able to understand how it works and why those
decisions were made.
## Languages
The core server will (for now) always be written in Rust. This is due to the strong type guarantees
it gives, and how that can help raise the quality of our project.
## Over-Configuration
Configuration will be allowed, but only if it does not impact the statements above. Having
configuration is good, but allowing too much (i.e. a scripting engine for security rules) can give
deployments the ability to violate human-first principles, which reflects badly on us.
All configuration items must be constrained to fit within our principles so that every Kanidm
deployment will always provide a positive experience to all people.
@@ -2,24 +2,24 @@
Kanidm is released on a 3-month (quarterly) basis.
- February 1st
- May 1st
- August 1st
- November 1st
Releases will be tagged and branched in git.
1.2.0 will be released as the first supported version once the project team believes the project is
in a maintainable long-term state, without requiring backward-breaking changes. There is no current
estimated date for 1.2.0.
## Support
Releases during the alpha will receive limited fixes once released. Specifically, we will resolve:
- Moderate security issues and above
- Flaws leading to data loss or corruption
- Other quality fixes at the discretion of the project team
These will be backported to the latest stable branch only.
@@ -27,23 +27,25 @@ These will be backported to the latest stable branch only.
There are a number of "surfaces" that can be considered as "API" in Kanidm.
- JSON HTTP endpoints of kanidmd
- unix domain socket API of the `kanidm_unixd` resolver
- LDAP interface of kanidm
- CLI interface of the kanidm admin command
- Many other interaction surfaces
During the Alpha, there is no guarantee that _any_ of these APIs named here or not named will remain
stable. Only elements from "the same release" are guaranteed to work with each other.
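For a concrete sense of what three of these surfaces look like, consider the sketch below. The
hostname, base DN, and token are hypothetical, and exact paths and flags may differ between alpha
releases:

```bash
# JSON HTTP endpoint of kanidmd (hypothetical token and path).
curl -H "Authorization: Bearer $TOKEN" "https://idm.example.com/v1/self"

# Read only LDAP interface (hypothetical base DN).
ldapsearch -H ldaps://idm.example.com -x \
  -b 'dc=idm,dc=example,dc=com' '(name=demo_user)'

# CLI interface of the kanidm admin command.
kanidm self whoami
```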
Once an official release is made, only the JSON API and LDAP interface will be declared stable.
The unix domain socket API is internal and will never be "stable".
The CLI is _not_ an API and may change in the interest of better human interaction in any release.
## Python module
The python module will typically trail changes in functionality of the core Rust code, and will be
developed as we use it for our own needs - please feel free to add functionality or improvements, or
[ask for them in a GitHub issue](http://github.com/kanidm/kanidm/issues/new/choose)!
All code changes will include full type-casting wherever possible.
@@ -1,74 +1,74 @@
## Pre-Reqs
```bash
cargo install cargo-audit
cargo install cargo-outdated
```
## Check List
### Start a release
- [ ] git checkout -b YYYYMMDD-release
### Cargo Tasks
- [ ] cargo outdated -R
- [ ] cargo audit
- [ ] cargo test
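Run back to back, the three checks above are:

```bash
cargo outdated -R  # report outdated root dependencies
cargo audit        # check dependencies for known vulnerabilities
cargo test         # run the full test suite
```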
### Code Changes
- [ ] upgrade crypto policy values if required
- [ ] bump index version in constants
- [ ] check for breaking db entry changes.
### Administration
- [ ] update version in `./kanidmd_web_ui/Cargo.toml`
- [ ] update version in `./Cargo.toml`
- [ ] cargo test
- [ ] build wasm components with release profile
- [ ] Update `RELEASE_NOTES.md`
- [ ] git commit
- [ ] git rebase -i HEAD~X
- [ ] git push origin YYYYMMDD-release
- [ ] Merge PR
### Git Management
- [ ] git checkout master
- [ ] git branch 1.1.0-alpha.x (Note no v to prevent ref conflict)
- [ ] git checkout 1.1.0-alpha.x
- [ ] git tag v1.1.0-alpha.x
- [ ] Final inspection of the branch
- [ ] git push origin 1.1.0-alpha.x
- [ ] git push origin 1.1.0-alpha.x --tags
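With a concrete number substituted for `x`, the branch, tag, and push steps above amount to roughly
this sequence (illustrative only):

```bash
git checkout master
git branch 1.1.0-alpha.9     # note: no "v" prefix, to prevent a ref conflict
git checkout 1.1.0-alpha.9
git tag v1.1.0-alpha.9       # the tag does carry the "v" prefix

# after a final inspection of the branch:
git push origin 1.1.0-alpha.9
git push origin 1.1.0-alpha.9 --tags
```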
### Cargo publish
- [ ] publish `kanidm_proto`
- [ ] publish `kanidmd/kanidm`
- [ ] publish `kanidm_client`
- [ ] publish `kanidm_tools`
### Docker
- [ ] docker buildx use cluster
- [ ] `make buildx/kanidmd/x86_64_v3 buildx/kanidmd buildx/radiusd`
- [ ] Update the readme on Docker Hub: https://hub.docker.com/repository/docker/kanidm/server
### Distro
- [ ] vendor and release to build.opensuse.org
### Follow up
- [ ] git checkout master
- [ ] git pull
- [ ] git branch YYYYMMDD-dev-version
- [ ] update version in `./kanidmd_web_ui/Cargo.toml`
- [ ] update version in `./Cargo.toml`
- [ ] build wasm components with debug profile
@@ -2,28 +2,32 @@
A Python module for interacting with Kanidm.
Currently in very very very early beta, please
[log an issue](https://github.com/kanidm/kanidm/issues/new/choose) for feature requests and bugs.
## Installation
```bash
python -m pip install kanidm
```
## Documentation
Documentation can be generated by [cloning the repository](https://github.com/kanidm/kanidm) and
running `make docs/pykanidm/build`. The documentation will appear in `./pykanidm/site`. You'll need
make and the [poetry](https://pypi.org/project/poetry/) package installed.
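Putting those steps together, a local docs build looks something like this (assuming `make` and
Python are already available):

```bash
git clone https://github.com/kanidm/kanidm
cd kanidm
python -m pip install poetry
make docs/pykanidm/build
# the generated site appears in ./pykanidm/site
```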
## Testing
Set up your dev environment using `poetry` - `python -m pip install poetry && poetry install`.
Pytest is used for testing. If you don't have a live server to test against and config set up, use
`poetry run pytest -m 'not network'`.
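In full, a dev setup plus an offline test run looks like this (assuming `./pykanidm` within the
repository as the working directory):

```bash
cd pykanidm
python -m pip install poetry
poetry install
poetry run pytest -m 'not network'  # skip tests that need a live server
```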
## Changelog
| Version | Date | Notes |
| ------- | ---------- | ----------------------------------------------------- |
| 0.0.1 | 2022-08-16 | Initial release |
| 0.0.2 | 2022-08-16 | Updated license, including test code in package |
| 0.0.3 | 2022-08-17 | Updated test suite to allow skipping of network tests |
@@ -1,4 +1,3 @@
# kanidm.KanidmClient
::: kanidm.KanidmClient
@@ -1,4 +1,3 @@
# kanidm.types.KanidmClientConfig
::: kanidm.types.KanidmClientConfig
@@ -1,4 +1,3 @@
# kanidm.types.RadiusClient
::: kanidm.types.RadiusClient
@@ -1,3 +1 @@
::: kanidm.tokens