mirror of
https://github.com/kanidm/kanidm.git
synced 2025-02-23 20:47:01 +01:00
Spell checking and stuff (#1314)
* codespell run and spelling fixes
* some clippying
* minor fmt fix
* making yamllint happy
* adding codespell github action
This commit is contained in:
parent e08eca4c2a
commit b8dcb47f93
27 .github/workflows/codespell.yml (vendored, new file)

@@ -0,0 +1,27 @@
+---
+name: Spell Check
+
+"on":
+  push:
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+jobs:
+  codespell:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+          clean: false
+
+      - name: Install python 3.10
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+
+      - name: Install and run codespell
+        run: |
+          python -m pip install codespell
+          make codespell
4 .github/workflows/kanidm_book.yml (vendored)
@@ -25,7 +25,8 @@ jobs:
           libpam0g-dev
 
       - name: Setup deno
-        uses: denoland/setup-deno@v1 # Documentation: https://github.com/denoland/setup-deno
+        # Documentation: https://github.com/denoland/setup-deno
+        uses: denoland/setup-deno@v1
         with:
           deno-version: v1.x
 
@@ -56,6 +57,7 @@ jobs:
         uses: actions/setup-python@v4
         with:
           python-version: "3.10"
 
       - name: pykanidm docs
         run: |
           python -m pip install poetry
12 FAQ.md
@@ -11,7 +11,7 @@ projects can come in different forms so I'll answer to a few of them:
 
 If it's not in Rust, it's not ellegible for inclusion. There is a single exception today (rlm
 python) but it's very likely this will also be removed in the future. Keeping a single language
-helps with testing, but also makes the project more accesible and consistent to developers.
+helps with testing, but also makes the project more accessible and consistent to developers.
 Additionally, features exist in Rust that help to improve quality of the project from development to
 production.
 
@@ -40,7 +40,7 @@ communicating to a real server. Many developer choices have already been made to
 is the most important aspect of the project to ensure that every feature is high quality and
 reliable.
 
-Additon of extra projects or dependencies, would violate this principle and lead to a situation
+Addition of extra projects or dependencies, would violate this principle and lead to a situation
 where it would not be possible to effectively test for all developers.
 
 ## Why don't you use Raft/Etcd/MongoDB/Other to solve replication?
@@ -54,11 +54,11 @@ CAP theorem states that in a database you must choose only two of the three possi
 
 - Consistency - All servers in a topology see the same data at all times
 - Availability - All servers in a a topology can accept write operations at all times
-- Partitioning - In the case of a network seperation in the topology, all systems can continue to
+- Partitioning - In the case of a network separation in the topology, all systems can continue to
   process read operations
 
 Many protocols like Raft or Etcd are databases that provide PC guarantees. They guarantee that they
-are always consistent, and can always be read in the face of patitioning, but to accept a write,
+are always consistent, and can always be read in the face of partitioning, but to accept a write,
 they must not be experiencing a partitioning event. Generally this is achieved by the fact that
 these systems elect a single node to process all operations, and then re-elect a new node in the
 case of partitioning events. The elections will fail if a quorum is not met disallowing writes
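The quorum behaviour this hunk describes fits in a few lines; a minimal illustration (hypothetical, not from the Kanidm codebase) of why a partitioned minority must stop accepting writes:

```rust
/// A strict majority must be reachable for writes to proceed, so two
/// sides of a partition can never both hold a quorum at the same time.
fn has_quorum(reachable_nodes: usize, cluster_size: usize) -> bool {
    reachable_nodes * 2 > cluster_size
}

fn main() {
    // A 5-node cluster split 3/2: only the larger side may accept writes.
    assert!(has_quorum(3, 5));
    assert!(!has_quorum(2, 5));
}
```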
@@ -77,12 +77,12 @@ _without_ communication between the nodes.
 ## Update Resolutionn
 
 Many databases do exist that are PA, such as CouchDB or MongoDB. However, they often do not have the
-properties required in update resoultion that is required for Kanidm.
+properties required in update resolution that is required for Kanidm.
 
 An example of this is that CouchDB uses object-level resolution. This means that if two servers
 update the same entry the "latest write wins". An example of where this won't work for Kanidm is if
 one server locks the account as an admin is revoking the access of an account, but another account
-updates the username. If the username update happenned second, the lock event would be lost creating
+updates the username. If the username update happened second, the lock event would be lost creating
 a security risk. There are certainly cases where this resolution method is valid, but Kanidm is not
 one.
 
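The object-level vs attribute-level distinction in this hunk is easy to sketch. A minimal illustration (hypothetical types, not Kanidm's actual conflict resolution) where each attribute carries its own timestamp, so a later username write cannot erase a concurrent lock:

```rust
use std::collections::BTreeMap;

/// Hypothetical entry: attribute name -> (change timestamp, value).
type Entry = BTreeMap<String, (u64, String)>;

/// Attribute-level last-write-wins: merge per attribute rather than
/// per object, so the "lock" and the "username" change both survive.
fn merge(a: &Entry, b: &Entry) -> Entry {
    let mut out = a.clone();
    for (attr, (ts, val)) in b {
        let newer = match out.get(attr) {
            Some((cur_ts, _)) => ts > cur_ts,
            None => true,
        };
        if newer {
            out.insert(attr.clone(), (*ts, val.clone()));
        }
    }
    out
}
```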
12 Makefile
@@ -117,6 +117,15 @@ prep:
	cargo outdated -R
	cargo audit
 
+.PHONY: codespell
+codespell:
+	codespell -c \
+	-L crate,unexpect,Pres,pres,ACI,aci,te,ue \
+	--skip='./target,./pykanidm/.venv,./pykanidm/.mypy_cache,./.mypy_cache' \
+	--skip='./docs/*,./.git' \
+	--skip='./kanidmd_web_ui/src/external,./kanidmd_web_ui/pkg/external' \
+	--skip='./kanidmd/lib/src/constants/system_config.rs,./pykanidm/site,./kanidmd/lib/src/constants/*.json'
+
 .PHONY: test/pykanidm/pytest
 test/pykanidm/pytest:
	cd pykanidm && \
@@ -142,7 +151,8 @@ test/pykanidm: test/pykanidm/pytest test/pykanidm/mypy test/pykanidm/pylint
 
 .PHONY: test/doc/format
 test/doc/format: ## Format docs and the Kanidm book
-	find . -type f -name \*.md -exec deno fmt --check $(MARKDOWN_FORMAT_ARGS) "{}" +
+	find . -type f -not -path './target/*' -name \*.md \
+	-exec deno fmt --check $(MARKDOWN_FORMAT_ARGS) "{}" +
 
 ########################################################################
 
@@ -45,7 +45,7 @@ proxy. You should be ready for this change when you upgrade to the latest versio
 - Components for account permission elevation modes
 - Make pam\_unix more robust in high latency environments
 - Add proc macros for test cases
-- Improve authentication requests with cookie/token seperation
+- Improve authentication requests with cookie/token separation
 - Cleanup of expired authentication sessions
 - Improved administration of password badlists
 
@@ -194,7 +194,7 @@ for a future supported release.
 - Rate limiting and softlocking of account credentials to prevent bruteforcing.
 - Foundations of webauthn and multiple credential support.
 - Rewrite of json authentication protocol components.
-- Unixd will cache "non-existant" items to improve nss/pam latency.
+- Unixd will cache "non-existent" items to improve nss/pam latency.
 
 ## 2020-10-01 - Kanidm 1.1.0-alpha2
 
@@ -76,9 +76,9 @@ For accounts with password-only:
 
 * After 5 incorrect attempts the account is rate limited by an increasing time window within the API. This limit delays the response to the auth (regardless of success)
 * After X attempts, the account is soft locked on the affected server only for a time window of Y increasing up to Z.
-* If the attempts continue, the account is hard locked and signalled to an external system that this has occured.
+* If the attempts continue, the account is hard locked and signalled to an external system that this has occurred.
 
-The value of X should be less than 100, so that the NIST guidelines can be met. This is beacuse when there are
+The value of X should be less than 100, so that the NIST guidelines can be met. This is because when there are
 many replicas, each replica maintains its own locking state, so "eventually" as each replica is attempted to be
 bruteforced, then they will all eventually soft lock the account. In larger environments, we require
 external signalling to coordinate the locking of the account.
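The soft-lock progression in this hunk (a window of Y growing up to Z) can be sketched briefly; a minimal per-replica illustration with hypothetical names and window growth, not the server's actual implementation:

```rust
use std::time::{Duration, Instant};

/// Hypothetical per-replica soft-lock state for a single account.
struct SoftLock {
    failures: u32,
    unlock_at: Option<Instant>,
}

impl SoftLock {
    /// After 5 failures the delay window doubles per failure ("Y"),
    /// capped at 64 seconds (our stand-in for "Z").
    fn record_failure(&mut self) {
        self.failures += 1;
        if self.failures >= 5 {
            let exp = (self.failures - 5).min(6);
            self.unlock_at = Some(Instant::now() + Duration::from_secs(1 << exp));
        }
    }

    /// Attempts are only processed once the current window has elapsed.
    fn is_locked(&self) -> bool {
        matches!(self.unlock_at, Some(t) if Instant::now() < t)
    }
}
```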
@@ -23,7 +23,7 @@ abstraction over the REST API.
 The `kanidm` proto is a set of structures that are used by the REST and raw API's for HTTP
 communication. These are intended to be a reference implementation of the on-the-wire protocol, but
 importantly these are also how the server represents its communication. This makes this the
-authorative source of protocol layouts with regard to REST or raw communication.
+authoritative source of protocol layouts with regard to REST or raw communication.
 
 ## Kanidmd (main server)
 
@@ -55,8 +55,8 @@ it is checked by the schema to ensure that the request is valid and can be satis
 
 As these workers are in a thread pool, it's important that these are concurrent and do not lock or
 block - this concurrency is key to high performance and safety. It's also worth noting that this is
-the level where read transactions are created and commited - all operations are transactionally
-proctected from an early stage to guarantee consistency of the operations.
+the level where read transactions are created and committed - all operations are transactionally
+protected from an early stage to guarantee consistency of the operations.
 
 3. When the event is known to be consistent, it is then handed to the queryserver - the query server
    begins a process of steps on the event to apply it and determine the results for the request.
@@ -65,7 +65,7 @@ proctected from an early stage to guarantee consistency of the operations.
 
 4. The backend takes the request and begins the low-level processing to actually determine a
    candidate set. The first step in query optimisation, to ensure we apply the query in the most
-   effecient manner. Once optimised, we then use the query to query indexes and create a potential
+   efficient manner. Once optimised, we then use the query to query indexes and create a potential
    candidate set of identifiers for matching entries (5.). Once we have this candidate id set, we
    then retrieve the relevant entries as our result candidate set (6.) and return them (7.) to the
    backend.
@@ -76,8 +76,8 @@ proctected from an early stage to guarantee consistency of the operations.
 
 6. The query server now applies access controls over what you can / can't see. This happens in two
    phases. The first is to determine "which candidate entries you have the rights to query and view"
-   and the second is to determine "which attributes of each entry you have the right to percieve".
-   This seperation exists so that other parts of the server can _impersonate_ users and conduct
+   and the second is to determine "which attributes of each entry you have the right to perceive".
+   This separation exists so that other parts of the server can _impersonate_ users and conduct
    searches on their behalf, but still internally operate on the full entry without access controls
    limiting their scope of attributes we can view.
 
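The two-phase reduction in this hunk can be sketched directly; a minimal illustration with hypothetical types, not the query server's real entry or ACL representation:

```rust
use std::collections::BTreeMap;

/// Hypothetical entry: attribute name -> value.
type Entry = BTreeMap<String, String>;

/// Phase one drops entries the caller may not query at all; phase two
/// strips each survivor down to the attributes the caller may perceive.
fn apply_access_controls(
    candidates: Vec<Entry>,
    may_view: impl Fn(&Entry) -> bool,
    may_see_attr: impl Fn(&str) -> bool,
) -> Vec<Entry> {
    candidates
        .into_iter()
        .filter(|entry| may_view(entry))
        .map(|entry| {
            entry
                .into_iter()
                .filter(|(attr, _)| may_see_attr(attr))
                .collect()
        })
        .collect()
}
```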
@@ -99,7 +99,7 @@ generated into messages. These messages are sent to a single write worker. There
 write worker due to the use of copy-on-write structures in the server, limiting us to a single
 writer, but allowing search transaction to proceed without blocking in parallel.
 
-(3) From the worker, the relevent event is created. This may be a "Create", "Modify" or "Delete"
+(3) From the worker, the relevant event is created. This may be a "Create", "Modify" or "Delete"
 event. The query server handles these slightly differently. In the create path, we take the set of
 entries you wish to create as our candidate set. In modify or delete, we perform an impersonation
 search, and use the set of entries within your read bounds to generate the candidate set. This
@@ -125,7 +125,7 @@ Implementation ideas for use cases
 
 * For identification:
   * Issue "ID tokens" as an api where you lookup name/uuid and get the userentry + sshkeys + group
-    entries. This allows one-shot caching of relevent types, and groups would not store the member
+    entries. This allows one-shot caching of relevant types, and groups would not store the member
     link on the client. Allows the client to "cache" any extra details into the stored record as
     required. This would be used for linux/mac to get uid/gid details and ssh keys for distribution.
   * Would inherit search permissions for connection.
@@ -172,7 +172,7 @@ that have unique cookie keys to prevent forgery of writable master cookies)
   of group uuids + names derferenced so that a client can make all authorisation
   decisions from a single datapoint
 
-* Groups require the ability to be ephemeral/temporary or permament.
+* Groups require the ability to be ephemeral/temporary or permanent.
 
 * each token can be unique based on the type of auth (ie 2fa needed to get access
 to admin groups)
@@ -180,7 +180,7 @@ to admin groups)
 Cookie/Token Auth Considerations
 --------------------------------
 
-* Must prevent replay attacks from occuring at any point during the authentication process
+* Must prevent replay attacks from occurring at any point during the authentication process
 
 * Minimise (but not eliminate) state on the server. This means that an auth process must
 remain on a single server, but the token granted should be valid on any server.
@@ -243,10 +243,10 @@ struct AuthClientStep {
     Vec<AuthDetails>
 }
 
-The server verifies the credential, and marks that type of credential as failed or fufilled.
+The server verifies the credential, and marks that type of credential as failed or fulfilled.
 On failure of a credential, AuthDenied is immediately sent. On success of a credential
 the server can issue AuthSuccess or AuthResponse with new possible challenges. For example,
-consider we initiall send "password". The client provides the password. The server follows
+consider we initially send "password". The client provides the password. The server follows
 by "totp" as the next type. The client fails the totp, and is denied.
 
 If the response is AuthSuccess, an auth token is issued. The auth token is a bearer token
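The step exchange in this hunk amounts to a small state machine; a hypothetical sketch (illustrative names, not the actual kanidm proto types) of the three server responses described above:

```rust
/// Hypothetical server responses to one AuthClientStep.
enum AuthServerStep {
    /// The credential passed, but further factors are still required
    /// (e.g. "password" succeeded, "totp" is now requested).
    Response { next_types: Vec<String> },
    /// Every required credential is fulfilled; a bearer token is issued.
    Success { auth_token: String },
    /// A credential failed; the attempt is denied immediately.
    Denied { reason: String },
}

/// Sketch of the per-step decision described in the hunk above.
fn handle_step(credential_ok: bool, remaining: Vec<String>) -> AuthServerStep {
    if !credential_ok {
        return AuthServerStep::Denied {
            reason: "credential failed".into(),
        };
    }
    if remaining.is_empty() {
        AuthServerStep::Success {
            auth_token: "<issue bearer token here>".into(),
        }
    } else {
        AuthServerStep::Response {
            next_types: remaining,
        }
    }
}
```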
@@ -289,7 +289,7 @@ Method Two
 ==========
 
 Groups define if they are "always issued" or "requestable". All group types define
-requirements to be fufilled for the request such as auth strength, connection
+requirements to be fulfilled for the request such as auth strength, connection
 type, auth location etc.
 
 In the AuthRequest if you specific no groups, you do the 'minimum' auth required by
@@ -380,8 +380,8 @@ the TLS tunnel?
 More Brain Dumping
 ==================
 
-- need a way to just pw check even if mfa is on (for sudo). Perhaps have a seperate sudo password attr?
-- ntpassword attr is seperate
+- need a way to just pw check even if mfa is on (for sudo). Perhaps have a separate sudo password attr?
+- ntpassword attr is separate
 - a way to check application pw which attaches certain rights (is this just a generalisation of sudo?)
 - the provided token (bearer etc?) contains the "memberof" for the session.
 - How to determine what memberof an api provides? Could be policy object that says "api pw of name X
@@ -395,7 +395,7 @@ More Brain Dumping
 - That would make userPassword and webauthn only for webui and api direct access.
 - All other pw validations would use application pw case.
 - SSH would just read ssh key - should this have a similar group filter/allow
-  mechanism like aplication pw?
+  mechanism like application pw?
 
 - Groups take a "type"
 - credentials also have a "type"
@@ -52,7 +52,7 @@ change.
 
 Currently Credentials can have *any* combination of factors.
 
-This should be changed to reperesent the valid set of factors.
+This should be changed to represent the valid set of factors.
 
 * Password (only)
 * GeneratedPassword
@@ -47,7 +47,7 @@ perform the correct transforms over the credential types to prevent data leaks.
 The ability to view credentials is bound by the standard search access control rules.
 
 The API would return a list of credential details, which is an enum of the possible classes supported
-by the server. This ensures during addition of new credetial types or changes we update these protocol
+by the server. This ensures during addition of new credential types or changes we update these protocol
 types.
 
 This also helps to support future webui elements for credentials.
@@ -45,10 +45,10 @@ If the access exists, a intent token is created into a link which can be provide
 
 Exchange of this intent token, creates the time limited credential update session token.
 
-This allows the intent token to have a seperate time window, to the credential update session token.
+This allows the intent token to have a separate time window, to the credential update session token.
 
 If the intent token creates a credential update session, and the credential update session is *not*
-commited, it can be re-started by the intent token.
+committed, it can be re-started by the intent token.
 
 If the credential update session has been committed, then the intent token can NOT create new
 credential update sessions (it is once-use).
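The once-use rule in this hunk amounts to a small lifecycle; a hypothetical sketch (not the server's real types) of the states an intent token moves through:

```rust
/// Hypothetical lifecycle of an intent token as described above.
enum IntentTokenState {
    /// Within its own time window; may create an update session.
    Valid,
    /// A session exists but was not committed; the intent token
    /// may re-start that session.
    InProgress,
    /// The session was committed; the token is consumed and can
    /// never create another session.
    Consumed,
}

/// A commit consumes the token; a cancelled or uncommitted session
/// leaves it valid so the session can be re-started.
fn on_session_end(committed: bool) -> IntentTokenState {
    if committed {
        IntentTokenState::Consumed
    } else {
        IntentTokenState::Valid
    }
}
```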
@@ -103,7 +103,7 @@ As a result, the built set of changes *is* persisted on the server in the creden
 as the user interacts with and builds the set of changes. This allows the server to enforce that the update
 session *must* represent a valid and complete set of compliant credentials before commit.
 
-The user may cancel the session at anytime, discarding any set of changes they had inflight. This allows
+The user may cancel the session at anytime, discarding any set of changes they had in-flight. This allows
 another session to now begin.
 
 If the user chooses to commit the changes, the server will assemble the changes into a modification
@@ -140,7 +140,7 @@ so that the server that receives the token can enforce the credential adheres to
 If the client successfully enrolls, a new entry for the enrollment is created in the database. This
 allows replication of the new credential to occur.
 
-The main session of the credential update can then check for the existance of this stub uuid in the
+The main session of the credential update can then check for the existence of this stub uuid in the
 db and wait for it to replicate in. This can be checked by the "polling" action.
 
 When it has been replicated in, and polling has found the credential, the credentials are added to the session. The credential
@@ -7,7 +7,7 @@ devices vary from desktops, laptops, tablets, mobile phones and more. Each of th
 different security and trust levels, as well as a variety of input methods.
 
 Historically authentication providers have *not* factored in multiple device classes to
-authentication leading to processes that are inconvinent to insecure for humans to handle when they
+authentication leading to processes that are inconvenient to insecure for humans to handle when they
 want to use their account between devices.
 
 Example of a Bad Workflow
|
||||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
|
||||||
In our example our laptop and phone both have platform authenticators, which are security devices
|
In our example our laptop and phone both have platform authenticators, which are security devices
|
||||||
bound to the platform (they are inseperable). Rather than using a platform authenticator we *may*
|
bound to the platform (they are inseparable). Rather than using a platform authenticator we *may*
|
||||||
allow a roaming authenticator to be used to bootstrap the phone's platform authenticator. An example
|
allow a roaming authenticator to be used to bootstrap the phone's platform authenticator. An example
|
||||||
of a roaming authenticator is a yubikey, which can be plugged into the laptop, and then disconnected
|
of a roaming authenticator is a yubikey, which can be plugged into the laptop, and then disconnected
|
||||||
and connected to the phone. This changes the steps of the process to be.
|
and connected to the phone. This changes the steps of the process to be.
|
||||||
|
|
|
@@ -27,7 +27,7 @@ rather than events that are fully resolved. This way within the changelog
 trim window, a server can be downgraded, and it's RUV move backwards, but the missing updates will be "replayed" backwards to it.
 
 Second, it means we have to consider making replication either version (typed)
-data agnostic *or* have CSN's reperesent a dataset version from the server which gates or blocks replication events from newer to older instances until *they* are upgraded.
+data agnostic *or* have CSN's represent a dataset version from the server which gates or blocks replication events from newer to older instances until *they* are upgraded.
 
 Having the version gate does have a good benefit. Imagine we have three servers
 A, B, C. We upgrade A and B, and they migrate UTF8STRING to XDATA. Server C has
|
||||||
may accept changes and will continue to provide them to A and B (provided all
|
may accept changes and will continue to provide them to A and B (provided all
|
||||||
other update resolution steps uphold). If we now revert B, the changes from A will
|
other update resolution steps uphold). If we now revert B, the changes from A will
|
||||||
not flow to B which has been downgraded, but C's changes that were accepted WILL
|
not flow to B which has been downgraded, but C's changes that were accepted WILL
|
||||||
continue to be acceptted by B. Similar with A. This means in a downgrade scenario
|
continue to be accepted by B. Similar with A. This means in a downgrade scenario
|
||||||
that any data writen on upgraded nodes that are downgraded will be lost, but
|
that any data written on upgraded nodes that are downgraded will be lost, but
|
||||||
that all replication as a whole will still be valid. This is good!
|
that all replication as a whole will still be valid. This is good!
|
||||||
|
|
||||||
It does mean we need to consider that we have to upgrade data as it comes in from
|
It does mean we need to consider that we have to upgrade data as it comes in from
|
||||||
|
|
|
@@ -136,7 +136,7 @@ account
     GET -> list the credentials
     DELETE ->
 /v1/account/{id}/_credential/{id}/_lock
-    POST -> lock this credential until time (or null for permament)
+    POST -> lock this credential until time (or null for permanent)
     DELETE -> unlock this account
 /v1/account/{id}/_radius
     GET -> get the accounts radius credentials
@@ -5,7 +5,7 @@ search term (filter) faster.
 
 ## World without indexing
 
-Almost all databases are built ontop of a key-value storage engine of some nature. In our case we
+Almost all databases are built on top of a key-value storage engine of some nature. In our case we
 are using (feb 2019) sqlite and hopefully SLED in the future.
 
 So our entries that contain sets of avas, these are serialised into a byte format (feb 2019, json
|
||||||
There tend to be two types of searches against a directory like Kanidm.
|
There tend to be two types of searches against a directory like Kanidm.
|
||||||
|
|
||||||
- Broad searches
|
- Broad searches
|
||||||
- Targetted single entry searches
|
- Targeted single entry searches
|
||||||
|
|
||||||
For broad searches, filter optimising does little - we just have to load those large idls, and use
|
For broad searches, filter optimising does little - we just have to load those large idls, and use
|
||||||
them. (Yes, loading the large idl and using it is still better than full table scan though!)
|
them. (Yes, loading the large idl and using it is still better than full table scan though!)
|
||||||
|
@@ -141,13 +141,13 @@ We load the single idl value for name, and then as we are below the test-thresho
 and apply the filter to entry ID 1 - yielding a match or no match.
 
 Notice in the second, by promoting the "smaller" idl, we were able to save the work of the idl load
-and intersection as our first equality of "name" was more targetted?
+and intersection as our first equality of "name" was more targeted?
 
 Filter optimisation is about re-arranging these filters in the server using our insight to data to
 provide faster searches and avoid indexes that are costly unless they are needed.
 
 In this case, we would _demote_ any filter where Eq(class, ...) to the _end_ of the And, because it
-is highly likely to be less targetted than the other Eq types. Another example would be promotion of
+is highly likely to be less targeted than the other Eq types. Another example would be promotion of
 Eq filters to the front of an And over a Sub term, wherh Sub indexes tend to be larger and have
 longer IDLs.
 
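The demotion and promotion rules in this hunk reduce to a sort over the And terms; a minimal sketch with hypothetical types, not the server's real filter representation:

```rust
/// Hypothetical simplified filter terms.
enum Filter {
    Eq(String, String),
    Sub(String, String),
}

/// Lower weight runs earlier: Eq before Sub, and Eq(class, ...) last,
/// since class matches tend to have the largest, least selective IDLs.
fn weight(f: &Filter) -> u8 {
    match f {
        Filter::Eq(attr, _) if attr == "class" => 2,
        Filter::Eq(..) => 0,
        Filter::Sub(..) => 1,
    }
}

/// Re-order the terms of an And so cheap, selective terms run first.
fn optimise_and(terms: &mut [Filter]) {
    terms.sort_by_key(weight);
}
```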
@@ -182,7 +182,7 @@ the tables as:
 
 They will be structured as string, string for both - where the uuid and name column matches the
 correct direction, and is the primary key. We could use a single table, but if we change to sled we
-need to split this, so we pre-empt this change and duplicate the data here.
+need to split this, so we preempt this change and duplicate the data here.
 
 # Indexing States
 
@@ -1,7 +1,7 @@
 Trust Design and Thoughts
 -------------------------
 
-Trust is a process where users and groups of a seperate kanidm instance may be granted access
+Trust is a process where users and groups of a separate kanidm instance may be granted access
 to resources through this system. Trust is a one way concept, but of course, could be implemented
 twice in each direction to achieve bidirectional trust.
 
@@ -9,9 +9,9 @@ Why?
 ----
 
 There are a number of reasons why a trust configuration may be desired. You may have
-a seperate business to customer instance, where business users should be able to authenticate
+a separate business to customer instance, where business users should be able to authenticate
 to customer resources, but not the inverse. You may have two businesses merge or cooperate and
-require resource sharing. It allows seperation of high value credentials onto different infrastructure.
+require resource sharing. It allows separation of high value credentials onto different infrastructure.
 You could also potentially use trust as a method of sync between
 between a different IDM project and this.
 
@@ -50,7 +50,7 @@ There are different ways we can scope a trust out, each with pros-cons. Here are
   a whitelist.
 * Fractional Replication - similar to the GC in AD, replicate in a subset of your data, but then
   ask for redirects or other information. This is used with 389 and RO servers where you may only
-  replicate a subset of accounts to branch offices or a seperate backend.
+  replicate a subset of accounts to branch offices or a separate backend.
 
 Each of these has pros and cons, good bad, and different models. They each achieve different things. For example,
 the Kerberos style trust creates silos where the accounts credential material is stored (in the home
@@ -84,7 +84,7 @@ So with a lot of though, I'm going to go with fractional replication.
 * Forwarding - I don't want credentials to be forwarded, or sso to be forwarded.
 * Cred Silo - I want this because it means you have defined boundaries of where security material is stored by who.
 * PII limit - I want this as you can control who-has-what PII on the system side.
-* Group Mgmt - I want this as it enables rbac and familar group management locally for remote and local entries.
+* Group Mgmt - I want this as it enables rbac and familiar group management locally for remote and local entries.
 * Invite Ext - On the fence - cool idea, but not sure how it fits into kanidm with trusts.
 * Distributed - I don't want this because it's model is really different to what kani is trying to be
 * Client Switched - I don't want this because clients should only know they trust an IDM silo, and that does the rest.
@@ -113,7 +113,7 @@ With the fractional case in mind, this means we have sets of use cases that exis
 * RADIUS authentication to a different network infra in the trusting domain (but the Radius creds are local to the site)
 * Limiting presence of credentials in cloud (but making public key credentials avail)
 * Limiting distribution of personal information to untrusted sites
-* Creating administration domains or other business hierachies that may exist in some complex scenarios
+* Creating administration domains or other business hierarchies that may exist in some complex scenarios
 
 We need to consider how to support these use cases of course :)
 
@@ -196,7 +196,7 @@ if multiple urls exist in the trustanchor, we should choose randomly which to co
 authentications. If a URL is not available, we move to the next URL (failover)
 
 We could consider in-memory caching these values, but then we have to consider the cache expiry
-and management of this data. Additionally types like TOTP aren't cachable. I think we should
+and management of this data. Additionally types like TOTP aren't cacheable. I think we should
 avoid caching in these cases.
 
 Auth Scenarios
@@ -257,7 +257,7 @@ Excluding items from Domain B from replicating back
 In a situation where domain A trusts B, and inverse B trusts A, then A will contain trust stubs to
 entries in B.
 
-Due to the use of spn's we can replicate only our entries for domain to the trust reciever.
+Due to the use of spn's we can replicate only our entries for domain to the trust receiver.
 
 ::
 
@@ -280,7 +280,7 @@ How do we get the domain at setup time for spn? We already require domain for we
 we write this into the system_info?
 
 This means we need to determine a difference between a localgroup and a group that will
-be synced for trust. This may require a seperate class or label?
+be synced for trust. This may require a separate class or label?
 
 We need to make name -> SPN on groups/accounts that can be sent across a trust boundary.
 
@@ -304,7 +304,7 @@ is a requirement for replication anyway, and SID regeneration is not a complex t
 unlikely that we would ever see duplicates anyway as this is a 32bit field.
 
 An alternate option is to have the stub objects generate ids, but to have a trusted_uuid field
-that is used for replication checking, and a seperate CSN for trust replication.
+that is used for replication checking, and a separate CSN for trust replication.
 
 
 Webauthn
@@ -90,7 +90,7 @@ beyond the attribute name:
 We will accept (and prefer) that Kanidm attribute names are provided in the LDAP filter for applications
 that can be customised.
 
-Compatability Attributes
+Compatibility Attributes
 ========================
 
 Some attributes exist in LDAP that have no direct equivalent in Kanidm. These are often from existing
@@ -101,7 +101,7 @@ two are:
 * EntryUUID
 
 These should be provided through an ldapCompat class in kanidm, and require no other transformation. They
-may require generation from the server, as legacy applications expect their existance and kanidm created
+may require generation from the server, as legacy applications expect their existence and kanidm created
 accounts would need the attributes to exist to work with these.
 
 Entry and Attribute Transformations
@@ -69,7 +69,7 @@ This leads to the following log categories:
   * The unique event ID is provided in any operation success or failure.
 * Security (aka audit)
   * Filtering of security sensitive attributes (via debug/display features)
-  * Display of sufficent information to establish a security picture of connected actions via the user's uuid/session id.
+  * Display of sufficient information to establish a security picture of connected actions via the user's uuid/session id.
   * Tracking of who-changed-what-when-why
 * Replication
   * TODO
@@ -78,7 +78,7 @@ It can be seen pretty quickly that multiple message types are useful across cate
 example, the unique event id for all messages, how hard errors affect operation errors
 or how an operation error can come from a security denial.
 
-Logging must also remain a seperate thread and async for performance.
+Logging must also remain a separate thread and async for performance.
 
 This means that the best way to declare these logs is a unified log which can be filtered based
 on the admins or consumers needs.
@@ -42,7 +42,7 @@ where the inverse look up becomes N operations to resolve the full structure.
 Design
 ------
 
-Due to the nature of this plugin, there is a single attribute - 'member' - whos content is examined
+Due to the nature of this plugin, there is a single attribute - 'member' - whose content is examined
 to build the relationship to others - 'memberOf'. We will examine a single group and user situation
 without nesting. We assume the user already exists, as the situation where the group exists and we add
 the user can't occur due to refint.
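The relationship this hunk describes is just an inversion of the 'member' attribute; a minimal sketch (hypothetical flat types, not the plugin's real entry model) for the non-nested case:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Hypothetical flat view: group name -> list of member names.
/// Invert it to member name -> set of group names (their memberOf).
fn member_of(groups: &BTreeMap<String, Vec<String>>) -> BTreeMap<String, BTreeSet<String>> {
    let mut inverse: BTreeMap<String, BTreeSet<String>> = BTreeMap::new();
    for (group, members) in groups {
        for member in members {
            inverse
                .entry(member.clone())
                .or_default()
                .insert(group.clone());
        }
    }
    inverse
}
```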
@@ -16,9 +16,9 @@ Situation
 
 We have a user with a device E(nrolled), and a device N(ew) that they wish to be able to use.
 
-Each device contains a unique webauthn device that is inseperable from the device.
+Each device contains a unique webauthn device that is inseparable from the device.
 
-Each device may be connected to a seperate Kanidm instance - IE we can not assume that
+Each device may be connected to a separate Kanidm instance - IE we can not assume that
 the data in the system may be point-in-time consistent due to replication as an asynchronous
 process.
 
|
||||||
Possible Changes
|
Possible Changes
|
||||||
================
|
================
|
||||||
|
|
||||||
Do not require the approval step, as an OTP has already been provided, which is evidence of possesion
|
Do not require the approval step, as an OTP has already been provided, which is evidence of possession
|
||||||
of an account which has sufficent permissions.
|
of an account which has sufficient permissions.
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@@ -41,7 +41,7 @@ To prevent attackers from bruteforcing these Backup code at a high rate, we need
 
 * After 5 incorrect attempts the account is rate limited by an increasing time window within the API. This limit delays the response to the auth (regardless of success)
 * After X attempts, the account is soft locked on the affected server only for a time window of Y increasing up to Z.
-* If the attempts continue, the account is hard locked and signalled to an external system that this has occured.
+* If the attempts continue, the account is hard locked and signalled to an external system that this has occurred.
 (See designs/account_policy.rst#rate-limiting for details)
 
 Access Control
@@ -32,7 +32,7 @@ code and exchanges it for a valid token that may be provided to the client.
 The resource server may optionally contact the token introspection endpoint about the
 provided oauth token, which yields extra metadata about the identity that holds the
 token and completed the authorisation. This metadata may include identity information,
-but also may include extended metadata, sometimes refered to as "claims". Claims are
+but also may include extended metadata, sometimes referred to as "claims". Claims are
 information bound to a token based on properties of the session that may allow
 the resource server to make extended authorisation decisions without the need
 to contact the authorisation server to arbitrate.
@@ -42,7 +42,7 @@ In this model, Kanidm will function as the authorisation server.
 Kanidm UAT Claims
 -----------------
 
-To ensure that we can filter and make certain autorisation decisions, the Kanidm UAT
+To ensure that we can filter and make certain authorisation decisions, the Kanidm UAT
 needs to be extended with extra claims similar to the token claims. Since we have the
 ability to strongly type these, we can add these to the UAT. These should include.
 
@@ -55,7 +55,7 @@ provided to be able to take over high privilege kanidm accounts.
 For this reason, the ability to import passwords must be limited to:
 
 * A service account with strong credentials
-* high_privilige accounts may NOT have their passwords set in this manner
+* high_privilege accounts may NOT have their passwords set in this manner
 
 Once kanidm implements password badlist checks in the auth path, passwords that have been synced
 into kanidm via this route may not function as they are found in the badlist, causing the account
@@ -39,8 +39,8 @@ of a positive user experience, having MSCHAPv2 is essential.
 Nice To Have
 ------------
 
-To limit the scope of damage in an attack, RADIUS passwords should be seperate from the main
-account password due to their weak storage. Because these are seperate and shared between devices
+To limit the scope of damage in an attack, RADIUS passwords should be separate from the main
+account password due to their weak storage. Because these are separate and shared between devices
 this does lead to some interesting behaviours we can use.
 
 Storing the RADIUS password in plaintext now becomes an option, meaning that we can have autoconfiguration
|
||||||
* There is only a single RADIUS configuration profile per-kanidm topology
|
* There is only a single RADIUS configuration profile per-kanidm topology
|
||||||
* A user only requires a single RADIUS infrastructure password as the network is considered a single entity and resources are arbitrated elsewhere.
|
* A user only requires a single RADIUS infrastructure password as the network is considered a single entity and resources are arbitrated elsewhere.
|
||||||
* Groups define what vlan a users belongs to (and possibly other ip resources).
|
* Groups define what vlan a users belongs to (and possibly other ip resources).
|
||||||
* The users RADIUS password is seperate from their main account, and has no other function than RADIUS authentication.
|
* The users RADIUS password is separate from their main account, and has no other function than RADIUS authentication.
|
||||||
* The users RADIUS password can be server-side generated, and have pathways to distribute it to devices that remove the need for human interaction
|
* The users RADIUS password can be server-side generated, and have pathways to distribute it to devices that remove the need for human interaction
|
||||||
|
|
||||||
Design Details
|
Design Details
|
||||||
|
|
|
@@ -39,7 +39,7 @@ potentially. This is an argument for the filter-scan method, that checks if any
 the class, deleted, and if it does, we do not wrap with the AndNot term.
 
 
-The best solution is a whole seperate interface (/search/recycle/) that has it's own access controls
+The best solution is a whole separate interface (/search/recycle/) that has it's own access controls
 that is used. By default searches don't look at recycled items (but internal do). This interface would
 remove that limitation, but would require access controls to prevent read/changes.
@@ -10,7 +10,7 @@ At first glance it may seem correct to no-op a change where the state is:
 
 with a "purge name; add name william".
 
-However, this doesn't express the full possibities of the replication topology
+However, this doesn't express the full possibilities of the replication topology
 in the system. The follow events could occur:
 
 ::
@@ -22,9 +22,9 @@ in the system. The follow events could occur:
     del: name
     n: w
 
-The events of DB 1 seem correct in isolation, to no-op the del and re-add, however
+The events of DB 1 seem correct in isolation, to no-op the delete and re-add, however
 when the changelogs will be replayed, they will then cause the events of DB2 to
-be the final state - whet the timing of events on DB 1 should actually be the
+be the final state - whereas the timing of events on DB 1 should actually be the
 final state.
 
 To contrast if you no-oped the purge name:
@@ -166,7 +166,7 @@ To achieve this, we store a list of CID's and what entries were affected within
 
 One can imagine a situation where two servers change the entry, but between
 those changes the read-only is supplied the CID. We don't care in what order they did change,
-only that a change *must* have occured.
+only that a change *must* have occurred.
 
 So example: let's take entry A with server A and B, and read-only R.
 
@@ -300,7 +300,7 @@ to prevent this situation such as:
     A 0/3    A 0/1    A 0/3
     B 0/1    B 0/4    B 0/1
 
-In this case, one can imagine B would then supply data, and when A recieved B's changes, it would again
+In this case, one can imagine B would then supply data, and when A received B's changes, it would again
 supply to R. However, this can be easily avoided by adhering to the following:
 
 * A server can only supply to a read-only if all of the supplying server's RUV CSN MAX are contained
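A rough sketch of such a containment check over RUV CSN maxima (illustrative Rust; the RUV shape
here is an assumption):

```rust
use std::collections::BTreeMap;

/// Server id -> (csn_min, csn_max), an illustrative RUV shape.
type Ruv = BTreeMap<&'static str, (u64, u64)>;

/// A supplier may only supply a read-only when every CSN MAX the
/// consumer holds is already covered by the supplier's own view,
/// i.e. the consumer's RUV is contained within the supplier's RUV.
fn contains(supplier: &Ruv, consumer: &Ruv) -> bool {
    consumer.iter().all(|(sid, (_min, cmax))| {
        supplier
            .get(sid)
            .map(|(_smin, smax)| smax >= cmax)
            .unwrap_or(false)
    })
}

fn main() {
    let supplier: Ruv = [("A", (0, 3)), ("B", (0, 1))].into_iter().collect();
    let consumer: Ruv = [("A", (0, 1))].into_iter().collect();
    println!("can supply: {}", contains(&supplier, &consumer));
}
```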
@@ -367,7 +367,7 @@ the following:
     GRUV A:
         R (A: 0/0, )
 
-So A has connected to R and polled the RUV and recieved a 0/0. We now can supply our changes to
+So A has connected to R and polled the RUV and received a 0/0. We now can supply our changes to
 R:
 
 ::
@@ -380,7 +380,7 @@ R:
 
 As R is a read-only it has no concept of the changelog, so it sets MIN to MAX.
 
-Now, we then poll the RUV again. Protocol wise RUV polling should be seperate to suppling of data!
+Now, we then poll the RUV again. Protocol wise RUV polling should be separate to supplying of data!
 
 ::
 
@@ -19,21 +19,21 @@ has a number of negative cultural connotations, and is not used by this project.
 * Read-Write server
 
 This is a server that is fully writable. It accepts external client writes, and these
-writes are propogated to the topology. Many read-write servers can be in a topology
+writes are propagated to the topology. Many read-write servers can be in a topology
 and written to in parallel.
 
 * Transport Hub
 
 This is a server that is not writeable to clients, but can accept incoming replicated
-writes, and then propogates these to other servers. All servers that are directly after
-this server inthe topology must not be a read-write, as writes may not propogate back
+writes, and then propagates these to other servers. All servers that are directly after
+this server in the topology must not be a read-write, as writes may not propagate back
 from the transport hub. IE the following is invalid
 
 ::
 
     RW 1 ---> HUB <--- RW 2
 
-Note the replication direction in this, and that changes into HUB will not propogate
+Note the replication direction in this, and that changes into HUB will not propagate
 back to RW 1 or RW 2.
 
 * Read-Only server
@@ -43,7 +43,7 @@ incoming replicated changes, and has no outbound replication agreements.
 
 
 Replication systems are dictated by CAP theorem. This is a theory that states from
-"consistency, availability and paritition tolerance" you may only have two of the
+"consistency, availability and partition tolerance" you may only have two of the
 three at any time.
 
 * Consistency
@@ -55,12 +55,12 @@ see the latest data.
 
 * Availability
 
-This is the property that every request will recieve a non-error response without
+This is the property that every request will receive a non-error response without
 the guarantee that the data is "up to date".
 
 * Partition Tolerance
 
-This is the property that your topology in the face of patition tolerance will
+This is the property that your topology in the face of partition tolerance will
 continue to provide functional services (generally reads).
 
 Almost all systems expect partition tolerance, so the choice becomes between consistency
@@ -82,7 +82,7 @@ at in a system like Kanidm.
 
 Object Level inconsistency occurs when two read-write servers who are partitioned,
 both allocate the same entry UUID to an entry. Since the uuid is the "primary key"
-which anchors all other changes, and can not be duplicated, when the paritioning
+which anchors all other changes, and can not be duplicated, when the partitioning
 is resolved, the replication will occur, and one of the two items must be discarded
 as inconsistent.
 
@@ -121,7 +121,7 @@ assertions in a change.
 
 The possible entry states are:
 
-* NonExistant
+* NonExistent
 * Live
 * Recycled
 * Tombstone
@@ -153,12 +153,12 @@ a CID.
 
 ::
 
-    create + NonExistant -> Live
+    create + NonExistent -> Live
     modify + Live -> Live
     recycle + Live -> Recycled
     revive + Recycled -> Live
     tombstoned + Recycled -> Tombstone
-    purge + Tombstone -> NonExistant
+    purge + Tombstone -> NonExistent
 
 .. image:: diagrams/object-lifecycle-states.png
     :width: 800
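The transition table above maps directly onto a small state machine. A minimal sketch
(illustrative Rust, not the project's actual implementation):

```rust
/// Illustrative entry lifecycle states from the table above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    NonExistent,
    Live,
    Recycled,
    Tombstone,
}

#[derive(Debug, Clone, Copy)]
enum Change {
    Create,
    Modify,
    Recycle,
    Revive,
    Tombstoned,
    Purge,
}

/// Apply a change; invalid combinations yield None (a conflict/skip).
fn apply(state: State, change: Change) -> Option<State> {
    use Change::*;
    use State::*;
    match (change, state) {
        (Create, NonExistent) => Some(Live),
        (Modify, Live) => Some(Live),
        (Recycle, Live) => Some(Recycled),
        (Revive, Recycled) => Some(Live),
        (Tombstoned, Recycled) => Some(Tombstone),
        (Purge, Tombstone) => Some(NonExistent),
        _ => None,
    }
}

fn main() {
    let mut s = State::NonExistent;
    for c in [Change::Create, Change::Recycle, Change::Tombstoned, Change::Purge] {
        s = apply(s, c).expect("valid transition");
        println!("{s:?}");
    }
}
```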
@@ -171,12 +171,12 @@ Entry Change Log
 
 Within Kanidm id2entry is the primary store of active entry state representation. However
 the content of id2entry is a reflection of the series of modifications and changes that
-have applied to create that entitiy. As a result id2entry can be considered as an entry
+have applied to create that entity. As a result id2entry can be considered as an entry
 state cache.
 
-The true stable storage and representation for an entry will exist in a seperate Entry
+The true stable storage and representation for an entry will exist in a separate Entry
 Change Log type. Each entry will have it's own internal changelog that represents the
-changes that have occured in the entries lifetime and it's relevant state at that time.
+changes that have occurred in the entries lifetime and it's relevant state at that time.
 
 The reason for making a per-entry change log is to allow fine grained testing of the
 conflict resolution state machine on a per-entry scale, and then to be able to test
@@ -324,7 +324,7 @@ snapshot that describes the entry as the sum of previous changes.
     │                         │
     └─────────────────────────┘
 
-In our example here we would find the snapshot preceeding our newely inserted CID (in this case
+In our example here we would find the snapshot preceding our newly inserted CID (in this case
 our Anchor) and from that we would then replay all subsequent changes to ensure they apply
 correctly (or are rejected as conflicts).
 
@@ -362,7 +362,7 @@ it's simpler and correct to continue to consider them.
 Changelog Comparison - Replication Update Vector (RUV)
 ======================================================
 
-A changelog is a single servers knowledge of all changes that have occured in history
+A changelog is a single servers knowledge of all changes that have occurred in history
 of a topology. Of course, the point of replication is that multiple servers are exchanging
 their changes, and potentially that a server must proxy changes to other servers. For this
 to occur we need a method of comparing changelog states, and then allowing fractional
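To make fractional transmission concrete, a sketch of selecting only the changes a consumer has
not yet seen, by comparing against its CSN MAX (illustrative Rust; the changelog shape is an
assumption):

```rust
use std::collections::BTreeMap;

/// Per-origin-server changelog: CSN -> change description (illustrative).
type Changelog = BTreeMap<u64, String>;

/// Given our changelog for one origin and the consumer's CSN MAX for
/// that origin, select only the changes the consumer has not yet seen.
fn fractional_delta(log: &Changelog, consumer_max: u64) -> Vec<(&u64, &String)> {
    log.range(consumer_max + 1..).collect()
}

fn main() {
    let mut log = Changelog::new();
    log.insert(1, "create e1".into());
    log.insert(2, "modify e1".into());
    log.insert(3, "recycle e1".into());
    // The consumer has seen up to CSN 1, so it needs CSN 2 and 3.
    for (csn, change) in fractional_delta(&log, 1) {
        println!("send {csn}: {change}");
    }
}
```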
@@ -518,12 +518,12 @@ re-apply these changes, discarding changes that would be invalid for those state
 
 ::
 
-    create + NonExistant -> Live
+    create + NonExistent -> Live
     modify + Live -> Live
     recycle + Live -> Recycled
     revive + Recycled -> Live
     tombstoned + Recycled -> Tombstone
-    purge(*) + Tombstone -> NonExistant
+    purge(*) + Tombstone -> NonExistent
 
 Lets now show a conflict case:
 
@@ -538,18 +538,18 @@ Lets now show a conflict case:
 
 Notice that both servers create E1. In order to resolve this conflict, we use the only
 synchronisation mechanism that we possess - time. On Server B at T3 when the changelog
-of Server A is recieved, the events are replayed, and linearised to:
+of Server A is received, the events are replayed, and linearised to:
 
 ::
 
-    T0: NonExistant E1 # For illustration only
+    T0: NonExistent E1 # For illustration only
     T1: Create E1 (from A)
     T2: Create E1 (from B)
 
 As the event at T2 can not be valid, the change at T2 is *skipped* - E1 from B is turned
 into a conflict + recycled entry. See conflict UUID generation above.
 
-Infact, having this state machine means we can see exactly what can and can not be resolved
+In fact, having this state machine means we can see exactly what can and can not be resolved
 correctly as combinations. Here is the complete list of valid combinations.
 
 ::
 
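A sketch of that replay-and-skip behaviour for duplicate creates (illustrative Rust; not the
project's conflict handler):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    NonExistent,
    Live,
}

/// Replay time-ordered create events for one entry id; the first valid
/// create wins, later duplicates are skipped as conflicts.
fn replay(events: &[(&str, &str)]) -> (State, Vec<String>) {
    let mut state = State::NonExistent;
    let mut conflicts = Vec::new();
    for (time, origin) in events {
        if state == State::NonExistent {
            state = State::Live; // the create applies
        } else {
            conflicts.push(format!("{time}: create from {origin} skipped"));
        }
    }
    (state, conflicts)
}

fn main() {
    let events = [("T1", "Server A"), ("T2", "Server B")];
    let (state, conflicts) = replay(&events);
    println!("final: {state:?}");
    for c in &conflicts {
        println!("conflict: {c}");
    }
}
```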
@@ -3,7 +3,7 @@ Resource Limits
 
 As security sensitive software, kanidm must be "available" (as defined by
 confidentiality, integrity, and availability). This means that as a service we must
-be able to handle a large volume of potentially malicous traffic, and still able
+be able to handle a large volume of potentially malicious traffic, and still able
 to serve legitimate requests without fault or failure.
 
 To achieve this, the resources of the server must be managed and distributed to allow
@@ -11,7 +11,7 @@ potentially thousands of operations per second, while preventing exhaustion of t
 resources.
 
 Kanidm is structured as a database, where each request requires a process
-to resolve that query into an answer. This could be a request for authetication
+to resolve that query into an answer. This could be a request for authentication
 which is a true/false response, or a request for an identity so that we can
 determine their groups for authorisation, or even just a request to find
 someone's email address in a corporate directory context.
@@ -16,7 +16,7 @@ during ProtoEntry <-> Entry transformations. This means that renames of objects,
 references, but does mean they continue to render their linkage correctly.
 * We can implement native referential integrity for the types rather than relying on admin and
   plugin configuration to match the internal types.
-* User defined classes will inherit referential behavious by using
+* User defined classes will inherit referential behaviour by using
   the correct schema attribute types.
 
 Implementation
 
@@ -44,7 +44,7 @@ metadata of the "creation" of the session, this is why we use the stub form that
 its expiry.
 
 On a replication attribute conflict, an expired state will always "overrule" an active state, even
-if the CID of expiry preceeds that of the active state. We merge the expiry into the metadata in
+if the CID of expiry precedes that of the active state. We merge the expiry into the metadata in
 this case.
 
 Token Usage / Revocation
@@ -58,8 +58,8 @@ Both are described, but we have chosen to use positive validation with limited i
 Positive Validation
 -------------------
 
-This is a positive validation of the validity of a session. The abscence of a positive session
-existance, is what implies revocation.
+This is a positive validation of the validity of a session. The absence of a positive session
+existence, is what implies revocation.
 
 The session will have a "grace window", to account for replication delay. This is so that if the
 session is used on another kanidm server which has not yet received the latest revocation list
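A minimal sketch of the grace window check (illustrative Rust using only std; the window length
and function names are assumptions):

```rust
use std::time::{Duration, SystemTime};

// Assumed window covering replication delay; illustrative value.
const GRACE_WINDOW: Duration = Duration::from_secs(300);

/// Positive validation: a session we positively know of is valid. An
/// unknown session is treated as revoked, unless it was issued so
/// recently that replication may not have delivered it yet.
fn session_valid(known: bool, issued_at: SystemTime, now: SystemTime) -> bool {
    if known {
        return true;
    }
    now.duration_since(issued_at)
        .map(|age| age <= GRACE_WINDOW)
        .unwrap_or(false)
}

fn main() {
    let now = SystemTime::now();
    let issued = now - Duration::from_secs(60);
    println!("unknown but fresh: {}", session_valid(false, issued, now));
}
```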
@@ -87,7 +87,7 @@ Clean Up
 ^^^^^^^^
 
 Sessions can only be cleaned up once a sufficient replication window has passed, and the session is in an expired state,
-since the abscence of the session also implies revocation has occured.
+since the absence of the session also implies revocation has occurred.
 This way once the changelog window is passed, we assume the specific session in question can be removed.
 
 An active session *should never* be deleted, it *must* pass through the expired state first. This is so that
@@ -107,13 +107,13 @@ When a session is invalidated, it's session id is added to a "site-wide" revocat
 the maximum time of use of that session id.
 
 When a session is check as part of a standard UAT check, or an OAuth 2.0 refresh, if the session
-id is present in the revocation list, it is denied access. Abscence from the revocation list implies
+id is present in the revocation list, it is denied access. Absence from the revocation list implies
 the session remains valid.
 
 This method requires no gracewindow, since the replication of the revocation list will be bound to the
 performance of replication and it's distribution.
 
-The risk is that all sessions *must* have a maximum life, so that their existance in the revocation
+The risk is that all sessions *must* have a maximum life, so that their existence in the revocation
 list is not unbounded. This version may have a greater risk of disk/memory usage due to the size of
 the list that may exist in large deployments.
 
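As a sketch of the list based check (illustrative Rust; a real list would also be pruned once
entries pass their maximum session lifetime):

```rust
use std::collections::HashSet;

/// Site-wide revocation list keyed by session id (illustrative).
struct RevocationList {
    revoked: HashSet<u128>,
}

impl RevocationList {
    /// Presence in the list denies access; absence implies the session
    /// remains valid. No grace window is needed because the list
    /// itself replicates.
    fn is_revoked(&self, session_id: u128) -> bool {
        self.revoked.contains(&session_id)
    }
}

fn main() {
    let list = RevocationList { revoked: [42u128].into_iter().collect() };
    println!("session 42 revoked: {}", list.is_revoked(42));
    println!("session 7 revoked:  {}", list.is_revoked(7));
}
```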
@@ -137,7 +137,7 @@ and intervention. As a result, it is safer and more thorough for us to provide a
 system, which accurately describes the exact state of what is valid at a point in time.
 
 The specific restore scenario is that a token is issued at time A. A backup is taken now at time B.
-Next the user revokes the token at time C, and replication has not yet occured. At this point the backup
+Next the user revokes the token at time C, and replication has not yet occurred. At this point the backup
 from time B was restored.
 
 In this scenario, without access to the token itself, or without scouring logs to find the session
@@ -187,9 +187,9 @@ A "worst case" scenario is when we involve system failure along with an attempte
 have three kanidm servers in replication.
 
 * Refresh Token A is stolen, but not used used.
-* Token A expires. The refesh is sent to Server 1. Token B is issued.
+* Token A expires. The refresh is sent to Server 1. Token B is issued.
 * Before replication can occur, Server 1 goes down.
-* Stolen refesh Token A is exchanged on Server 3.
+* Stolen refresh Token A is exchanged on Server 3.
 * Token B is used on Server 2.
 * Replication between server 2 and 3 occurs.
 
@@ -199,9 +199,9 @@ legitimate token B can continue to be used.
 To achieve this we need to determine an order of the events. Let's assume a better scenario first.
 
 * Refresh Token A is stolen, but not used used.
-* Token A expires. The refesh is sent to Server 1. Token B is issued.
+* Token A expires. The refresh is sent to Server 1. Token B is issued.
 * Token B is used on Server 1.
-* Stolen refesh Token A is exchanged on Server 1.
+* Stolen refresh Token A is exchanged on Server 1.
 
 We store a "refresh id" in the refresh token, and a issued-by id in the access token. Additionally
 we store an issued-at timestamp (from the replication CID) in both.
@@ -219,16 +219,16 @@ gracewindow to assume that our issuance was *valid*.
 In this design we can see the following would occur.
 
 * Refresh Token A is stolen, but not used used.
-* Token A expires. The refesh is sent to Server 1. Token B is issued. (This updates the issued-by id)
+* Token A expires. The refresh is sent to Server 1. Token B is issued. (This updates the issued-by id)
 * Token B is used on Server 1. (valid, issued-by id matches)
-* Stolen refesh Token A is exchanged on Server 1. (invalid, not the currently defined refresh token)
+* Stolen refresh Token A is exchanged on Server 1. (invalid, not the currently defined refresh token)
 
 In the first case.
 
 * Refresh Token A is stolen, but not used used.
-* Token A expires. The refesh is sent to Server 1. Token B is issued. (updates the issued-by id)
+* Token A expires. The refresh is sent to Server 1. Token B is issued. (updates the issued-by id)
 * Before replication can occur, Server 1 goes down.
-* Stolen refesh Token A is exchanged on Server 3. Token C is issued (updates the issued-by id)
+* Stolen refresh Token A is exchanged on Server 3. Token C is issued (updates the issued-by id)
 * Token B is used on Server 2. (valid, matches the current defined issued-by id)
 * Token B is used on Server 3. (valid, within gracewindow even though issued-by is incorrect)
 * Replication between server 2 and 3 occurs. (Conflict occurs in session. Second issued-by is revoked, meaning token C is now invalid)
 
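A sketch of the access token check against the session's currently defined issued-by id, with a
grace window for replication lag (illustrative Rust; the names and window length are assumptions):

```rust
use std::time::{Duration, SystemTime};

const GRACE_WINDOW: Duration = Duration::from_secs(300); // assumed

/// Illustrative token/session shapes for the rotation scheme above.
struct AccessToken {
    issued_by: u64,
    issued_at: SystemTime,
}

struct Session {
    current_issued_by: u64,
}

fn token_valid(tok: &AccessToken, sess: &Session, now: SystemTime) -> bool {
    if tok.issued_by == sess.current_issued_by {
        // Matches the currently defined refresh issuance.
        return true;
    }
    // Mismatch: accept only while replication may not have converged.
    now.duration_since(tok.issued_at)
        .map(|age| age <= GRACE_WINDOW)
        .unwrap_or(false)
}

fn main() {
    let now = SystemTime::now();
    let sess = Session { current_issued_by: 2 };
    let matching = AccessToken { issued_by: 2, issued_at: now };
    let fresh_mismatch = AccessToken { issued_by: 1, issued_at: now };
    println!("matching: {}", token_valid(&matching, &sess, now));
    println!("mismatch in grace: {}", token_valid(&fresh_mismatch, &sess, now));
}
```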
@@ -21,7 +21,7 @@ This will allow filtering on sudo=true, meaning that certain default access cont
 altered to enforce that they require sudo mode.
 
 Some accounts by default represent a high level of privilege. These should have implicit sudo
-granted when they are autheticated. This will be based on a group membership idm_hp_implicit_sudo
+granted when they are authenticated. This will be based on a group membership idm_hp_implicit_sudo
 and should only apply to admin/idm_admin by default. This will pin the sudo expiry to the expiry
 time of the session (rather than a shorter time).
 
@@ -54,7 +54,7 @@ It was considered to provide default ACP's that would protect system items. This
 * it would require a "deny" acp type, and I do not wish to create this, as people could then create their own deny rules (always incorrect!)
 * There would be a lot of acp's involved in this protection (but acp's are expressive enough to provide it!)
 * The acp's would need a self-referencing acp to protect themselves from modification.
-* Having a seperate plugin to protect this will be faster than acp processing because we check less filters (But this is not a strong argument)
+* Having a separate plugin to protect this will be faster than acp processing because we check less filters (But this is not a strong argument)
 * the plugin can provide targeted error messages about why they were denied, rather than a generic acp denied message.
 * the plugin can provide detailed testing of edge cases in a confined manner
 
@@ -15,7 +15,7 @@ by extracting the last 32 bits.
 Why only gid number?
 --------------------
 
-It's a common misconception that uid is the only seperation on linux that matters. When a user
+It's a common misconception that uid is the only separation on linux that matters. When a user
 account exists, it has a primary user id AND a primary group id. Default umask grants rw to any
 member of the same primary group id, which leads to misconfigurations where an admin in the intent
 of saying "all users belong to default_users" ends up granting all users the right to read and write
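A sketch of deriving the gid by extracting the last 32 bits of the account uuid, as described
above (illustrative Rust; Kanidm's real code uses its own uuid handling):

```rust
/// Derive a posix gid from a 128-bit account uuid by extracting the
/// last 32 bits (illustrative).
fn gid_from_uuid(uuid: u128) -> u32 {
    (uuid & 0xffff_ffff) as u32
}

fn main() {
    let uuid: u128 = 0x0dc2_378f_aeee_4b3b_b8e4_bdc5_2702_3b29;
    println!("gidnumber: {}", gid_from_uuid(uuid));
}
```

Because the gid is a pure function of the uuid, the client can recompute it without any extra
stored state, which is what lets the user private group be implied rather than stored.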
@@ -28,7 +28,7 @@ SSSD's dynamic gid allocation from AD and FreeIPA) make effort to assign a user-
 to combat this issue.
 
 Instead of creating a group per account, we instead *imply* that the gidnumber *is* the uidnumber,
-and that a posixaccount *implies* the existance of a user private group that the pam/nsswitch
+and that a posixaccount *implies* the existence of a user private group that the pam/nsswitch
 tools will generate on the client. This also guarantees that posixgroups will never conflict or
 overlap with the uid namespace with the attr uniqueness plugin.
 
@@ -72,7 +72,7 @@ origin = "https://idm.example.com:8443"
 #   Defaults to "" (no path set)
 # path = "/var/lib/kanidm/backups/"
 #
-# The schedule to run online backups. All times are interpretted in UTC.
+# The schedule to run online backups. All times are interpreted in UTC.
 # The format of the cron expression is:
 #
 # sec min hour day of month month day of week year
@@ -14,7 +14,7 @@ bindaddress = "[::]:8443"
 # To preserve the original IP of the caller, these systems
 # will often add a header such as "Forwarded" or
 # "X-Forwarded-For". If set to true, then this header is
-# respected as the "authoritive" source of the IP of the
+# respected as the "authoritative" source of the IP of the
 # connected client. If you are not using a load balancer
 # then you should leave this value as default.
 # Defaults to false
@@ -90,7 +90,7 @@ if ( $LastExitCode -ne 0 ){
     exit 1
 }
 
-Write-Output "Generating the certficate signing request"
+Write-Output "Generating the certificate signing request"
 openssl req -sha256 -config "${ALTNAME_FILE}" -days 31 -new -extensions v3_req -key "${KEYFILE}" -out "${CSRFILE}"
 if ( $LastExitCode -ne 0 ){
     exit 1
@@ -162,7 +162,7 @@ openssl req -batch -config "${CANAME_FILE}" \
 echo "Generating the server private key..."
 openssl ecparam -genkey -name prime256v1 -noout -out "${KEYFILE}"
 
-echo "Generating the certficate signing request..."
+echo "Generating the certificate signing request..."
 openssl req -sha256 -new \
     -batch \
     -config "${ALTNAME_FILE}" -extensions v3_req \
@@ -145,7 +145,7 @@ git commit -m 'Commit message' change_file.rs ...
 git push <myfork/origin> <feature-branch-name>
 ```
 
-If you receive advice or make further changes, just keep commiting to the branch, and pushing to
+If you receive advice or make further changes, just keep committing to the branch, and pushing to
 your branch. When we are happy with the code, we'll merge in GitHub, meaning you can now clean up
 your branch.
 
@@ -307,7 +307,7 @@ To speed up testing across platforms, we're leveraging GitHub actions to build c
 use.
 
 Whenever code is merged with the `master` branch of Kanidm, containers are automatically built for
-`kanidmd` and `radius`. Sometimes they fail to build, but we'll try to keep them avilable.
+`kanidmd` and `radius`. Sometimes they fail to build, but we'll try to keep them available.
 
 To find information on the packages,
 [visit the Kanidm packages page](https://github.com/orgs/kanidm/packages?repo_name=kanidm).
@@ -5,7 +5,7 @@ for these data. As a result, there are many concepts and important details to un
 
 ## Service Accounts vs Person Accounts
 
-Kanidm seperates accounts into two types. Person accounts (or persons) are intended for use by
+Kanidm separates accounts into two types. Person accounts (or persons) are intended for use by
 humans that will access the system in an interactive way. Service accounts are intended for use by
 computers or services that need to identify themself to Kanidm. Generally a person or group of
 persons will be responsible for and will manage service accounts. Because of this distinction these
@@ -32,7 +32,7 @@ There are two builtin system administration accounts.
 
 `admin` is the default service account which has privileges to configure and administer kanidm as a
 whole. This account can manage access controls, schema, integrations and more. However the `admin`
-can not manage persons by default to seperate the priviliges. As this is a service account is is
+can not manage persons by default to separate the privileges. As this is a service account it is
 intended for limited use.
 
 `idm_admin` is the default service account which has privileges to create persons and to manage
@@ -42,7 +42,7 @@ Both the `admin` and the `idm_admin` user should _NOT_ be used for daily activit
 initial system configuration, and for disaster recovery scenarios. You should delegate permissions
 as required to named user accounts instead.
 
-The majority of the builtin groups are privilige groups that provide rights over Kanidm
+The majority of the builtin groups are privilege groups that provide rights over Kanidm
 administrative actions. These include groups for account management, person management (personal and
 sensitive data), group management, and more.
 
@@ -10,7 +10,7 @@ override even if applicable. They should only be created by system access profil
 changes must be denied.
 
 Access profiles are stored as entries and are dynamically loaded into a structure that is more
-efficent for use at runtime. `Schema` and its transactions are a similar implementation.
+efficient for use at runtime. `Schema` and its transactions are a similar implementation.
 
 ## Search Requirements
 
@@ -28,7 +28,7 @@ An example:
 > `legalName`), and their public `email`.
 
 Worded a bit differently. You need permission over the scope of entries, you need to be able to read
-the attribute to filter on it, and you need to be able to read the attribute to recieve it in the
+the attribute to filter on it, and you need to be able to read the attribute to receive it in the
 result entry.
 
 If Alice searches for `(&(name=william)(secretdata=x))`, we should not allow this to proceed because
@@ -74,7 +74,7 @@ acp class user: Pres(name) allow, Pres(desc) deny. Invert and Append
 
 So the filter now is:
 
-```
+```text
 And: {
     AndNot: {
         Eq("class", "user")
@@ -90,7 +90,7 @@ This would now only allow access to the `name` and `description` of the class `g
 
 If we extend this to a third, this would work. A more complex example:
 
-```
+```text
 search {
     action: allow
     targetscope: Eq("class", "group")
@@ -153,7 +153,7 @@ An example:
 ## Create Requirements
 
 A `create` profile defines the following limits to what objects can be created, through the
-combination of filters and atttributes.
+combination of filters and attributes.
 
 An example:
 
@@ -211,7 +211,7 @@ CHANGE: Receiver should be a group, and should be single value/multivalue? Can _
 
 Example profiles:
 
-```
+```text
 search {
     action: allow
     receiver: Eq("memberof", "admins")
@@ -344,7 +344,7 @@ exist! However, each one must still list their respective actions to allow prope
 The set of access controls is checked, and the set where receiver matches the current identified
 user is collected. These then are added to the users requested search as:
 
-```
+```text
 And(<User Search Request>, Or(<Set of Search Profile Filters))
 ```
 
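A sketch of that wrapping step over a toy filter AST (illustrative Rust; Kanidm's real filter type
differs):

```rust
/// A toy filter AST, enough to show the access-control wrapping.
#[derive(Debug, Clone)]
enum Filter {
    Eq(String, String),
    And(Vec<Filter>),
    Or(Vec<Filter>),
}

/// Bound the user's search by the union of the applicable
/// search-profile filters: And(request, Or(profiles)).
fn apply_search_profiles(request: Filter, profiles: Vec<Filter>) -> Filter {
    Filter::And(vec![request, Filter::Or(profiles)])
}

fn main() {
    let request = Filter::Eq("name".into(), "william".into());
    let profiles = vec![Filter::Eq("class".into(), "person".into())];
    println!("{:?}", apply_search_profiles(request, profiles));
}
```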
@@ -356,7 +356,7 @@ set of attrs has to be checked to determine what of that entry can be displayed.
 three entries, A, B, C. An ACI that allows read of "name" on A, B exists, and a read of "mail" on B,
 C. The correct behaviour is then:
 
-```
+```text
 A: name
 B: name, mail
 C: mail
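A sketch of that per-entry attribute reduction (illustrative Rust; real ACIs carry receivers and
scopes, elided here):

```rust
use std::collections::{BTreeMap, BTreeSet};

/// For each entry, union the attributes every matching ACI allows,
/// then reduce the entry to exactly that set.
fn reduce<'a>(
    entries: &[&'a str],
    acis: &[(&[&'a str], &'a str)], // (entries it applies to, attr allowed)
) -> BTreeMap<&'a str, BTreeSet<&'a str>> {
    let mut out = BTreeMap::new();
    for e in entries {
        let allowed: BTreeSet<&str> = acis
            .iter()
            .filter(|(scope, _)| scope.contains(e))
            .map(|(_, attr)| *attr)
            .collect();
        out.insert(*e, allowed);
    }
    out
}

fn main() {
    let entries = ["A", "B", "C"];
    let acis: [(&[&str], &str); 2] = [(&["A", "B"], "name"), (&["B", "C"], "mail")];
    for (e, attrs) in reduce(&entries, &acis) {
        println!("{e}: {attrs:?}");
    }
}
```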
@@ -370,7 +370,7 @@ faster method, but initially a simple version is needed.
 
 Delete is similar to search, however there is the risk that the user may say something like:
 
-```
+```text
 Pres("class").
 ```
 
@@ -26,13 +26,13 @@ the features that satisfy it.
 
 ### Refactor of default access controls
 
-The current default privileges will need to be refactored to improve seperation of privilege and
+The current default privileges will need to be refactored to improve separation of privilege and
 improved delegation of finer access rights.
 
 ### Access profiles target specifiers instead of filters
 
 Access profiles should target a list of groups for who the access profile applies to, and who
-recieves the access it is granting.
+receives the access it is granting.
 
 Alternately an access profile could target "self" so that self-update rules can still be expressed.
 
@@ -50,10 +50,10 @@ resolve step in some cases.
 
 ### Filter based groups
 
-These are groups who's members are dynamicly allocated based on a filter query. This allows a
+These are groups whose members are dynamically allocated based on a filter query. This allows a
 similar level of dynamic group management as we have currently with access profiles, but with the
 additional ability for them to be used outside of the access control context. This is the "bridge"
-allowing us to move from filter based access controls to "group" targetted.
+allowing us to move from filter based access controls to "group" targeted.
 
 A risk of filter based groups is "infinite churn" because of recursion. This can occur if you had a
 rule such a "and not memberof = self" on a dynamic group. Because of this, filters on dynamic groups
@@ -83,9 +83,9 @@ mode and this enforces rules on session expiry.
 
 ## Access Control Use Cases
 
-### Default Roles / Seperation of Privilege
+### Default Roles / Separation of Privilege
 
-By default we attempt to seperate privileges so that "no single account" has complete authority over
+By default we attempt to separate privileges so that "no single account" has complete authority over
 the system.
 
 Satisfied by:
@@ -156,7 +156,7 @@ Satisfied by:
 For ux/ui integration, being able to list oauth2 applications that are accessible to the user would
 be a good feature. To limit "who" can see the oauth2 applications that an account can access a way
 to "allow read" but by proxy of the related users of the oauth2 service. This will require access
-controls to be able to interept the oauth2 config and provide rights based on that.
+controls to be able to interpret the oauth2 config and provide rights based on that.
 
 Satisfied by:
 
@@ -1,7 +1,7 @@
 # Oauth2 Application Listing
 
 A feature of some other IDM systems is to also double as a portal to linked applications. This
-allows a convinent access point for users to discover and access linked applications without having
+allows a convenient access point for users to discover and access linked applications without having
 to navigate to them manually. This naturally works quite well since it means that the user is
 already authenticated, and the IDM becomes the single "gateway" to accessing other applications.
 
@@ -5,7 +5,7 @@ this, we need the capability to have "pluggable" synchronisation drivers. This i
 deployments will be able to use our generic versions, or may have customisations they wish to
 perform that are unique to them.
 
-To achieve this we need a layer of seperation - This effectively becomes an "extract, transform,
+To achieve this we need a layer of separation - This effectively becomes an "extract, transform,
 load" process. In addition this process must be _stateful_ where it can be run multiple times or
 even continuously and it will bring kanidm into synchronisation.
 
@@ -106,7 +106,7 @@ For this reason a syncprovider is a derivative of a service account, which also
 the _state_ of the synchronisation operation. An example of this is that LDAP syncrepl provides a
 cookie defining the "state" of what has been "consumed up to" by the ETL bridge. During the load
 phase the modified entries _and_ the cookie are persisted. This means that if the operation fails
-the cookie also rolls back allowing a retry of the sync. If it suceeds the next sync knows that
+the cookie also rolls back allowing a retry of the sync. If it succeeds the next sync knows that
 kanidm is in the correct state. Graphically:
 
 ┌────────────┐   ┌────────────┐   ┌────────────┐
@@ -128,7 +128,7 @@ kanidm is in the correct state. Graphically:
 └────────────┘   └────────────┘   └────────────┘
 
 At any point the operation _may_ fail, so by locking the state with the upload of entries this
-guarantees correct upload has suceeded and persisted. A success really means it!
+guarantees correct upload has succeeded and persisted. A success really means it!
 
 ## SCIM
 
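A sketch of persisting the entries and the sync cookie together so they commit or roll back as
one unit (illustrative Rust; the store shape is an assumption):

```rust
/// Illustrative transactional store: entries and the sync cookie are
/// written in the same transaction, so a failure rolls both back.
struct Store {
    entries: Vec<String>,
    cookie: Option<String>,
}

impl Store {
    fn load_batch(&mut self, batch: Vec<String>, new_cookie: String) -> Result<(), String> {
        // Stage changes; only commit if every step succeeds.
        let mut staged = self.entries.clone();
        for e in batch {
            if e.is_empty() {
                return Err("invalid entry, transaction rolled back".into());
            }
            staged.push(e);
        }
        // Commit point: entries and cookie move together.
        self.entries = staged;
        self.cookie = Some(new_cookie);
        Ok(())
    }
}

fn main() {
    let mut store = Store { entries: Vec::new(), cookie: None };
    store
        .load_batch(vec!["e1".into(), "e2".into()], "cookie-2".into())
        .unwrap();
    println!("cookie now: {:?}", store.cookie);
}
```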
@ -49,7 +49,7 @@ This question is normally asked because people want to setup multiple Kanidm ser
|
||||||
single database.
|
single database.
|
||||||
|
|
||||||
Kanidm does not use SQL as a _database_. Kanidm uses SQL as a durable key-value store and Kanidm
|
Kanidm does not use SQL as a _database_. Kanidm uses SQL as a durable key-value store and Kanidm
|
||||||
implements it's own database, caching, querying, optimisation and indexing ontop of that key-value
|
implements it's own database, caching, querying, optimisation and indexing on top of that key-value
|
||||||
store.
|
store.
|
||||||
|
|
||||||
As a result, because Kanidm specifically implements it's own cache layer above the key-value store
|
As a result, because Kanidm specifically implements it's own cache layer above the key-value store
|
||||||
|
|
|
@ -44,7 +44,7 @@ For this reason, when you search the LDAP interface, Kanidm will make some mappi
|
||||||
- All other entries are direct subordinates of the domain\_info for DN purposes.
|
- All other entries are direct subordinates of the domain\_info for DN purposes.
|
||||||
- Distinguished Names (DNs) are generated from the spn, name, or uuid attribute.
|
- Distinguished Names (DNs) are generated from the spn, name, or uuid attribute.
|
||||||
- Bind DNs can be remapped and rewritten, and may not even be a DN during bind.
|
- Bind DNs can be remapped and rewritten, and may not even be a DN during bind.
|
||||||
- The '\*' and '+' operators can not be used in conjuction with attribute lists in searches.
|
- The '\*' and '+' operators can not be used in conjunction with attribute lists in searches.
|
||||||
|
|
||||||
These decisions were made to make the path as simple and effective as possible, relying more on the
|
These decisions were made to make the path as simple and effective as possible, relying more on the
|
||||||
Kanidm query and filter system than attempting to generate a tree-like representation of data. As
|
Kanidm query and filter system than attempting to generate a tree-like representation of data. As
|
||||||
|
|
|
@ -30,7 +30,7 @@ code and exchanges it for a valid token that may be provided to the user's brows
|
||||||
The resource server may then optionally contact the token introspection endpoint of the
|
The resource server may then optionally contact the token introspection endpoint of the
|
||||||
authorisation server about the provided OAuth token, which yields extra metadata about the identity
|
authorisation server about the provided OAuth token, which yields extra metadata about the identity
|
||||||
that holds the token from the authorisation. This metadata may include identity information, but
|
that holds the token from the authorisation. This metadata may include identity information, but
|
||||||
also may include extended metadata, sometimes refered to as "claims". Claims are information bound
|
also may include extended metadata, sometimes referred to as "claims". Claims are information bound
|
||||||
to a token based on properties of the session that may allow the resource server to make extended
|
to a token based on properties of the session that may allow the resource server to make extended
|
||||||
authorisation decisions without the need to contact the authorisation server to arbitrate.
|
authorisation decisions without the need to contact the authorisation server to arbitrate.
|
||||||
|
|
||||||
|
|
|
@ -216,7 +216,7 @@ password required pam_kanidm.so
|
||||||
|
|
||||||
# /etc/pam.d/common-session-pc
|
# /etc/pam.d/common-session-pc
|
||||||
# Controls setup of the user session once a successful authentication and authorisation has
|
# Controls setup of the user session once a successful authentication and authorisation has
|
||||||
# occured.
|
# occurred.
|
||||||
session optional pam_systemd.so
|
session optional pam_systemd.so
|
||||||
session required pam_limits.so
|
session required pam_limits.so
|
||||||
session optional pam_unix.so try_first_pass
|
session optional pam_unix.so try_first_pass
|
||||||
|
|
|
@ -56,7 +56,7 @@ We believe this is a reasonable decision and is a low risk to security because:
|
||||||
|
|
||||||
### Service Accounts Do Not Have Radius Access
|
### Service Accounts Do Not Have Radius Access
|
||||||
|
|
||||||
Due to the design of service accounts, they do not have access to radius for credential assignemnt.
|
Due to the design of service accounts, they do not have access to radius for credential assignment.
|
||||||
If you require RADIUS usage with a service account you _may_ need to use EAP-TLS or some other
|
If you require RADIUS usage with a service account you _may_ need to use EAP-TLS or some other
|
||||||
authentication method.
|
authentication method.
|
||||||
|
|
||||||
|
|
|
@ -48,7 +48,7 @@ kanidm system pw-badlist upload "path/to/badlist" [...]
|
||||||
|
|
||||||
Multiple bad lists can be listed and uploaded at once. These are preprocessed to identify and remove
|
Multiple bad lists can be listed and uploaded at once. These are preprocessed to identify and remove
|
||||||
passwords that zxcvbn and our password rules would already have eliminated. That helps to make the
|
passwords that zxcvbn and our password rules would already have eliminated. That helps to make the
|
||||||
bad list more efficent to operate over at run time.
|
bad list more efficient to operate over at run time.
|
||||||
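A minimal sketch of that preprocessing idea, assuming the zxcvbn crate and an illustrative score threshold (this is not the actual Kanidm implementation):

```rust
use zxcvbn::zxcvbn;

/// Keep only candidates that zxcvbn alone would *not* already reject;
/// weaker passwords are blocked anyway, so storing them is wasted work.
fn preprocess_badlist(candidates: Vec<String>) -> Vec<String> {
    candidates
        .into_iter()
        .filter(|pw| {
            zxcvbn(pw, &[])
                .map(|estimate| estimate.score() >= 3) // threshold is illustrative
                .unwrap_or(false)
        })
        .collect()
}
```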
|
|
||||||
## Password Rotation
|
## Password Rotation
|
||||||
|
|
||||||
|
|
|
@ -31,14 +31,14 @@ schema will block the creation).
|
||||||
Due to the requirement that a user have a UPG for security, many systems create these as two
|
Due to the requirement that a user have a UPG for security, many systems create these as two
|
||||||
independent items. For example in /etc/passwd and /etc/group:
|
independent items. For example in /etc/passwd and /etc/group:
|
||||||
|
|
||||||
```
|
```text
|
||||||
# passwd
|
# passwd
|
||||||
william:x:654401105:654401105::/home/william:/bin/zsh
|
william:x:654401105:654401105::/home/william:/bin/zsh
|
||||||
# group
|
# group
|
||||||
william:x:654401105:
|
william:x:654401105:
|
||||||
```
|
```
|
||||||
|
|
||||||
Other systems like FreeIPA use a plugin that generates a UPG as a seperate group entry on creation
|
Other systems like FreeIPA use a plugin that generates a UPG as a separate group entry on creation
|
||||||
of the account. This means there are two entries for an account, and they must be kept in lock-step.
|
of the account. This means there are two entries for an account, and they must be kept in lock-step.
|
||||||
|
|
||||||
Kanidm does neither of these. As the GID number of the user must be unique, and a user implies the
|
Kanidm does neither of these. As the GID number of the user must be unique, and a user implies the
|
||||||
|
@ -122,8 +122,8 @@ separate type of membership for POSIX members required.
|
||||||
Due to the way that Podman operates, in some cases using the Kanidm client inside non-root
|
Due to the way that Podman operates, in some cases using the Kanidm client inside non-root
|
||||||
containers with Kanidm accounts may fail with an error such as:
|
containers with Kanidm accounts may fail with an error such as:
|
||||||
|
|
||||||
```
|
```text
|
||||||
ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid
|
ERROR[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid
|
||||||
```
|
```
|
||||||
|
|
||||||
This is a fault in Podman and how it attempts to provide non-root containers, when UID/GIDs are
|
This is a fault in Podman and how it attempts to provide non-root containers, when UID/GIDs are
|
||||||
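One commonly suggested workaround, offered as a hedged example rather than official guidance, is to allocate a subordinate UID/GID range for the affected user and let Podman pick it up:

```text
# run as root; the range values are only an example
echo "NAME:100000:65536" >> /etc/subuid
echo "NAME:100000:65536" >> /etc/subgid
podman system migrate
```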
|
|
|
@ -80,7 +80,7 @@ revive group1 // no members
|
||||||
```
|
```
|
||||||
|
|
||||||
These issues could be looked at again in the future, but for now we think that deletes of groups are
|
These issues could be looked at again in the future, but for now we think that deletes of groups are
|
||||||
rare - we expect recycle bin to save you in "opps" moments, and in a majority of cases you may
|
rare - we expect recycle bin to save you in "oops" moments, and in a majority of cases you may
|
||||||
delete a group or a user and then restore them. To handle this series of steps requires extra code
|
delete a group or a user and then restore them. To handle this series of steps requires extra code
|
||||||
complexity in how we flag operations. For more, see
|
complexity in how we flag operations. For more, see
|
||||||
[This issue on github](https://github.com/kanidm/kanidm/issues/177).
|
[This issue on github](https://github.com/kanidm/kanidm/issues/177).
|
||||||
|
|
|
@ -85,7 +85,7 @@ appear into Kanidm. This is affected by how frequently you choose to run the syn
|
||||||
|
|
||||||
If the sync tool fails, you can investigate details in the Kanidmd server output.
|
If the sync tool fails, you can investigate details in the Kanidmd server output.
|
||||||
|
|
||||||
The sync tool can run "indefinetly" if you wish for Kanidm to always import data from the external
|
The sync tool can run "indefinitely" if you wish for Kanidm to always import data from the external
|
||||||
source.
|
source.
|
||||||
|
|
||||||
## Finalising the Sync Account
|
## Finalising the Sync Account
|
||||||
|
|
|
@ -472,7 +472,7 @@ impl KanidmClient {
|
||||||
let matching = ver == EXPECT_VERSION;
|
let matching = ver == EXPECT_VERSION;
|
||||||
|
|
||||||
if !matching {
|
if !matching {
|
||||||
warn!(server_version = ?ver, client_version = ?EXPECT_VERSION, "Mismatched client and server version - features may not work, or other unforseen errors may occur.")
|
warn!(server_version = ?ver, client_version = ?EXPECT_VERSION, "Mismatched client and server version - features may not work, or other unforeseen errors may occur.")
|
||||||
}
|
}
|
||||||
|
|
||||||
debug_assert!(matching);
|
debug_assert!(matching);
|
||||||
|
|
|
@ -116,7 +116,7 @@ pub struct AccessTokenResponse {
|
||||||
#[serde(skip_serializing_if = "Option::is_none")]
|
#[serde(skip_serializing_if = "Option::is_none")]
|
||||||
pub refresh_token: Option<String>,
|
pub refresh_token: Option<String>,
|
||||||
#[serde(skip_serializing_if = "Option::is_none")]
|
#[serde(skip_serializing_if = "Option::is_none")]
|
||||||
/// Space seperated list of scopes that were approved, if this differs from the
|
/// Space separated list of scopes that were approved, if this differs from the
|
||||||
/// original request.
|
/// original request.
|
||||||
pub scope: Option<String>,
|
pub scope: Option<String>,
|
||||||
#[serde(skip_serializing_if = "Option::is_none")]
|
#[serde(skip_serializing_if = "Option::is_none")]
|
||||||
|
@ -219,7 +219,7 @@ pub enum SubjectType {
|
||||||
|
|
||||||
#[derive(Serialize, Deserialize, Debug, PartialEq, Eq)]
|
#[derive(Serialize, Deserialize, Debug, PartialEq, Eq)]
|
||||||
#[serde(rename_all = "UPPERCASE")]
|
#[serde(rename_all = "UPPERCASE")]
|
||||||
// WE REFUSE TO SUPPORT NONE. DONT EVEN ASK. IT WONT HAPPEN.
|
// WE REFUSE TO SUPPORT NONE. DON'T EVEN ASK. IT WON'T HAPPEN.
|
||||||
pub enum IdTokenSignAlg {
|
pub enum IdTokenSignAlg {
|
||||||
ES256,
|
ES256,
|
||||||
RS256,
|
RS256,
|
||||||
|
|
|
@ -53,7 +53,7 @@ pub struct ScimSyncPerson {
|
||||||
}
|
}
|
||||||
|
|
||||||
// Need to allow this because clippy is broken and doesn't realise scimentry is out of crate
|
// Need to allow this because clippy is broken and doesn't realise scimentry is out of crate
|
||||||
// so this can't be fufilled
|
// so this can't be fulfilled
|
||||||
#[allow(clippy::from_over_into)]
|
#[allow(clippy::from_over_into)]
|
||||||
impl Into<ScimEntry> for ScimSyncPerson {
|
impl Into<ScimEntry> for ScimSyncPerson {
|
||||||
fn into(self) -> ScimEntry {
|
fn into(self) -> ScimEntry {
|
||||||
|
@ -107,7 +107,7 @@ pub struct ScimExternalMember {
|
||||||
}
|
}
|
||||||
|
|
||||||
// Need to allow this because clippy is broken and doesn't realise scimentry is out of crate
|
// Need to allow this because clippy is broken and doesn't realise scimentry is out of crate
|
||||||
// so this can't be fufilled
|
// so this can't be fulfilled
|
||||||
#[allow(clippy::from_over_into)]
|
#[allow(clippy::from_over_into)]
|
||||||
impl Into<ScimComplexAttr> for ScimExternalMember {
|
impl Into<ScimComplexAttr> for ScimExternalMember {
|
||||||
fn into(self) -> ScimComplexAttr {
|
fn into(self) -> ScimComplexAttr {
|
||||||
|
@ -135,7 +135,7 @@ pub struct ScimSyncGroup {
|
||||||
}
|
}
|
||||||
|
|
||||||
// Need to allow this because clippy is broken and doesn't realise scimentry is out of crate
|
// Need to allow this because clippy is broken and doesn't realise scimentry is out of crate
|
||||||
// so this can't be fufilled
|
// so this can't be fulfilled
|
||||||
#[allow(clippy::from_over_into)]
|
#[allow(clippy::from_over_into)]
|
||||||
impl Into<ScimEntry> for ScimSyncGroup {
|
impl Into<ScimEntry> for ScimSyncGroup {
|
||||||
fn into(self) -> ScimEntry {
|
fn into(self) -> ScimEntry {
|
||||||
|
|
|
@ -261,7 +261,7 @@ impl PartialEq for OperationError {
|
||||||
}
|
}
|
||||||
|
|
||||||
/* ===== higher level types ===== */
|
/* ===== higher level types ===== */
|
||||||
// These are all types that are conceptually layers ontop of entry and
|
// These are all types that are conceptually layers on top of entry and
|
||||||
// friends. They allow us to process more complex requests and provide
|
// friends. They allow us to process more complex requests and provide
|
||||||
// domain specific fields for the purposes of IDM, over the normal
|
// domain specific fields for the purposes of IDM, over the normal
|
||||||
// entry/ava/filter types. These relate deeply to schema.
|
// entry/ava/filter types. These relate deeply to schema.
|
||||||
|
@ -733,7 +733,7 @@ pub struct BackupCodesView {
|
||||||
/* ===== low level proto types ===== */
|
/* ===== low level proto types ===== */
|
||||||
|
|
||||||
// ProtoEntry vs Entry
|
// ProtoEntry vs Entry
|
||||||
// There is a good future reason for this seperation. It allows changing
|
// There is a good future reason for this separation. It allows changing
|
||||||
// the in memory server core entry type, without affecting the protoEntry type
|
// the in memory server core entry type, without affecting the protoEntry type
|
||||||
//
|
//
|
||||||
|
|
||||||
|
|
|
@ -339,7 +339,7 @@ eap {
|
||||||
|
|
||||||
#
|
#
|
||||||
# You can selectively disable TLS versions for
|
# You can selectively disable TLS versions for
|
||||||
# compatability with old client devices.
|
# compatibility with old client devices.
|
||||||
#
|
#
|
||||||
# If your system has OpenSSL 1.1.0 or greater, do NOT
|
# If your system has OpenSSL 1.1.0 or greater, do NOT
|
||||||
# use these. Instead, set tls_min_version and
|
# use these. Instead, set tls_min_version and
|
||||||
|
|
|
@ -663,11 +663,11 @@ async fn totp_enroll_prompt(session_token: &CUSessionToken, client: &KanidmClien
|
||||||
}) => totp_secret,
|
}) => totp_secret,
|
||||||
Ok(status) => {
|
Ok(status) => {
|
||||||
debug!(?status);
|
debug!(?status);
|
||||||
eprintln!("An error occured -> InvalidState");
|
eprintln!("An error occurred -> InvalidState");
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
@ -728,7 +728,7 @@ async fn totp_enroll_prompt(session_token: &CUSessionToken, client: &KanidmClien
|
||||||
.idm_account_credential_update_cancel_mfareg(session_token)
|
.idm_account_credential_update_cancel_mfareg(session_token)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
} else {
|
} else {
|
||||||
println!("success");
|
println!("success");
|
||||||
}
|
}
|
||||||
|
@ -781,7 +781,7 @@ async fn totp_enroll_prompt(session_token: &CUSessionToken, client: &KanidmClien
|
||||||
.idm_account_credential_update_accept_sha1_totp(session_token)
|
.idm_account_credential_update_accept_sha1_totp(session_token)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
} else {
|
} else {
|
||||||
println!("success");
|
println!("success");
|
||||||
}
|
}
|
||||||
|
@ -792,7 +792,7 @@ async fn totp_enroll_prompt(session_token: &CUSessionToken, client: &KanidmClien
|
||||||
.idm_account_credential_update_cancel_mfareg(session_token)
|
.idm_account_credential_update_cancel_mfareg(session_token)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
} else {
|
} else {
|
||||||
println!("success");
|
println!("success");
|
||||||
}
|
}
|
||||||
|
@ -802,11 +802,11 @@ async fn totp_enroll_prompt(session_token: &CUSessionToken, client: &KanidmClien
|
||||||
}
|
}
|
||||||
Ok(status) => {
|
Ok(status) => {
|
||||||
debug!(?status);
|
debug!(?status);
|
||||||
eprintln!("An error occured -> InvalidState");
|
eprintln!("An error occurred -> InvalidState");
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -825,11 +825,11 @@ async fn passkey_enroll_prompt(session_token: &CUSessionToken, client: &KanidmCl
|
||||||
}) => pk_reg,
|
}) => pk_reg,
|
||||||
Ok(status) => {
|
Ok(status) => {
|
||||||
debug!(?status);
|
debug!(?status);
|
||||||
eprintln!("An error occured -> InvalidState");
|
eprintln!("An error occurred -> InvalidState");
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
@ -860,7 +860,7 @@ async fn passkey_enroll_prompt(session_token: &CUSessionToken, client: &KanidmCl
|
||||||
{
|
{
|
||||||
Ok(_) => println!("success"),
|
Ok(_) => println!("success"),
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
}
|
}
|
||||||
|
@ -943,7 +943,7 @@ async fn credential_update_exec(
|
||||||
{
|
{
|
||||||
Ok(status) => display_status(status),
|
Ok(status) => display_status(status),
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -970,7 +970,7 @@ async fn credential_update_exec(
|
||||||
eprintln!(" - {}", fb_item)
|
eprintln!(" - {}", fb_item)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
_ => eprintln!("An error occured -> {:?}", e),
|
_ => eprintln!("An error occurred -> {:?}", e),
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
println!("Successfully reset password.");
|
println!("Successfully reset password.");
|
||||||
|
@ -987,7 +987,7 @@ async fn credential_update_exec(
|
||||||
.idm_account_credential_update_remove_totp(&session_token)
|
.idm_account_credential_update_remove_totp(&session_token)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
} else {
|
} else {
|
||||||
println!("success");
|
println!("success");
|
||||||
}
|
}
|
||||||
|
@ -1012,10 +1012,10 @@ async fn credential_update_exec(
|
||||||
}
|
}
|
||||||
Ok(status) => {
|
Ok(status) => {
|
||||||
debug!(?status);
|
debug!(?status);
|
||||||
eprintln!("An error occured -> InvalidState");
|
eprintln!("An error occurred -> InvalidState");
|
||||||
}
|
}
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -1029,7 +1029,7 @@ async fn credential_update_exec(
|
||||||
.idm_account_credential_update_primary_remove(&session_token)
|
.idm_account_credential_update_primary_remove(&session_token)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
} else {
|
} else {
|
||||||
println!("success");
|
println!("success");
|
||||||
}
|
}
|
||||||
|
@ -1055,7 +1055,7 @@ async fn credential_update_exec(
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
eprintln!("An error occured pulling existing credentials -> {:?}", e);
|
eprintln!("An error occurred pulling existing credentials -> {:?}", e);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
let uuid_s: String = Input::new()
|
let uuid_s: String = Input::new()
|
||||||
|
@ -1071,13 +1071,13 @@ async fn credential_update_exec(
|
||||||
.interact_text()
|
.interact_text()
|
||||||
.expect("Failed to interact with interactive session");
|
.expect("Failed to interact with interactive session");
|
||||||
|
|
||||||
// Remeber, if it's NOT a valid uuid, it must have been empty as a termination.
|
// Remember, if it's NOT a valid uuid, it must have been empty as a termination.
|
||||||
if let Ok(uuid) = Uuid::parse_str(&uuid_s) {
|
if let Ok(uuid) = Uuid::parse_str(&uuid_s) {
|
||||||
if let Err(e) = client
|
if let Err(e) = client
|
||||||
.idm_account_credential_update_passkey_remove(&session_token, uuid)
|
.idm_account_credential_update_passkey_remove(&session_token, uuid)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
} else {
|
} else {
|
||||||
println!("success");
|
println!("success");
|
||||||
}
|
}
|
||||||
|
@ -1099,7 +1099,7 @@ async fn credential_update_exec(
|
||||||
.idm_account_credential_update_commit(&session_token)
|
.idm_account_credential_update_commit(&session_token)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
eprintln!("An error occured -> {:?}", e);
|
eprintln!("An error occurred -> {:?}", e);
|
||||||
} else {
|
} else {
|
||||||
println!("success");
|
println!("success");
|
||||||
}
|
}
|
||||||
|
|
|
@ -79,7 +79,7 @@ impl SynchOpt {
|
||||||
.default(false)
|
.default(false)
|
||||||
.with_prompt("Do you want to continue? This operation can NOT be undone.")
|
.with_prompt("Do you want to continue? This operation can NOT be undone.")
|
||||||
.interact()
|
.interact()
|
||||||
.unwrap()
|
.expect("Failed to get a valid response!")
|
||||||
{
|
{
|
||||||
info!("No changes were made");
|
info!("No changes were made");
|
||||||
return;
|
return;
|
||||||
|
@ -96,7 +96,7 @@ impl SynchOpt {
|
||||||
.default(false)
|
.default(false)
|
||||||
.with_prompt("Do you want to continue? This operation can NOT be undone.")
|
.with_prompt("Do you want to continue? This operation can NOT be undone.")
|
||||||
.interact()
|
.interact()
|
||||||
.unwrap()
|
.expect("Failed to get a valid response!")
|
||||||
{
|
{
|
||||||
info!("No changes were made");
|
info!("No changes were made");
|
||||||
return;
|
return;
|
||||||
|
|
|
@ -704,7 +704,7 @@ async fn main() {
|
||||||
if let Err(e) =
|
if let Err(e) =
|
||||||
handle_task_client(socket, &task_channel_tx, &mut task_channel_rx).await
|
handle_task_client(socket, &task_channel_tx, &mut task_channel_rx).await
|
||||||
{
|
{
|
||||||
error!("Task client error occured; error = {:?}", e);
|
error!("Task client error occurred; error = {:?}", e);
|
||||||
}
|
}
|
||||||
// If they DC we go back to accept.
|
// If they DC we go back to accept.
|
||||||
}
|
}
|
||||||
|
@ -727,7 +727,7 @@ async fn main() {
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
if let Err(e) = handle_client(socket, cachelayer_ref.clone(), &tc_tx).await
|
if let Err(e) = handle_client(socket, cachelayer_ref.clone(), &tc_tx).await
|
||||||
{
|
{
|
||||||
error!("handle_client error occured; error = {:?}", e);
|
error!("handle_client error occurred; error = {:?}", e);
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
|
@ -171,9 +171,9 @@ impl<'a> DbTxn<'a> {
|
||||||
}
|
}
|
||||||
|
|
||||||
pub fn commit(mut self) -> Result<(), ()> {
|
pub fn commit(mut self) -> Result<(), ()> {
|
||||||
// debug!("Commiting BE txn");
|
// debug!("Committing BE txn");
|
||||||
if self.committed {
|
if self.committed {
|
||||||
error!("Invalid state, SQL transaction was already commited!");
|
error!("Invalid state, SQL transaction was already committed!");
|
||||||
return Err(());
|
return Err(());
|
||||||
}
|
}
|
||||||
self.committed = true;
|
self.committed = true;
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
//! This module contains the server's async tasks that are called from the various frontend
|
//! This module contains the server's async tasks that are called from the various frontend
|
||||||
//! components to conduct operations. These are seperated based on protocol versions and
|
//! components to conduct operations. These are separated based on protocol versions and
|
||||||
//! if they are read or write transactions internally.
|
//! if they are read or write transactions internally.
|
||||||
|
|
||||||
pub mod v1_read;
|
pub mod v1_read;
|
||||||
|
|
|
@ -55,7 +55,7 @@ impl QueryServerReadV1 {
|
||||||
&(*x_ref)
|
&(*x_ref)
|
||||||
}
|
}
|
||||||
|
|
||||||
// The server only recieves "Message" structures, which
|
// The server only receives "Message" structures, which
|
||||||
// are whole self contained DB operations with all parsing
|
// are whole self contained DB operations with all parsing
|
||||||
// required complete. We still need to do certain validation steps, but
|
// required complete. We still need to do certain validation steps, but
|
||||||
// at this point our job is just to route to do_<action>
|
// at this point our job is just to route to do_<action>
|
||||||
|
|
|
@ -87,7 +87,7 @@ impl<State: Clone + Send + Sync + 'static> tide::Middleware<State> for StaticCon
|
||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Default)]
|
#[derive(Default)]
|
||||||
/// Adds the folloing headers to responses
|
/// Adds the following headers to responses
|
||||||
/// - x-frame-options
|
/// - x-frame-options
|
||||||
/// - x-content-type-options
|
/// - x-content-type-options
|
||||||
/// - cross-origin-resource-policy
|
/// - cross-origin-resource-policy
|
||||||
|
|
|
@ -1,6 +1,6 @@
|
||||||
///! Route-mapping magic for tide
|
///! Route-mapping magic for tide
|
||||||
///
|
///
|
||||||
/// Instead of adding routes with (for example) the .post method you add them with .mapped_post, pasing an instance of [RouteMap] and it'll do the rest...
|
/// Instead of adding routes with (for example) the .post method you add them with .mapped_post, passing an instance of [RouteMap] and it'll do the rest...
|
||||||
use serde::{Deserialize, Serialize};
|
use serde::{Deserialize, Serialize};
|
||||||
use tide::{Endpoint, Route};
|
use tide::{Endpoint, Route};
|
||||||
|
|
||||||
|
|
|
@ -265,7 +265,7 @@ pub async fn json_rest_event_delete_attr(
|
||||||
mut req: tide::Request<AppState>,
|
mut req: tide::Request<AppState>,
|
||||||
filter: Filter<FilterInvalid>,
|
filter: Filter<FilterInvalid>,
|
||||||
uuid_or_name: String,
|
uuid_or_name: String,
|
||||||
// Seperate for account_delete_id_radius
|
// Separate for account_delete_id_radius
|
||||||
attr: String,
|
attr: String,
|
||||||
) -> tide::Result {
|
) -> tide::Result {
|
||||||
let uat = req.get_current_uat();
|
let uat = req.get_current_uat();
|
||||||
|
|
|
@ -128,7 +128,7 @@ impl IntervalActor {
|
||||||
)
|
)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
error!(?e, "An online backup error occured.");
|
error!(?e, "An online backup error occurred.");
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
|
@ -123,7 +123,7 @@ async fn tls_acceptor(
|
||||||
match accept_result {
|
match accept_result {
|
||||||
Ok((tcpstream, client_socket_addr)) => {
|
Ok((tcpstream, client_socket_addr)) => {
|
||||||
// Start the event
|
// Start the event
|
||||||
// From the parms we need to create an SslContext.
|
// From the parameters we need to create an SslContext.
|
||||||
let mut tlsstream = match Ssl::new(tls_parms.context())
|
let mut tlsstream = match Ssl::new(tls_parms.context())
|
||||||
.and_then(|tls_obj| SslStream::new(tls_obj, tcpstream))
|
.and_then(|tls_obj| SslStream::new(tls_obj, tcpstream))
|
||||||
{
|
{
|
||||||
|
|
|
@ -95,7 +95,7 @@ fn setup_backend_vacuum(
|
||||||
|
|
||||||
// TODO #54: We could move most of the be/schema/qs setup and startup
|
// TODO #54: We could move most of the be/schema/qs setup and startup
|
||||||
// outside of this call, then pass in "what we need" in a cloneable
|
// outside of this call, then pass in "what we need" in a cloneable
|
||||||
// form, this way we could have seperate Idm vs Qs threads, and dedicated
|
// form, this way we could have separate Idm vs Qs threads, and dedicated
|
||||||
// threads for write vs read
|
// threads for write vs read
|
||||||
async fn setup_qs_idms(
|
async fn setup_qs_idms(
|
||||||
be: Backend,
|
be: Backend,
|
||||||
|
@ -456,7 +456,7 @@ pub async fn domain_rename_core(config: &Configuration) {
|
||||||
match r {
|
match r {
|
||||||
Ok(_) => info!("Domain Rename Success!"),
|
Ok(_) => info!("Domain Rename Success!"),
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
error!("Domain Rename Failed - Rollback has occured: {:?}", e);
|
error!("Domain Rename Failed - Rollback has occurred: {:?}", e);
|
||||||
std::process::exit(1);
|
std::process::exit(1);
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
@ -529,7 +529,7 @@ pub async fn recover_account_core(config: &Configuration, name: &str) {
|
||||||
Ok(new_pw) => match idms_prox_write.commit() {
|
Ok(new_pw) => match idms_prox_write.commit() {
|
||||||
Ok(_) => new_pw,
|
Ok(_) => new_pw,
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
error!("A critical error during commit occured {:?}", e);
|
error!("A critical error during commit occurred {:?}", e);
|
||||||
std::process::exit(1);
|
std::process::exit(1);
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
@ -587,7 +587,7 @@ impl CoreHandle {
|
||||||
impl Drop for CoreHandle {
|
impl Drop for CoreHandle {
|
||||||
fn drop(&mut self) {
|
fn drop(&mut self) {
|
||||||
if !self.clean_shutdown {
|
if !self.clean_shutdown {
|
||||||
eprintln!("⚠️ UNCLEAN SHUTDOWN OCCURED ⚠️ ");
|
eprintln!("⚠️ UNCLEAN SHUTDOWN OCCURRED ⚠️ ");
|
||||||
}
|
}
|
||||||
// Can't enable yet until we clean up unix_int cache layer test
|
// Can't enable yet until we clean up unix_int cache layer test
|
||||||
// debug_assert!(self.clean_shutdown);
|
// debug_assert!(self.clean_shutdown);
|
||||||
|
|
|
@ -41,7 +41,7 @@ ldap3_proto.workspace = true
|
||||||
libc.workspace = true
|
libc.workspace = true
|
||||||
libsqlite3-sys.workspace = true
|
libsqlite3-sys.workspace = true
|
||||||
num_enum.workspace = true
|
num_enum.workspace = true
|
||||||
# We need to explicitly ask for openssl-sys so that we get the version propogated
|
# We need to explicitly ask for openssl-sys so that we get the version propagated
|
||||||
# into the build.rs for legacy feature checks.
|
# into the build.rs for legacy feature checks.
|
||||||
openssl-sys.workspace = true
|
openssl-sys.workspace = true
|
||||||
openssl.workspace = true
|
openssl.workspace = true
|
||||||
|
|
|
@ -742,7 +742,7 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
/// Index Slope Analysis. For the purpose of external modules you can consider this as a
|
/// Index Slope Analysis. For the purpose of external modules you can consider this as a
|
||||||
/// module that generates "weights" for each index that we have. Smaller values are faster
|
/// module that generates "weights" for each index that we have. Smaller values are faster
|
||||||
/// indexes - larger values are more costly ones. This is not intended to yield perfect
|
/// indexes - larger values are more costly ones. This is not intended to yield perfect
|
||||||
/// weights. The intent is to seperate over obviously more effective indexes rather than
|
/// weights. The intent is to separate over obviously more effective indexes rather than
|
||||||
/// to min-max the fine tuning of these. Consider name=foo vs class=*. name=foo will always
|
/// to min-max the fine tuning of these. Consider name=foo vs class=*. name=foo will always
|
||||||
/// be better than class=*, but comparing name=foo to spn=foo is "much over muchness" since
|
/// be better than class=*, but comparing name=foo to spn=foo is "much over muchness" since
|
||||||
/// both are really fast.
|
/// both are really fast.
|
||||||
|
@ -755,7 +755,7 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
*
|
*
|
||||||
* Since we have the filter2idl threshold, we want to find "what is the smallest
|
* Since we have the filter2idl threshold, we want to find "what is the smallest
|
||||||
* and most unique index asap so we can exit faster". This allows us to avoid
|
* and most unique index asap so we can exit faster". This allows us to avoid
|
||||||
* loading larger most costly indexs that either have large idls, high variation
|
* loading larger most costly indexes that either have large idls, high variation
|
||||||
* or few keys and are likely to miss and have to go out to disk.
|
* or few keys and are likely to miss and have to go out to disk.
|
||||||
*
|
*
|
||||||
* A few methods were proposed, but thanks to advice from Perri Boulton (psychology
|
* A few methods were proposed, but thanks to advice from Perri Boulton (psychology
|
||||||
|
@ -874,7 +874,7 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
* the "slopeyness" aka the jank of the line, or more precisely, the angle.
|
* the "slopeyness" aka the jank of the line, or more precisely, the angle.
|
||||||
*
|
*
|
||||||
* Now we need a way to numerically compare these lines. Since the points could be
|
* Now we need a way to numerically compare these lines. Since the points could be
|
||||||
* anywere on our graph:
|
* anywhere on our graph:
|
||||||
*
|
*
|
||||||
* |
|
* |
|
||||||
* 4 + *
|
* 4 + *
|
||||||
|
@ -905,7 +905,7 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
* ───────────┼
|
* ───────────┼
|
||||||
* nkeys
|
* nkeys
|
||||||
*
|
*
|
||||||
* Since this is right angled we can use arctan to work out the degress of the line. This
|
* Since this is right angled we can use arctan to work out the degrees of the line. This
|
||||||
* gives us a value from 1.0 to 90.0 (We clamp to a minimum of 1.0, because we use 0 as "None"
|
* gives us a value from 1.0 to 90.0 (We clamp to a minimum of 1.0, because we use 0 as "None"
|
||||||
* in the NonZeroU8 type in filter.rs, which allows ZST optimisation)
|
* in the NonZeroU8 type in filter.rs, which allows ZST optimisation)
|
||||||
*
|
*
|
||||||
|
@ -914,7 +914,7 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
* to minimise this loss and then we convert.
|
* to minimise this loss and then we convert.
|
||||||
*
|
*
|
||||||
* And there we have it! A slope factor of the index! A way to compare these sets quickly
|
* And there we have it! A slope factor of the index! A way to compare these sets quickly
|
||||||
* at query optimisation time to minimse index access.
|
* at query optimisation time to minimise index access.
|
||||||
*/
|
*/
|
||||||
let slopes: HashMap<_, _> = data
|
let slopes: HashMap<_, _> = data
|
||||||
.into_iter()
|
.into_iter()
|
||||||
|
@ -938,7 +938,7 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
let l: u32 = data.len().try_into().unwrap_or(u32::MAX);
|
let l: u32 = data.len().try_into().unwrap_or(u32::MAX);
|
||||||
let c = f64::from(l);
|
let c = f64::from(l);
|
||||||
let mean = data.iter().take(u32::MAX as usize).sum::<f64>() / c;
|
let mean = data.iter().take(u32::MAX as usize).sum::<f64>() / c;
|
||||||
let varience: f64 = data
|
let variance: f64 = data
|
||||||
.iter()
|
.iter()
|
||||||
.take(u32::MAX as usize)
|
.take(u32::MAX as usize)
|
||||||
.map(|len| {
|
.map(|len| {
|
||||||
|
@ -948,7 +948,7 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
.sum::<f64>()
|
.sum::<f64>()
|
||||||
/ (c - 1.0);
|
/ (c - 1.0);
|
||||||
|
|
||||||
let sd = varience.sqrt();
|
let sd = variance.sqrt();
|
||||||
|
|
||||||
// This is saying ~85% of values will be at least this len or less.
|
// This is saying ~85% of values will be at least this len or less.
|
||||||
let sd_1 = mean + sd;
|
let sd_1 = mean + sd;
|
||||||
|
@ -956,14 +956,14 @@ impl<'a> IdlArcSqliteWriteTransaction<'a> {
|
||||||
} else if data.len() == 1 {
|
} else if data.len() == 1 {
|
||||||
(1.0, data[0])
|
(1.0, data[0])
|
||||||
} else {
|
} else {
|
||||||
// Cant resolve.
|
// Can't resolve.
|
||||||
return IdxSlope::MAX;
|
return IdxSlope::MAX;
|
||||||
};
|
};
|
||||||
|
|
||||||
// Now we know sd_1 and number of keys. We can use this as a triangle to work out
|
// Now we know sd_1 and number of keys. We can use this as a triangle to work out
|
||||||
// the angle along the hypotenuse. We use this angle - or slope - to show which
|
// the angle along the hypotenuse. We use this angle - or slope - to show which
|
||||||
// elements have the smallest sd_1 and most keys available. Then because this
|
// elements have the smallest sd_1 and most keys available. Then because this
|
||||||
// is bound between 0.0 -> 90.0, we "unfurl" this around a half circle by multipling
|
// is bound between 0.0 -> 90.0, we "unfurl" this around a half circle by multiplying
|
||||||
// by 2. This gives us a little more precision when we drop the decimal point.
|
// by 2. This gives us a little more precision when we drop the decimal point.
|
||||||
let sf = (sd_1 / n_keys).atan().to_degrees() * 2.8;
|
let sf = (sd_1 / n_keys).atan().to_degrees() * 2.8;
|
||||||
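Read alongside the comments above, the sf line is the whole trick. A toy, self-contained restatement (my own sketch, not the exact Kanidm code):

```rust
/// Slope factor: the angle in degrees of the right triangle formed by
/// the number of keys (adjacent) and the ~85th percentile idl length
/// (opposite), stretched by 2.8 so values fill more of the u8 range.
fn slope_factor(sd_1: f64, n_keys: f64) -> u8 {
    let sf = (sd_1 / n_keys).atan().to_degrees() * 2.8;
    sf.clamp(1.0, 254.0) as u8
}

fn main() {
    // name=foo style: many keys, short idls -> small, cheap slope.
    // class=* style: one key, huge idls -> large, costly slope.
    assert!(slope_factor(1.2, 500.0) < slope_factor(400.0, 2.0));
}
```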
|
|
||||||
|
|
|
@ -513,7 +513,7 @@ pub trait BackendTransaction {
|
||||||
FilterResolved::Inclusion(l, _) => {
|
FilterResolved::Inclusion(l, _) => {
|
||||||
// For inclusion to be valid, every term must have *at least* one element present.
|
// For inclusion to be valid, every term must have *at least* one element present.
|
||||||
// This really relies on indexing, and so it's internal only - generally only
|
// This really relies on indexing, and so it's internal only - generally only
|
||||||
// for fully indexed existance queries, such as from refint.
|
// for fully indexed existence queries, such as from refint.
|
||||||
|
|
||||||
// This has a lot in common with an And and Or but not really quite either.
|
// This has a lot in common with an And and Or but not really quite either.
|
||||||
let mut plan = Vec::new();
|
let mut plan = Vec::new();
|
||||||
|
@ -787,7 +787,7 @@ pub trait BackendTransaction {
|
||||||
|
|
||||||
// Check the other entry:attr indexes are valid
|
// Check the other entry:attr indexes are valid
|
||||||
//
|
//
|
||||||
// This is acutally pretty hard to check, because we can check a value *should*
|
// This is actually pretty hard to check, because we can check a value *should*
|
||||||
// exist, but not that a value should NOT be present in the index. Thought needed ...
|
// exist, but not that a value should NOT be present in the index. Thought needed ...
|
||||||
|
|
||||||
// Got here? Ok!
|
// Got here? Ok!
|
||||||
|
@ -1101,7 +1101,7 @@ impl<'a> BackendWriteTransaction<'a> {
|
||||||
let id_list: IDLBitRange = tombstones.iter().map(|e| e.get_id()).collect();
|
let id_list: IDLBitRange = tombstones.iter().map(|e| e.get_id()).collect();
|
||||||
|
|
||||||
// Ensure nothing here exists in the RUV index, else it means
|
// Ensure nothing here exists in the RUV index, else it means
|
||||||
// we didn't trim properly, or some other state violation has occured.
|
// we didn't trim properly, or some other state violation has occurred.
|
||||||
if !((&ruv_idls & &id_list).is_empty()) {
|
if !((&ruv_idls & &id_list).is_empty()) {
|
||||||
admin_error!("RUV still contains entries that are going to be removed.");
|
admin_error!("RUV still contains entries that are going to be removed.");
|
||||||
return Err(OperationError::ReplInvalidRUVState);
|
return Err(OperationError::ReplInvalidRUVState);
|
||||||
|
@ -1770,7 +1770,7 @@ impl Backend {
|
||||||
*/
|
*/
|
||||||
}
|
}
|
||||||
|
|
||||||
// What are the possible actions we'll recieve here?
|
// What are the possible actions we'll receive here?
|
||||||
|
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
mod tests {
|
mod tests {
|
||||||
|
@ -2150,7 +2150,7 @@ mod tests {
|
||||||
|
|
||||||
match result {
|
match result {
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
// if the error is the file is not found, thats what we want so continue,
|
// if the error is the file is not found, that's what we want so continue,
|
||||||
// otherwise return the error
|
// otherwise return the error
|
||||||
match e.kind() {
|
match e.kind() {
|
||||||
std::io::ErrorKind::NotFound => {}
|
std::io::ErrorKind::NotFound => {}
|
||||||
|
@ -2205,7 +2205,7 @@ mod tests {
|
||||||
|
|
||||||
match result {
|
match result {
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
// if the error is the file is not found, thats what we want so continue,
|
// if the error is the file is not found, that's what we want so continue,
|
||||||
// otherwise return the error
|
// otherwise return the error
|
||||||
match e.kind() {
|
match e.kind() {
|
||||||
std::io::ErrorKind::NotFound => {}
|
std::io::ErrorKind::NotFound => {}
|
||||||
|
|
|
@ -259,7 +259,7 @@ pub const JSON_IDM_ACP_PEOPLE_MANAGE_PRIV_V1: &str = r#"{
|
||||||
// 31 - password import modification priv
|
// 31 - password import modification priv
|
||||||
// right now, create requires you to have access to every attribute in a single snapshot,
|
// right now, create requires you to have access to every attribute in a single snapshot,
|
||||||
// so people will need to two step (create then import pw). Later we could add another
|
// so people will need to two step (create then import pw). Later we could add another
|
||||||
// acp that allows the create here too? Should it be seperate?
|
// acp that allows the create here too? Should it be separate?
|
||||||
pub const JSON_IDM_ACP_PEOPLE_ACCOUNT_PASSWORD_IMPORT_PRIV_V1: &str = r#"{
|
pub const JSON_IDM_ACP_PEOPLE_ACCOUNT_PASSWORD_IMPORT_PRIV_V1: &str = r#"{
|
||||||
"attrs": {
|
"attrs": {
|
||||||
"class": [
|
"class": [
|
||||||
|
|
|
@ -445,7 +445,7 @@ pub const JSON_IDM_HP_SYNC_ACCOUNT_MANAGE_PRIV: &str = r#"{
|
||||||
"class": ["group", "object"],
|
"class": ["group", "object"],
|
||||||
"name": ["idm_hp_sync_account_manage_priv"],
|
"name": ["idm_hp_sync_account_manage_priv"],
|
||||||
"uuid": ["00000000-0000-0000-0000-000000000037"],
|
"uuid": ["00000000-0000-0000-0000-000000000037"],
|
||||||
"description": ["Builtin IDM Group for managing sychronisation from external identity sources"],
|
"description": ["Builtin IDM Group for managing synchronisation from external identity sources"],
|
||||||
"member": [
|
"member": [
|
||||||
"00000000-0000-0000-0000-000000000019"
|
"00000000-0000-0000-0000-000000000019"
|
||||||
]
|
]
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
/// Default entries for system_config
|
/// Default entries for system_config
|
||||||
/// This is seperated because the password badlist section may become very long
|
/// This is separated because the password badlist section may become very long
|
||||||
pub const JSON_SYSTEM_CONFIG_V1: &str = r####"{
|
pub const JSON_SYSTEM_CONFIG_V1: &str = r####"{
|
||||||
"attrs": {
|
"attrs": {
|
||||||
"class": ["object", "system_config", "system"],
|
"class": ["object", "system_config", "system"],
|
||||||
|
|
|
@ -128,7 +128,7 @@ impl TryFrom<&str> for Password {
|
||||||
// As we may add more algos, we keep the match algo single for later.
|
// As we may add more algos, we keep the match algo single for later.
|
||||||
#[allow(clippy::single_match)]
|
#[allow(clippy::single_match)]
|
||||||
fn try_from(value: &str) -> Result<Self, Self::Error> {
|
fn try_from(value: &str) -> Result<Self, Self::Error> {
|
||||||
// There is probably a more efficent way to try this given different types?
|
// There is probably a more efficient way to try this given different types?
|
||||||
|
|
||||||
// test django - algo$salt$hash
|
// test django - algo$salt$hash
|
||||||
let django_pbkdf: Vec<&str> = value.split('$').collect();
|
let django_pbkdf: Vec<&str> = value.split('$').collect();
|
||||||
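For orientation only (not taken from this codebase): a modern Django PBKDF2 credential is a dollar-separated string of algorithm, iteration count, salt, and base64 digest, e.g. with made-up values:

```text
pbkdf2_sha256$260000$c2FsdHNhbHQ$bGlrZS10aGlzLWJ1dC1yZWFs=
```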
|
@ -1201,7 +1201,7 @@ mod tests {
|
||||||
*/
|
*/
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* wbrown - 20221104 - I tried to programatically enable the legacy provider, but
|
* wbrown - 20221104 - I tried to programmatically enable the legacy provider, but
|
||||||
* it consistently "did nothing at all", meaning we have to rely on users to enable
|
* it consistently "did nothing at all", meaning we have to rely on users to enable
|
||||||
* this for this test.
|
* this for this test.
|
||||||
*/
|
*/
|
||||||
|
|
|
@ -267,7 +267,7 @@ impl Default for Entry<EntryInit, EntryNew> {
|
||||||
impl Entry<EntryInit, EntryNew> {
|
impl Entry<EntryInit, EntryNew> {
|
||||||
pub fn new() -> Self {
|
pub fn new() -> Self {
|
||||||
Entry {
|
Entry {
|
||||||
// This means NEVER COMMITED
|
// This means NEVER COMMITTED
|
||||||
valid: EntryInit,
|
valid: EntryInit,
|
||||||
state: EntryNew,
|
state: EntryNew,
|
||||||
attrs: Map::new(),
|
attrs: Map::new(),
|
||||||
|
@ -418,7 +418,7 @@ impl Entry<EntryInit, EntryNew> {
|
||||||
}
|
}
|
||||||
"index" => {
|
"index" => {
|
||||||
valueset::from_value_iter(
|
valueset::from_value_iter(
|
||||||
vs.into_iter().map(|v| Value::new_indexs(v.as_str())
|
vs.into_iter().map(|v| Value::new_indexes(v.as_str())
|
||||||
.unwrap_or_else(|| {
|
.unwrap_or_else(|| {
|
||||||
warn!("WARNING: Allowing syntax incorrect attribute to be presented UTF8 string");
|
warn!("WARNING: Allowing syntax incorrect attribute to be presented UTF8 string");
|
||||||
Value::new_utf8(v)
|
Value::new_utf8(v)
|
||||||
|
@ -474,7 +474,7 @@ impl Entry<EntryInit, EntryNew> {
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
ia => {
|
ia => {
|
||||||
warn!("WARNING: Allowing invalid attribute {} to be interpretted as UTF8 string. YOU MAY ENCOUNTER ODD BEHAVIOUR!!!", ia);
|
warn!("WARNING: Allowing invalid attribute {} to be interpreted as UTF8 string. YOU MAY ENCOUNTER ODD BEHAVIOUR!!!", ia);
|
||||||
valueset::from_value_iter(
|
valueset::from_value_iter(
|
||||||
vs.into_iter().map(|v| Value::new_utf8(v))
|
vs.into_iter().map(|v| Value::new_utf8(v))
|
||||||
)
|
)
|
||||||
|
@ -811,7 +811,7 @@ impl<STATE> Entry<EntryInvalid, STATE> {
|
||||||
// be in the may/must set, and would FAIL our normal checks anyway.
|
// be in the may/must set, and would FAIL our normal checks anyway.
|
||||||
|
|
||||||
// The set of "may" is a combination of may and must, since we have already
|
// The set of "may" is a combination of may and must, since we have already
|
||||||
// asserted that all must requirements are fufilled. This allows us to
|
// asserted that all must requirements are fulfilled. This allows us to
|
||||||
// perform extended attribute checking in a single pass.
|
// perform extended attribute checking in a single pass.
|
||||||
let may: Result<Map<&AttrString, &SchemaAttribute>, _> = classes
|
let may: Result<Map<&AttrString, &SchemaAttribute>, _> = classes
|
||||||
.iter()
|
.iter()
|
||||||
|
@ -1048,7 +1048,7 @@ type IdxDiff<'a> =
|
||||||
Vec<Result<(&'a AttrString, IndexType, String), (&'a AttrString, IndexType, String)>>;
|
Vec<Result<(&'a AttrString, IndexType, String), (&'a AttrString, IndexType, String)>>;
|
||||||
|
|
||||||
impl<VALID> Entry<VALID, EntryCommitted> {
|
impl<VALID> Entry<VALID, EntryCommitted> {
|
||||||
/// If this entry has ever been commited to disk, retrieve it's database id number.
|
/// If this entry has ever been committed to disk, retrieve its database id number.
|
||||||
pub fn get_id(&self) -> u64 {
|
pub fn get_id(&self) -> u64 {
|
||||||
self.state.id
|
self.state.id
|
||||||
}
|
}
|
||||||
|
@ -1147,7 +1147,7 @@ impl Entry<EntrySealed, EntryCommitted> {
|
||||||
}
|
}
|
||||||
|
|
||||||
#[inline]
|
#[inline]
|
||||||
/// Given this entry, determine it's relative distinguished named for LDAP compatability.
|
/// Given this entry, determine its relative distinguished name for LDAP compatibility.
|
||||||
pub(crate) fn get_uuid2rdn(&self) -> String {
|
pub(crate) fn get_uuid2rdn(&self) -> String {
|
||||||
self.attrs
|
self.attrs
|
||||||
.get("spn")
|
.get("spn")
|
||||||
|
@ -1420,7 +1420,7 @@ impl Entry<EntrySealed, EntryCommitted> {
|
||||||
changes
|
changes
|
||||||
}
|
}
|
||||||
(Some(pre_vs), Some(post_vs)) => {
|
(Some(pre_vs), Some(post_vs)) => {
|
||||||
// it exists in both, we need to work out the differents within the attr.
|
// it exists in both, we need to work out the difference within the attr.
|
||||||
|
|
||||||
let mut pre_idx_keys = pre_vs.generate_idx_eq_keys();
|
let mut pre_idx_keys = pre_vs.generate_idx_eq_keys();
|
||||||
pre_idx_keys.sort_unstable();
|
pre_idx_keys.sort_unstable();
|
||||||
|
@ -1973,7 +1973,7 @@ impl<VALID, STATE> Entry<VALID, STATE> {
|
||||||
/// multivalue in schema - IE this will *not* fail if the attribute is
|
/// multivalue in schema - IE this will *not* fail if the attribute is
|
||||||
/// empty, yielding an empty array instead.
|
/// empty, yielding an empty array instead.
|
||||||
///
|
///
|
||||||
/// However, the converstion to IndexType is fallaible, so in case of a failure
|
/// However, the conversion to IndexType is fallible, so in case of a failure
|
||||||
/// to convert, an Err is returned.
|
/// to convert, an Err is returned.
|
||||||
#[inline(always)]
|
#[inline(always)]
|
||||||
pub(crate) fn get_ava_opt_index(&self, attr: &str) -> Option<Vec<IndexType>> {
|
pub(crate) fn get_ava_opt_index(&self, attr: &str) -> Option<Vec<IndexType>> {
|
||||||
|
@ -2374,7 +2374,7 @@ where
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Remove an attribute-value pair from this entry. If the ava doesn't exist, we
|
/// Remove an attribute-value pair from this entry. If the ava doesn't exist, we
|
||||||
/// don't do anything else since we are asserting the abscence of a value.
|
/// don't do anything else since we are asserting the absence of a value.
|
||||||
pub(crate) fn remove_ava(&mut self, attr: &str, value: &PartialValue) {
|
pub(crate) fn remove_ava(&mut self, attr: &str, value: &PartialValue) {
|
||||||
self.valid
|
self.valid
|
||||||
.eclog
|
.eclog
|
||||||
|
|
|
@ -11,7 +11,7 @@
|
||||||
//!
|
//!
|
||||||
//! An "event" is generally then passed to the `QueryServer` for processing.
|
//! An "event" is generally then passed to the `QueryServer` for processing.
|
||||||
//! By making these fully self contained units, it means that we can assert
|
//! By making these fully self contained units, it means that we can assert
|
||||||
//! at event creation time we have all the correct data requried to proceed
|
//! at event creation time we have all the correct data required to proceed
|
||||||
//! with the operation, and a clear path to know how to transform events between
|
//! with the operation, and a clear path to know how to transform events between
|
||||||
//! various types.
|
//! various types.
|
||||||
|
|
||||||
|
|
|
@ -607,7 +607,7 @@ impl FilterComp {
|
||||||
// This probably needs some rework
|
// This probably needs some rework
|
||||||
|
|
||||||
// Getting this each recursion could be slow. Maybe
|
// Getting this each recursion could be slow. Maybe
|
||||||
// we need an inner functon that passes the reference?
|
// we need an inner function that passes the reference?
|
||||||
let schema_attributes = schema.get_attributes();
|
let schema_attributes = schema.get_attributes();
|
||||||
// We used to check the attr_name by normalising it (lowercasing)
|
// We used to check the attr_name by normalising it (lowercasing)
|
||||||
// but should we? I think we actually should just call a special
|
// but should we? I think we actually should just call a special
|
||||||
|
@ -1110,7 +1110,7 @@ impl FilterResolved {
|
||||||
}
|
}
|
||||||
// We set the compound filters slope factor to "None" here, because when we do
|
// We set the compound filters slope factor to "None" here, because when we do
|
||||||
// optimise we'll actually fill in the correct slope factors after we sort those
|
// optimise we'll actually fill in the correct slope factors after we sort those
|
||||||
// inner terms in a more optimial way.
|
// inner terms in a more optimal way.
|
||||||
FilterComp::Or(vs) => {
|
FilterComp::Or(vs) => {
|
||||||
let fi: Option<Vec<_>> = vs
|
let fi: Option<Vec<_>> = vs
|
||||||
.into_iter()
|
.into_iter()
|
||||||
|
|
|
@ -237,7 +237,7 @@ impl Account {
|
||||||
let cot = OffsetDateTime::unix_epoch() + ct;
|
let cot = OffsetDateTime::unix_epoch() + ct;
|
||||||
|
|
||||||
let vmin = if let Some(vft) = valid_from {
|
let vmin = if let Some(vft) = valid_from {
|
||||||
// If current time greater than strat time window
|
// If the current time is greater than the start of the time window
|
||||||
vft <= &cot
|
vft <= &cot
|
||||||
} else {
|
} else {
|
||||||
// We have no time, not expired.
|
// We have no time, not expired.
|
||||||
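A self-contained sketch of the window check described here, using plain Durations instead of the OffsetDateTime plumbing in the real code:

```rust
use std::time::Duration;

/// An account is within its valid time when the current time is at or past
/// any valid_from bound and at or before any expire bound.
fn within_valid_window(
    now: Duration,
    valid_from: Option<Duration>,
    expire: Option<Duration>,
) -> bool {
    let vmin = valid_from.map(|vf| vf <= now).unwrap_or(true);
    let vmax = expire.map(|ex| now <= ex).unwrap_or(true);
    vmin && vmax
}

fn main() {
    assert!(within_valid_window(Duration::from_secs(100), None, None));
    assert!(!within_valid_window(
        Duration::from_secs(100),
        Some(Duration::from_secs(200)), // not yet valid
        None
    ));
}
```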
|
@ -428,7 +428,7 @@ impl Account {
|
||||||
|
|
||||||
pub(crate) fn existing_credential_id_list(&self) -> Option<Vec<CredentialID>> {
|
pub(crate) fn existing_credential_id_list(&self) -> Option<Vec<CredentialID>> {
|
||||||
// TODO!!!
|
// TODO!!!
|
||||||
// Used in registrations only for disallowing exsiting credentials.
|
// Used in registrations only for disallowing existing credentials.
|
||||||
None
|
None
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -493,7 +493,7 @@ impl Account {
|
||||||
pub struct DestroySessionTokenEvent {
|
pub struct DestroySessionTokenEvent {
|
||||||
// Who initiated this?
|
// Who initiated this?
|
||||||
pub ident: Identity,
|
pub ident: Identity,
|
||||||
// Who is it targetting?
|
// Who is it targeting?
|
||||||
pub target: Uuid,
|
pub target: Uuid,
|
||||||
// Which token id.
|
// Which token id.
|
||||||
pub token_id: Uuid,
|
pub token_id: Uuid,
|
||||||
|
@ -617,7 +617,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
|
||||||
pub struct ListUserAuthTokenEvent {
|
pub struct ListUserAuthTokenEvent {
|
||||||
// Who initiated this?
|
// Who initiated this?
|
||||||
pub ident: Identity,
|
pub ident: Identity,
|
||||||
// Who is it targetting?
|
// Who is it targeting?
|
||||||
pub target: Uuid,
|
pub target: Uuid,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -32,7 +32,7 @@ use crate::idm::AuthState;
|
||||||
use crate::prelude::*;
|
use crate::prelude::*;
|
||||||
|
|
||||||
// Each CredHandler takes one or more credentials and determines if the
|
// Each CredHandler takes one or more credentials and determines if the
|
||||||
// handlers requirements can be 100% fufilled. This is where MFA or other
|
// handlers requirements can be 100% fulfilled. This is where MFA or other
|
||||||
// auth policies would exist, but each credHandler has to be a whole
|
// auth policies would exist, but each credHandler has to be a whole
|
||||||
// encapsulated unit of function.
|
// encapsulated unit of function.
|
||||||
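A minimal sketch of that "encapsulated unit" idea with illustrative types (not the real CredHandler): the handler alone decides when every factor it owns has been satisfied.

```rust
#[derive(PartialEq)]
enum Factor {
    Password,
    Totp,
}

struct MfaHandler {
    required: Vec<Factor>,
    satisfied: Vec<Factor>,
}

impl MfaHandler {
    /// 100% fulfilled only when every required factor has been presented.
    fn fulfilled(&self) -> bool {
        self.required.iter().all(|f| self.satisfied.contains(f))
    }
}
```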
|
|
||||||
|
@ -534,6 +534,7 @@ impl CredHandler {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[allow(clippy::large_enum_variant)]
|
||||||
#[derive(Clone)]
|
#[derive(Clone)]
|
||||||
/// This interleaves with the client auth step. The client sends an "init"
|
/// This interleaves with the client auth step. The client sends an "init"
|
||||||
/// and we go to the init state, sending back the list of what can proceed.
|
/// and we go to the init state, sending back the list of what can proceed.
|
||||||
|
@ -672,7 +673,7 @@ impl AuthSession {
|
||||||
// time: &Duration,
|
// time: &Duration,
|
||||||
// webauthn: &WebauthnCore,
|
// webauthn: &WebauthnCore,
|
||||||
) -> Result<AuthState, OperationError> {
|
) -> Result<AuthState, OperationError> {
|
||||||
// Given some auth mech, select which credential(s) are apropriate
|
// Given some auth mech, select which credential(s) are appropriate
|
||||||
// and attempt to use them.
|
// and attempt to use them.
|
||||||
|
|
||||||
// Today we only select one, but later we could have *multiple* that
|
// Today we only select one, but later we could have *multiple* that
|
||||||
|
@ -702,7 +703,7 @@ impl AuthSession {
|
||||||
(
|
(
|
||||||
None,
|
None,
|
||||||
Err(OperationError::InvalidAuthState(
|
Err(OperationError::InvalidAuthState(
|
||||||
"unable to negotitate credentials".to_string(),
|
"unable to negotiate credentials".to_string(),
|
||||||
)),
|
)),
|
||||||
)
|
)
|
||||||
} else {
|
} else {
|
||||||
|
@ -860,7 +861,7 @@ impl AuthSession {
|
||||||
//
|
//
|
||||||
// The lockouts could also be an in-memory concept too?
|
// The lockouts could also be an in-memory concept too?
|
||||||
|
|
||||||
// If this suceeds audit?
|
// If this succeeds audit?
|
||||||
// If success, to authtoken?
|
// If success, to authtoken?
|
||||||
|
|
||||||
response
|
response
|
||||||
|
|
|
@ -212,7 +212,7 @@ pub(crate) type CredentialUpdateSessionMutex = Arc<Mutex<CredentialUpdateSession
|
||||||
pub struct InitCredentialUpdateIntentEvent {
|
pub struct InitCredentialUpdateIntentEvent {
|
||||||
// Who initiated this?
|
// Who initiated this?
|
||||||
pub ident: Identity,
|
pub ident: Identity,
|
||||||
// Who is it targetting?
|
// Who is it targeting?
|
||||||
pub target: Uuid,
|
pub target: Uuid,
|
||||||
// How long is it valid for?
|
// How long is it valid for?
|
||||||
pub max_ttl: Option<Duration>,
|
pub max_ttl: Option<Duration>,
|
||||||
|
@ -418,7 +418,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
|
||||||
|
|
||||||
// Mark that we have created an intent token on the user.
|
// Mark that we have created an intent token on the user.
|
||||||
// ⚠️ -- remember, there is a risk, very low, but still a risk of collision of the intent_id.
|
// ⚠️ -- remember, there is a risk, very low, but still a risk of collision of the intent_id.
|
||||||
// instead of enforcing unique, which would divulge that the collision occured, we
|
// instead of enforcing unique, which would divulge that the collision occurred, we
|
||||||
// write anyway, and instead on the intent access path we invalidate IF the collision
|
// write anyway, and instead on the intent access path we invalidate IF the collision
|
||||||
// occurs.
|
// occurs.
|
||||||
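A sketch of that collision-tolerant pattern (illustrative, not the Kanidm code): writes never assert uniqueness, and the access path treats any multi-match as a collision to invalidate.

```rust
fn resolve_intent(matching_entry_ids: &[u64]) -> Result<u64, &'static str> {
    match matching_entry_ids {
        [single] => Ok(*single),
        [] => Err("no such intent token"),
        // More than one match means the (very unlikely) collision happened -
        // invalidate on the access path rather than divulging it at write time.
        _ => Err("intent token collision - invalidated"),
    }
}
```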
let mut modlist = ModifyList::new_append(
|
let mut modlist = ModifyList::new_append(
|
||||||
|
@ -589,7 +589,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
None => {
|
None => {
|
||||||
admin_error!("Corruption may have occured - index yielded an entry for intent_id, but the entry does not contain that intent_id");
|
admin_error!("Corruption may have occurred - index yielded an entry for intent_id, but the entry does not contain that intent_id");
|
||||||
return Err(OperationError::InvalidState);
|
return Err(OperationError::InvalidState);
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
@ -1970,7 +1970,7 @@ mod tests {
|
||||||
|
|
||||||
let cutxn = idms.cred_update_transaction();
|
let cutxn = idms.cred_update_transaction();
|
||||||
|
|
||||||
// Now fake going back in time .... allows the tokne to decrypt, but the sesion
|
// Now fake going back in time .... allows the token to decrypt, but the session
|
||||||
// is gone anyway!
|
// is gone anyway!
|
||||||
let c_status = cutxn
|
let c_status = cutxn
|
||||||
.credential_update_status(&cust, ct)
|
.credential_update_status(&cust, ct)
|
||||||
|
@ -2264,7 +2264,7 @@ mod tests {
|
||||||
));
|
));
|
||||||
|
|
||||||
// Now good to go, we need to now add our backup codes.
|
// Now good to go, we need to now add our backup codes.
|
||||||
// Whats the right way to get these back?
|
// What's the right way to get these back?
|
||||||
let c_status = cutxn
|
let c_status = cutxn
|
||||||
.credential_primary_init_backup_codes(&cust, ct)
|
.credential_primary_init_backup_codes(&cust, ct)
|
||||||
.expect("Failed to update the primary cred password");
|
.expect("Failed to update the primary cred password");
|
||||||
|
@ -2386,7 +2386,7 @@ mod tests {
|
||||||
|
|
||||||
let c_status = cutxn
|
let c_status = cutxn
|
||||||
.credential_update_cancel_mfareg(&cust, ct)
|
.credential_update_cancel_mfareg(&cust, ct)
|
||||||
.expect("Failed to cancel inflight totp change");
|
.expect("Failed to cancel in-flight totp change");
|
||||||
|
|
||||||
assert!(matches!(c_status.mfaregstate, MfaRegStateStatus::None));
|
assert!(matches!(c_status.mfaregstate, MfaRegStateStatus::None));
|
||||||
assert!(c_status.can_commit);
|
assert!(c_status.can_commit);
|
||||||
|
@@ -2404,7 +2404,7 @@ mod tests {

 // - setup webauthn
 // - remove webauthn
-// - test mulitple webauthn token.
+// - test multiple webauthn token.

 #[idm_test]
 async fn test_idm_credential_update_onboarding_create_new_passkey(
@@ -2445,7 +2445,7 @@ mod tests {

 assert!(matches!(c_status.mfaregstate, MfaRegStateStatus::None));
 assert!(matches!(
-// Shuld be none.
+// Should be none.
 c_status.primary.as_ref(),
 None
 ));
@@ -50,12 +50,12 @@ macro_rules! try_from_account_e {
 let f = filter!(f_or(
 riter.map(|u| f_eq("uuid", PartialValue::Uuid(u))).collect()
 ));
-let ges: Vec<_> = $qs.internal_search(f).map_err(|e| {
+let group_entries: Vec<_> = $qs.internal_search(f).map_err(|e| {
 admin_error!(?e, "internal search failed");
 e
 })?;
 // Now convert the group entries to groups.
-let groups: Result<Vec<_>, _> = ges
+let groups: Result<Vec<_>, _> = group_entries
 .iter()
 .map(|e| Group::try_from_entry(e.as_ref()))
 .collect();
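Beyond the rename, the pattern this hunk touches is worth noting: mapping a batch of entries through a fallible conversion and collecting into `Result<Vec<_>, _>`, which short-circuits on the first failed conversion. A self-contained sketch of that idiom; the `Group` type and conversion below are toy stand-ins, not kanidm's:

```rust
#[derive(Debug)]
struct Group(u32);

// Toy fallible conversion standing in for Group::try_from_entry.
fn try_from_entry(e: &u32) -> Result<Group, String> {
    if *e % 2 == 0 {
        Ok(Group(*e))
    } else {
        Err(format!("bad entry {e}"))
    }
}

fn main() {
    let group_entries = vec![2u32, 4, 6];
    // Collecting an iterator of Results into a Result stops at the first Err.
    let groups: Result<Vec<Group>, String> =
        group_entries.iter().map(try_from_entry).collect();
    assert!(groups.is_ok());
}
```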
@@ -720,7 +720,7 @@ mod tests {
 .unwrap()
 .is_none());

-// Non-existant and invalid DNs
+// Non-existent and invalid DNs
 assert!(task::block_on(ldaps.do_bind(
 idms,
 "spn=admin@example.com,dc=clownshoes,dc=example,dc=com",
@@ -1,4 +1,4 @@
-//! The Identity Management components that are layered ontop of the [QueryServer](crate::server::QueryServer). These allow
+//! The Identity Management components that are layered on top of the [QueryServer](crate::server::QueryServer). These allow
 //! rich and expressive events and transformations that are lowered into the correct/relevant
 //! actions in the [QueryServer](crate::server::QueryServer). Generally this is where "Identity Management" policy and code
 //! is implemented.
@@ -789,7 +789,7 @@ impl<'a> IdmServerProxyReadTransaction<'a> {
 Vec::with_capacity(0)
 };

-// Subseqent we then return an encrypted session handle which allows
+// Subsequent we then return an encrypted session handle which allows
 // the user to indicate their consent to this authorisation.
 //
 // This session handle is what we use in "permit" to generate the redirect.
@@ -1566,7 +1566,7 @@ fn parse_basic_authz(client_authz: &str) -> Result<(String, String), Oauth2Error
 Oauth2Error::AuthenticationRequired
 })?;
 let secret = split_iter.next().ok_or_else(|| {
-admin_error!("Basic authz invalid format (missing ':' seperator?)");
+admin_error!("Basic authz invalid format (missing ':' separator?)");
 Oauth2Error::AuthenticationRequired
 })?;

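The function this hunk touches splits a decoded `user:secret` credential on the first ':'. A sketch of just that separator handling using std's `split_once`; the real function also base64-decodes the Authorization header first and returns an `Oauth2Error`, so the error type here is a simplification:

```rust
// A sketch of the separator handling only; error type simplified.
fn split_basic(decoded: &str) -> Result<(String, String), &'static str> {
    decoded
        .split_once(':')
        .map(|(user, secret)| (user.to_string(), secret.to_string()))
        .ok_or("Basic authz invalid format (missing ':' separator?)")
}

fn main() {
    assert_eq!(
        split_basic("alice:s3cret"),
        Ok(("alice".to_string(), "s3cret".to_string()))
    );
    assert!(split_basic("no-separator").is_err());
}
```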
@@ -2514,7 +2514,7 @@ mod tests {
 assert!(matches!(e, Oauth2Error::AuthenticationRequired));
 assert!(idms_prox_write.commit().is_ok());

-// Now submit a non-existant/invalid token. Does not affect our tokens validity.
+// Now submit a non-existent/invalid token. Does not affect our tokens validity.
 let mut idms_prox_write = task::block_on(idms.proxy_write(ct));
 let revoke_request = TokenRevokeRequest {
 token: "this is an invalid token, nothing will happen!".to_string(),
@@ -74,7 +74,7 @@ impl RadiusAccount {
 let cot = OffsetDateTime::unix_epoch() + ct;

 let vmin = if let Some(vft) = &self.valid_from {
-// If current time greater than strat time window
+// If current time greater than start time window
 vft < &cot
 } else {
 // We have no time, not expired.
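The corrected comment refers to a start-of-window check: the account is valid only once `valid_from` is in the past and (elsewhere in the same function) `expire` is still in the future. A minimal sketch with std `Duration`s standing in for the `OffsetDateTime` values:

```rust
use std::time::Duration;

// valid_from/expire as optional instants, ct as time since the epoch.
fn is_within_window(
    ct: Duration,
    valid_from: Option<Duration>,
    expire: Option<Duration>,
) -> bool {
    // No start bound means already valid; otherwise the window must have opened.
    let started = valid_from.map_or(true, |vf| vf < ct);
    // No expiry means never expires; otherwise expiry must still be ahead.
    let not_expired = expire.map_or(true, |ex| ct < ex);
    started && not_expired
}

fn main() {
    let now = Duration::from_secs(1_000);
    assert!(is_within_window(now, Some(Duration::from_secs(10)), None));
    assert!(!is_within_window(now, Some(Duration::from_secs(2_000)), None));
}
```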
@@ -98,7 +98,7 @@ impl SyncAccount {
 pub struct GenerateScimSyncTokenEvent {
 // Who initiated this?
 pub ident: Identity,
-// Who is it targetting?
+// Who is it targeting?
 pub target: Uuid,
 // The label
 pub label: String,
@@ -247,7 +247,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 })?;

 let sync_account = SyncAccount::try_from_entry_rw(&entry).map_err(|e| {
-admin_error!(?e, "Failed to covert sync account");
+admin_error!(?e, "Failed to convert sync account");
 e
 })?;
 let sync_uuid = sync_account.uuid;
@@ -290,7 +290,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 // Importantly, we have to do this for items that are in the recycle bin!

 // First, get the set of uuids that exist. We need this so we have the set of uuids we'll
-// be deleteing *at the end*.
+// be deleting *at the end*.
 let f_all_sync = filter_all!(f_and!([
 f_eq("class", PVCLASS_SYNC_OBJECT.clone()),
 f_eq("sync_parent_uuid", PartialValue::Refer(sync_uuid))
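The comment fixed here describes the bookkeeping: collect every existing sync-owned uuid up front so that whatever the sync run does not touch can be deleted at the end. In miniature, that is a set difference (uuids simplified to integers):

```rust
use std::collections::BTreeSet;

// Everything that existed before the run, minus everything the run touched,
// is the delete set applied *at the end*.
fn to_delete(existing: &BTreeSet<u64>, seen: &BTreeSet<u64>) -> BTreeSet<u64> {
    existing.difference(seen).copied().collect()
}

fn main() {
    let existing = BTreeSet::from([1, 2, 3]);
    let seen = BTreeSet::from([2, 3, 4]);
    assert_eq!(to_delete(&existing, &seen), BTreeSet::from([1]));
}
```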
@@ -370,7 +370,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 })?;

 let sync_account = SyncAccount::try_from_entry_rw(&entry).map_err(|e| {
-admin_error!(?e, "Failed to covert sync account");
+admin_error!(?e, "Failed to convert sync account");
 e
 })?;
 let sync_uuid = sync_account.uuid;
@@ -413,7 +413,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 // Importantly, we have to do this for items that are in the recycle bin!

 // First, get the set of uuids that exist. We need this so we have the set of uuids we'll
-// be deleteing *at the end*.
+// be deleting *at the end*.
 let f_all_sync = filter_all!(f_and!([
 f_eq("class", PVCLASS_SYNC_OBJECT.clone()),
 f_eq("sync_parent_uuid", PartialValue::Refer(sync_uuid))
@@ -649,7 +649,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 if fail {
 return Err(OperationError::InvalidEntryState);
 }
-// From that set of entries, parition to entries that exist and are
+// From that set of entries, partition to entries that exist and are
 // present, and entries that do not yet exist.
 //
 // We can't easily parititon here because we need to iterate over the
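The corrected comment names the partition step: incoming change entries are split into those whose uuid already exists and those that must be created. A sketch with `Iterator::partition` (uuids simplified to integers; the real code notes it cannot partition this simply):

```rust
use std::collections::BTreeSet;

// Returns (already-present, to-create), preserving input order.
fn partition_changes(changes: Vec<u64>, existing: &BTreeSet<u64>) -> (Vec<u64>, Vec<u64>) {
    changes.into_iter().partition(|u| existing.contains(u))
}

fn main() {
    let existing = BTreeSet::from([1, 3]);
    let (present, create) = partition_changes(vec![1, 2, 3, 4], &existing);
    assert_eq!(present, vec![1, 3]);
    assert_eq!(create, vec![2, 4]);
}
```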
@@ -690,7 +690,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 //
 // For entries that do exist, mod their external_id
 //
-// Basicly we just set this up as a batch modify and submit it.
+// Basically we just set this up as a batch modify and submit it.
 self.qs_write
 .internal_batch_modify(change_entries.iter().filter_map(|(u, scim_ent)| {
 // If the entry has an external id
@@ -902,7 +902,7 @@ impl<'a> IdmServerAuthTransaction<'a> {
 ae: &AuthEvent,
 ct: Duration,
 ) -> Result<AuthResult, OperationError> {
-trace!(?ae, "Recieved");
+trace!(?ae, "Received");
 // Match on the auth event, to see what we need to do.

 match &ae.step {
@@ -1654,7 +1654,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 })?
 };

-// If we got here, then pre-apply succedded, and that means access control
+// If we got here, then pre-apply succeeded, and that means access control
 // passed. Now we can do the extra checks.

 // Check the password quality.
@@ -1733,7 +1733,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 })?
 };

-// If we got here, then pre-apply succedded, and that means access control
+// If we got here, then pre-apply succeeded, and that means access control
 // passed. Now we can do the extra checks.

 self.check_password_quality(pce.cleartext.as_str(), account.related_inputs().as_slice())
@@ -2291,7 +2291,7 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-continue result!"
+"A critical error has occurred! We have a non-continue result!"
 );
 panic!();
 }
@@ -2301,7 +2301,7 @@ mod tests {
 }
 Err(e) => {
 // Should not occur!
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 panic!();
 }
 };
@@ -2338,14 +2338,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-continue result!"
+"A critical error has occurred! We have a non-continue result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 // Should not occur!
 panic!();
 }
@@ -2379,14 +2379,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-succcess result!"
+"A critical error has occurred! We have a non-succcess result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 // Should not occur!
 panic!();
 }
@@ -2518,13 +2518,13 @@ mod tests {
 token
 }
 _ => {
-error!("A critical error has occured! We have a non-succcess result!");
+error!("A critical error has occurred! We have a non-succcess result!");
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 // Should not occur!
 panic!();
 }
@@ -2587,14 +2587,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-succcess result!"
+"A critical error has occurred! We have a non-succcess result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 // Should not occur!
 panic!();
 }
@@ -2644,14 +2644,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-denied result!"
+"A critical error has occurred! We have a non-denied result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 // Should not occur!
 panic!();
 }
@@ -2958,7 +2958,7 @@ mod tests {
 assert!(idms_prox_write.commit().is_ok());

 // And auth should now fail due to the lack of PW material (note that
-// softlocking WONT kick in because the cred_uuid is gone!)
+// softlocking WON'T kick in because the cred_uuid is gone!)
 let mut idms_auth = idms.auth();
 let a3 = task::block_on(
 idms_auth.auth_unix(&uuae_good, Duration::from_secs(TEST_CURRENT_TIME)),
@@ -3119,7 +3119,7 @@ mod tests {
 fn test_idm_account_valid_from_expire() {
 run_idm_test!(
 |qs: &QueryServer, idms: &IdmServer, _idms_delayed: &mut IdmServerDelayed| {
-// Any account taht is not yet valrid / expired can't auth.
+// Any account that is not yet valrid / expired can't auth.

 task::block_on(init_admin_w_password(qs, TEST_PASSWORD))
 .expect("Failed to setup admin account");
@@ -3327,14 +3327,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-denied result!"
+"A critical error has occurred! We have a non-denied result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 panic!();
 }
 };
@@ -3416,14 +3416,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-succcess result!"
+"A critical error has occurred! We have a non-succcess result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 // Should not occur!
 panic!();
 }
@@ -3489,14 +3489,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-denied result!"
+"A critical error has occurred! We have a non-denied result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 panic!();
 }
 };
@@ -3525,14 +3525,14 @@ mod tests {
 }
 _ => {
 error!(
-"A critical error has occured! We have a non-denied result!"
+"A critical error has occurred! We have a non-denied result!"
 );
 panic!();
 }
 }
 }
 Err(e) => {
-error!("A critical error has occured! {:?}", e);
+error!("A critical error has occurred! {:?}", e);
 panic!();
 }
 };
@@ -135,14 +135,14 @@ impl ServiceAccount {
 pub struct ListApiTokenEvent {
 // Who initiated this?
 pub ident: Identity,
-// Who is it targetting?
+// Who is it targeting?
 pub target: Uuid,
 }

 pub struct GenerateApiTokenEvent {
 // Who initiated this?
 pub ident: Identity,
-// Who is it targetting?
+// Who is it targeting?
 pub target: Uuid,
 // The label
 pub label: String,
@@ -169,7 +169,7 @@ impl GenerateApiTokenEvent {
 pub struct DestroyApiTokenEvent {
 // Who initiated this?
 pub ident: Identity,
-// Who is it targetting?
+// Who is it targeting?
 pub target: Uuid,
 // Which token id.
 pub token_id: Uuid,
@@ -204,7 +204,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
 let session_id = Uuid::new_v4();
 let issued_at = time::OffsetDateTime::unix_epoch() + ct;

-// Normalise to UTC incase it was provided as something else.
+// Normalise to UTC in case it was provided as something else.
 let expiry = gte.expiry.map(|odt| odt.to_offset(time::UtcOffset::UTC));

 let purpose = if gte.read_write {
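The line below the fixed comment shows the normalisation itself: `to_offset` re-expresses the same instant in UTC, so storage and comparison no longer depend on whatever offset the caller supplied. A sketch assuming the `time` crate's 0.3 API:

```rust
use time::{OffsetDateTime, UtcOffset};

// to_offset keeps the instant; only the attached offset changes.
fn normalise_expiry(expiry: Option<OffsetDateTime>) -> Option<OffsetDateTime> {
    expiry.map(|odt| odt.to_offset(UtcOffset::UTC))
}

fn main() {
    let odt = OffsetDateTime::now_utc();
    // Equality on OffsetDateTime compares instants, so this holds.
    assert_eq!(normalise_expiry(Some(odt)), Some(odt));
}
```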
@@ -372,9 +372,13 @@ macro_rules! try_from_account_group_e {
 f_eq("class", PVCLASS_GROUP.clone()),
 f_or(riter.map(|u| f_eq("uuid", PartialValue::Uuid(u))).collect())
 ]));
-let ges: Vec<_> = $qs.internal_search(f)?;
+let group_entries: Vec<_> = $qs.internal_search(f)?;
 let groups: Result<Vec<_>, _> = iter::once(Ok(upg))
-.chain(ges.iter().map(|e| UnixGroup::try_from_entry(e.as_ref())))
+.chain(
+    group_entries
+        .iter()
+        .map(|e| UnixGroup::try_from_entry(e.as_ref())),
+)
 .collect();
 groups
 }
@@ -88,8 +88,8 @@ impl Plugin for Base {
 // an internal operation.
 if !ce.ident.is_internal() {
 // TODO: We can't lazy const this as you can't borrow the type down to what
-// range and contains on btreeset need, but can we possibly make these constly
-// part of the struct somehow at init. rather than needing to parse a lot?
+// range and contains on btreeset need, but can we possibly make these
+// part of the struct at init. rather than needing to parse a lot?
 // The internal set is bounded by: UUID_ADMIN -> UUID_ANONYMOUS
 // Sadly we need to allocate these to strings to make references, sigh.
 let overlap: usize = cand_uuid.range(UUID_ADMIN..UUID_ANONYMOUS).count();
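The last line of this hunk is the interesting part: reserved uuids are detected by asking a `BTreeSet` for the sub-range between two sentinel uuids, rather than testing every candidate individually. A sketch of that range trick; the bounds below are stand-ins, not the real UUID_ADMIN/UUID_ANONYMOUS values:

```rust
use std::collections::BTreeSet;
use uuid::Uuid;

// Stand-in bounds; the real code uses UUID_ADMIN..UUID_ANONYMOUS.
fn overlaps_reserved(cand_uuid: &BTreeSet<Uuid>) -> bool {
    let low = Uuid::from_u128(0);
    let high = Uuid::from_u128(0x0000_0000_0000_0000_0000_ffff_ffff_ffff);
    // range() walks only the candidates inside [low, high), not the whole set.
    cand_uuid.range(low..high).count() > 0
}

fn main() {
    let cand = BTreeSet::from([Uuid::from_u128(5), Uuid::from_u128(u128::MAX)]);
    assert!(overlaps_reserved(&cand));
}
```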
@@ -143,7 +143,7 @@ impl Plugin for Base {
 }
 }
 Err(e) => {
-admin_error!("Error occured checking UUID existance. {:?}", e);
+admin_error!("Error occurred checking UUID existence. {:?}", e);
 return Err(e);
 }
 }
@@ -335,7 +335,7 @@ mod tests {
 );
 }

-// check unparseable uuid
+// check unparsable uuid
 #[test]
 fn test_pre_create_uuid_invalid() {
 let preload: Vec<Entry<EntryInit, EntryNew>> = Vec::new();
@@ -58,7 +58,7 @@ impl Domain {
 if e.attribute_equality("class", &PVCLASS_DOMAIN_INFO)
 && e.attribute_equality("uuid", &PVUUID_DOMAIN_INFO)
 {
-// We always set this, because the DB uuid is authorative.
+// We always set this, because the DB uuid is authoritative.
 let u = Value::Uuid(qs.get_domain_uuid());
 e.set_ava("domain_uuid", once(u));
 trace!("plugin_domain: Applying uuid transform");
@@ -98,7 +98,7 @@ fn do_memberof(
 fn apply_memberof(
 qs: &mut QueryServerWriteTransaction,
 // TODO: Experiment with HashSet/BTreeSet here instead of vec.
-// May require https://github.com/rust-lang/rust/issues/62924 to allow poping
+// May require https://github.com/rust-lang/rust/issues/62924 to allow popping
 mut group_affect: Vec<Uuid>,
 ) -> Result<(), OperationError> {
 trace!(" => entering apply_memberof");
@@ -189,7 +189,7 @@ fn apply_memberof(
 trace!("=> processing affected uuid {:?}", auuid);
 debug_assert!(!tgte.attribute_equality("class", &PVCLASS_GROUP));
 do_memberof(qs, auuid, &mut tgte)?;
-// Only write if a change occured.
+// Only write if a change occurred.
 if pre.get_ava_set("memberof") != tgte.get_ava_set("memberof")
 || pre.get_ava_set("directmemberof") != tgte.get_ava_set("directmemberof")
 {
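The fixed comment states the write-avoidance rule visible in the condition below it: persist only when the recomputed memberof/directmemberof sets actually differ from what is stored. In miniature, with a toy `Entry` standing in for kanidm's:

```rust
use std::collections::BTreeSet;

struct Entry {
    memberof: BTreeSet<u64>,
    directmemberof: BTreeSet<u64>,
}

// Skip the write entirely when nothing observable changed.
fn needs_write(pre: &Entry, post: &Entry) -> bool {
    pre.memberof != post.memberof || pre.directmemberof != post.directmemberof
}

fn main() {
    let a = Entry { memberof: BTreeSet::from([1]), directmemberof: BTreeSet::new() };
    let b = Entry { memberof: BTreeSet::from([1, 2]), directmemberof: BTreeSet::new() };
    assert!(needs_write(&a, &b));
}
```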
@@ -51,7 +51,7 @@ trait Plugin {

 fn post_create(
 _qs: &mut QueryServerWriteTransaction,
-// List of what we commited that was valid?
+// List of what we committed that was valid?
 _cand: &[EntrySealedCommitted],
 _ce: &CreateEvent,
 ) -> Result<(), OperationError> {
@@ -47,7 +47,7 @@ impl ReferentialIntegrity {
 e
 })?;

-// Is the existance of all id's confirmed?
+// Is the existence of all id's confirmed?
 if b {
 Ok(())
 } else {
@@ -70,7 +70,7 @@ impl Plugin for ReferentialIntegrity {
 //
 // There is a situation to account for which is that a create or mod
 // may introduce the entry which is also to be referenced in the same
-// transaction. Rather than have seperate verification paths - one to
+// transaction. Rather than have separate verification paths - one to
 // check the UUID is in the cand set, and one to check the UUID exists
 // in the DB, we do the "correct" thing, write to the DB, and then assert
 // that the DB content is complete and valid instead.
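The comment fixed here explains the design: rather than one verification path for references satisfied inside the same transaction and another for references already in the DB, the write happens first and completeness is asserted afterwards. A sketch of that post-write assertion, with integer ids standing in for uuids:

```rust
use std::collections::HashSet;

// After the batch is committed, every referenced id must resolve; references
// created within the same transaction need no special casing.
fn verify_references(db: &HashSet<u64>, referenced: &[u64]) -> Result<(), u64> {
    match referenced.iter().find(|r| !db.contains(*r)) {
        Some(missing) => Err(**missing),
        None => Ok(()),
    }
}

fn main() {
    let db = HashSet::from([1, 2]);
    assert_eq!(verify_references(&db, &[1, 2]), Ok(()));
    assert_eq!(verify_references(&db, &[1, 9]), Err(9));
}
```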
@@ -191,7 +191,7 @@ impl Spn {
 );

 // All we do is purge spn, and allow the plugin to recreate. Neat! It's also all still
-// within the transaction, just incase!
+// within the transaction, just in case!
 qs.internal_modify(
 &filter!(f_or!([
 f_eq("class", PVCLASS_GROUP.clone()),
@@ -20,7 +20,7 @@ pub struct EntryChangelog {
 /// A subtle and important piece of information is that an anchor can be considered
 /// as the "state as existing between two Cid's". This means for Cid X, this state is
 /// the "moment before X". This is important, as for a create we define the initial anchor
-/// as "nothing". It's means for the anchor at time X, that changes that occured at time
+/// as "nothing". It's means for the anchor at time X, that changes that occurred at time
 /// X have NOT been replayed and applied!
 anchors: BTreeMap<Cid, State>,
 changes: BTreeMap<Cid, Change>,
@@ -34,7 +34,7 @@ impl fmt::Display for EntryChangelog {
 }
 */

-/// A change defines the transitions that occured within this Cid (transaction). A change is applied
+/// A change defines the transitions that occurred within this Cid (transaction). A change is applied
 /// as a whole, or rejected during the replay process.
 #[derive(Debug, Clone)]
 pub struct Change {
@@ -512,7 +512,7 @@ impl EntryChangelog {
 /*
 fn insert_anchor(&mut self, cid: Cid, entry_state: State) {
 // When we insert an anchor, we have to remove all subsequent anchors (but not
-// the preceeding ones.)
+// the preceding ones.)
 let _ = self.anchors.split_off(&cid);
 self.anchors.insert(cid.clone(), entry_state);
 }
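The (commented-out) function in this hunk leans on `BTreeMap::split_off`: everything at or after the new anchor's cid is discarded, leaving only the preceding anchors, and then the new anchor is inserted. In miniature, with integers standing in for `Cid` and `State`:

```rust
use std::collections::BTreeMap;

// split_off(&cid) keeps keys strictly below cid in `anchors` and returns
// the rest, which we drop; the new anchor then lands at cid.
fn insert_anchor(anchors: &mut BTreeMap<u64, &'static str>, cid: u64, state: &'static str) {
    let _ = anchors.split_off(&cid);
    anchors.insert(cid, state);
}

fn main() {
    let mut anchors = BTreeMap::from([(1, "a"), (5, "b"), (9, "c")]);
    insert_anchor(&mut anchors, 5, "B");
    assert_eq!(anchors, BTreeMap::from([(1, "a"), (5, "B")]));
}
```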
@@ -521,7 +521,7 @@ impl EntryChangelog {
 pub fn trim_up_to(&mut self, cid: &Cid) -> Result<(), OperationError> {
 // Build a new anchor that is equal or less than this cid.
 // In other words, the cid we are trimming to, should be remaining
-// in the CL, and we should have an anchor that preceeds it.
+// in the CL, and we should have an anchor that precedes it.
 let (entry_state, rejected) = self.replay(Unbounded, Excluded(cid)).map_err(|e| {
 error!(?e);
 e
@@ -155,7 +155,7 @@ impl<'a> ReplicationUpdateVectorTransaction for ReplicationUpdateVectorReadTrans
 impl<'a> ReplicationUpdateVectorWriteTransaction<'a> {
 pub fn rebuild(&mut self, entries: &[Arc<EntrySealedCommitted>]) -> Result<(), OperationError> {
 // Entries and their internal changelogs are the "source of truth" for all changes
-// that have ever occured and are stored on this server. So we use them to rebuild our RUV
+// that have ever occurred and are stored on this server. So we use them to rebuild our RUV
 // here!
 let mut rebuild_ruv: BTreeMap<Cid, IDLBitRange> = BTreeMap::new();

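The fixed comment describes the rebuild: every entry's changelog is the source of truth, so the RUV is reconstructed by accumulating, per Cid, the ids of the entries changed at that Cid. A sketch with integers standing in for `Cid` and `IDLBitRange`:

```rust
use std::collections::BTreeMap;

// entries: (entry_id, cids at which that entry changed).
fn rebuild_ruv(entries: &[(u64, Vec<u64>)]) -> BTreeMap<u64, Vec<u64>> {
    let mut ruv: BTreeMap<u64, Vec<u64>> = BTreeMap::new();
    for (entry_id, cids) in entries {
        for cid in cids {
            // Accumulate the id list for this change id.
            ruv.entry(*cid).or_default().push(*entry_id);
        }
    }
    ruv
}

fn main() {
    let ruv = rebuild_ruv(&[(10, vec![1, 2]), (11, vec![2])]);
    assert_eq!(ruv[&2], vec![10, 11]);
}
```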
@@ -71,7 +71,7 @@ pub struct SchemaReadTransaction {
 ref_cache: CowCellReadTxn<HashMap<AttrString, SchemaAttribute>>,
 }

-/// An item reperesenting an attribute and the rules that enforce it. These rules enforce if an
+/// An item representing an attribute and the rules that enforce it. These rules enforce if an
 /// attribute on an [`Entry`] may be single or multi value, must be unique amongst all other types
 /// of this attribute, if the attribute should be [`indexed`], and what type of data [`syntax`] it may hold.
 ///
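The doc comment fixed here lists the rules a schema attribute carries. A simplified stand-in for that shape; `multivalue` and `unique` mirror field names visible elsewhere in this diff, while the remaining fields and the `Syntax` variants are illustrative only:

```rust
// A simplified sketch of the rules named in the doc comment above.
struct SchemaAttributeSketch {
    name: String,
    description: String,
    multivalue: bool, // single or multi value
    unique: bool,     // unique amongst all other values of this attribute
    indexed: bool,    // whether an index is maintained
    syntax: Syntax,   // what type of data it may hold
}

#[allow(dead_code)]
enum Syntax {
    Utf8String,
    Uuid,
    Boolean,
}

fn main() {
    let attr = SchemaAttributeSketch {
        name: "acp_search_attr".into(),
        description: "attributes the receiver may view or search".into(),
        multivalue: true,
        unique: false,
        indexed: false,
        syntax: Syntax::Utf8String,
    };
    assert!(attr.multivalue && !attr.unique);
}
```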
@@ -287,7 +287,7 @@ impl SchemaAttribute {
 }
 }

-/// An item reperesenting a class and the rules for that class. These rules enforce that an
+/// An item representing a class and the rules for that class. These rules enforce that an
 /// [`Entry`]'s avas conform to a set of requirements, giving structure to an entry about
 /// what avas must or may exist. The kanidm project provides attributes in `systemmust` and
 /// `systemmay`, which can not be altered. An administrator may extend these in the `must`
@@ -1026,7 +1026,7 @@ impl<'a> SchemaWriteTransaction<'a> {
 name: AttrString::from("acp_receiver_group"),
 uuid: UUID_SCHEMA_ATTR_ACP_RECEIVER_GROUP,
 description: String::from(
-"The group that recieves this access control to allow access",
+"The group that receives this access control to allow access",
 ),
 multivalue: false,
 unique: false,
@@ -1059,7 +1059,7 @@ impl<'a> SchemaWriteTransaction<'a> {
 name: AttrString::from("acp_search_attr"),
 uuid: UUID_SCHEMA_ATTR_ACP_SEARCH_ATTR,
 description: String::from(
-"The attributes that may be viewed or searched by the reciever on targetscope.",
+"The attributes that may be viewed or searched by the receiver on targetscope.",
 ),
 multivalue: true,
 unique: false,
@@ -1558,7 +1558,7 @@ impl<'a> SchemaWriteTransaction<'a> {
 name: AttrString::from("memberof"),
 uuid: UUID_SCHEMA_CLASS_MEMBEROF,
 description: String::from(
-"Class that is dynamically added to recepients of memberof or directmemberof",
+"Class that is dynamically added to recipients of memberof or directmemberof",
 ),
 systemmay: vec![
 AttrString::from("memberof"),
@@ -2448,7 +2448,7 @@ mod tests {
 fn test_schema_filter_validation() {
 let schema_outer = Schema::new().expect("failed to create schema");
 let schema = schema_outer.read();
-// Test non existant attr name
+// Test non existent attr name
 let f_mixed = filter_all!(f_eq("nonClAsS", PartialValue::new_class("attributetype")));
 assert_eq!(
 f_mixed.validate(&schema),
Some files were not shown because too many files have changed in this diff.