* fix typos and misspellings
* use proper capitalization
* Apply suggestions from code review
---------

Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
Alin Trăistaru 2024-07-18 05:22:20 +02:00 committed by GitHub
parent 90002f5db7
commit 562f352516
GPG key ID: B5690EEEBB952194
72 changed files with 102 additions and 102 deletions


@@ -104,7 +104,7 @@ finish our production components and the stability of the API's for longer term
 - Minimum TLS key length enforcement on server code.
 - Improvements to exit code returns on CLI commands.
 - Credential reset link timeout issues resolved.
-- Removed a lot of uses of `unwrap` and `expect` to improve reliabilty.
+- Removed a lot of uses of `unwrap` and `expect` to improve reliability.
 - Account policy framework is now in place.

 ## 2023-05-01 - Kanidm 1.1.0-beta13
@@ -333,7 +333,7 @@ bring the project this far! 🎉 🦀
 - Dynamic menus on CLI for auth factors when choices exist
 - Better handle missing resources for web ui elements at server startup
 - Add WAL checkpointing to improve disk usage
-- Oauth2 user interface flows for simple authorisation scenarioes
+- Oauth2 user interface flows for simple authorisation scenarios
 - Improve entry memory usage based on valueset rewrite
 - Allow online backups to be scheduled and taken
 - Reliability improvements for unixd components with missing sockets


@@ -20,7 +20,7 @@ possible harm that an attacker may make if they gain access to these roles.
 Kanidm supports [privilege access mode](../accounts/authentication_and_credentials.md) so that
 high-level permissions can be assigned to users who must reauthenticate before using those
 privileges. The privileges then are only accessible for a short period of time. This can allow you
-to assign high level permissions to regular persions accounts rather than requiring separete
+to assign high level permissions to regular person accounts rather than requiring separate
 privilege access accounts (PAA) or privileged access workstations (PAW).

 ## Assigning Permissions to Service Accounts


@@ -61,7 +61,7 @@ within a short time window.
 However, these sessions always retain their _read_ privileges - meaning that they can still access
 and view high levels of data at any time without reauthentication.

-In high risk environments you should still consider assigning seperate administration accounts to
+In high risk environments you should still consider assigning separate administration accounts to
 users if this is considered a risk.

 ## Recovering the Initial Admin Accounts


@@ -42,7 +42,7 @@ selected to be the parent (toplevel) domain (`example.com`).
 Failure to use a unique subdomain may allow cookies to leak to other entities within your domain,
 and may allow webauthn to be used on entities you did not intend for which may or may not lead to
-some phishing scenarioes.
+some phishing scenarios.

 ## Examples


@@ -44,7 +44,7 @@ members can write self" meaning that any member of that group can write to thems
 themself.

 In the future we could also create different target/receiver specifiers to allow other extended
-management and delegation scenarioes. This improves the situation making things more flexible from
+management and delegation scenarios. This improves the situation making things more flexible from
 the current filter system. It also may allow filters to be simplified to remove the SELF uuid
 resolve step in some cases.
@@ -58,7 +58,7 @@ allowing us to move from filter based access controls to "group" targeted.
 A risk of filter based groups is "infinite churn" because of recursion. This can occur if you had a
 rule such a "and not memberof = self" on a dynamic group. Because of this, filters on dynamic groups
 may not use "memberof" unless they are internally provided by the kanidm project so that we can vet
-these rules as correct and without creating infinite recursion scenarioes.
+these rules as correct and without creating infinite recursion scenarios.

 ### Access rules extracted to ACI entries on targets


@@ -1,7 +1,7 @@
 Account Policy and Lockouts
 ---------------------------

-For accounts we need to be able to define securite constraints and limits to prevent malicious use
+For accounts we need to be able to define security constraints and limits to prevent malicious use
 or attacks from succeeding. While these attacks may have similar sources or goals, the defences
 to them may vary.
@@ -100,7 +100,7 @@ Hard Lock + Expiry/Active Time Limits
 It must be possible to expire an account so it no longer operates (IE temporary contractor) or
 accounts that can only operate after a known point in time (Student enrollments and their course
-commencment date).
+commencement date).

 This expiry must exist at the account level, but also on issued token/API password levels. This allows revocation of
 individual tokens, but also the expiry of the account and all tokens as a whole. This expiry may be
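An aside from the editor on the expiry model in the hunk above: the rule it implies is that a credential is usable only inside the intersection of its own validity window and the owning account's window, so expiring the account revokes every issued token at once. Here is a minimal sketch of that rule; the `Window` type and field names are invented for illustration and are not Kanidm's implementation.

```rust
use std::time::{Duration, SystemTime};

/// Hypothetical validity window. `valid_from` models a commencement date,
/// `expires_at` models expiry; `None` means unbounded on that side.
struct Window {
    valid_from: Option<SystemTime>,
    expires_at: Option<SystemTime>,
}

impl Window {
    fn contains(&self, now: SystemTime) -> bool {
        self.valid_from.map_or(true, |t| now >= t) && self.expires_at.map_or(true, |t| now < t)
    }
}

/// A token is usable only while both its own window and the account's window
/// are open - expiring the account therefore revokes all tokens as a whole.
fn token_usable(account: &Window, token: &Window, now: SystemTime) -> bool {
    account.contains(now) && token.contains(now)
}

fn main() {
    let now = SystemTime::now();
    let account = Window { valid_from: None, expires_at: Some(now + Duration::from_secs(3600)) };
    let token = Window { valid_from: None, expires_at: None };
    assert!(token_usable(&account, &token, now));
    // The account-level expiry invalidates the still-unexpired token too.
    assert!(!token_usable(&account, &token, now + Duration::from_secs(7200)));
}
```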
@@ -120,7 +120,7 @@ Application Passwords / Issued Oauth Tokens
 ===========================================

 * Relates to claims
-* Need their own expirys
+* Need their own expiries
 * Need ratelimit as above?


@@ -160,7 +160,7 @@ client to be able to construct correct authorisations.
 cookie keys to prevent forgery of writable master cookies)
 - cookies can request tokens, tokens are signed cbor that contains the set of group uuids + names
-derferenced so that a client can make all authorisation decisions from a single datapoint
+dereferenced so that a client can make all authorisation decisions from a single datapoint
 - Groups require the ability to be ephemeral/temporary or permanent.
@@ -252,7 +252,7 @@ what reqwest supports). For more consideration, see, <https://tools.ietf.org/htm
 walking the set of sessions and purging incomplete ones which have passed the time stamp.
 - The session id is in the cookie to eliminate leaking of the session id (secure cookies), and to
 prevent tampering of the session id if possible. It's not perfect, but it helps to prevent casual
-attkcs. The session id itself is really the thing that protects us from replays.
+attacks. The session id itself is really the thing that protects us from replays.

 ## Auth Questions
@@ -421,8 +421,8 @@ sshPublicKey: ... <<-- different due to needing anon read.
 ## Some Dirty Rust Brain Dumps

 - Credentials need per-cred locking
-- This means they have to be in memory and uniquely ided.
-- How can we display to a user that a credential back-off is inplace?
+- This means they have to be in memory and uniquely IDed.
+- How can we display to a user that a credential back-off is in place?
 - UAT need to know what Credential was used and its state.
 - The Credential associates the claims


@@ -20,7 +20,7 @@ It requires more extensive testing and it's hard to follow the code that exists
 correct MFA situations.

 Clients that have already been implemented don't used the stepped model. As the server is sending
-*all* required steps the client responds with all needed credentials to fufil the request. This means
+*all* required steps the client responds with all needed credentials to fulfil the request. This means
 that a large part of the model is not used effectively, and this shows in the client which even today
 doesn't actually check what's requested due to the complexity of callbacks that would require
 to implement.


@@ -29,7 +29,7 @@ do not, the credential update session is rejected.
 If they do have the access, a new credential update session is created, and the user is given a time
 limited token allowing them to interact with the credential update session.

-If the credental update session is abandoned it can not be re-accessed until it is expired.
+If the credential update session is abandoned it can not be re-accessed until it is expired.

 Onboarding/Reset Account Workflow
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


@@ -1,6 +1,6 @@
 # Cryptography Key Domains

-Within Kanidm we have to manage a number of private keys with various cryptograhpic purposes. In the
+Within Kanidm we have to manage a number of private keys with various cryptographic purposes. In the
 current design, we have evolved where for each purposes keys are managed in unique ways. However we
 need to improve this for a number reasons including shared keys for Oauth2 domains and a future
 integration with PKCS11.
@@ -31,7 +31,7 @@ keys but preserve existing api-token signatures. Currently we have no mechanism
 ## Design

-To accomodate future changes, keys will be associated to a Key Provider. Key Objects relate to a
+To accommodate future changes, keys will be associated to a Key Provider. Key Objects relate to a
 single Key Provider. Migration of a Key Object to another Key Provider in the future _may_ be
 possible.
@@ -211,7 +211,7 @@ In the future we may need to migrate keyObjects to be part of their own "securit
 represents a pkcs11 or other key-trust store.

 Key trust stores need to consider that some handlers are single threaded only, so we need to design
-some form of asynchronisity into our handlers so that they can use work queues to the HSM for keys.
+some form of asynchronicity into our handlers so that they can use work queues to the HSM for keys.

 We also need to consider key-wrapping for import of keys to HSM's on disjoint nodes. As well we
 probably need to consider keyObjects that are not always accessible to all nodes so that the


@@ -179,7 +179,7 @@ Device enrollments do not require a password
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 On a mobile device we should NOT require a password to be entered to the account. This is because
-the password rules we attempt to enforce in Kanidm should create passwords that are *not* memorisable
+the password rules we attempt to enforce in Kanidm should create passwords that are *not* memorable
 meaning that the user is likely to store this in a password manager. Since the password manager
 is already on the mobile device, then compromise of the device yields access to the password, nullifying
 its security benefit.


@@ -43,5 +43,5 @@ that all replication as a whole will still be valid. This is good!
 It does mean we need to consider that we have to upgrade data as it comes in from
 replication from an older server too to bring fields up to date if needed. This
-may necesitate a "data version" field on each entry, which we can also associate
+may necessitate a "data version" field on each entry, which we can also associate
 to any CSN so that it can be accepted or rejected as required.


@@ -148,7 +148,7 @@ provide faster searches and avoid indexes that are costly unless they are needed
 In this case, we would _demote_ any filter where Eq(class, ...) to the _end_ of the And, because it
 is highly likely to be less targeted than the other Eq types. Another example would be promotion of
-Eq filters to the front of an And over a Sub term, wherh Sub indexes tend to be larger and have
+Eq filters to the front of an And over a Sub term, where Sub indexes tend to be larger and have
 longer IDLs.

 ## Implementation Details and Notes
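An editorial aside on the optimisation described in this hunk: reordering the terms of an And by estimated index cost is straightforward to sketch. The enum, the cost ranks, and the attribute names below are invented for illustration; they are not Kanidm's real filter types or its actual ranking.

```rust
/// Hypothetical filter terms; the real Kanidm types differ.
#[derive(Debug)]
#[allow(dead_code)]
enum FilterComp {
    Eq(String, String),  // equality - small, targeted IDLs
    Sub(String, String), // substring - indexes tend to be larger, longer IDLs
    Pres(String),        // presence - least selective
}

/// Invented cost ranks: plain Eq first, Sub after it, and Eq(class, ...)
/// demoted because it matches very large numbers of entries.
fn cost_rank(f: &FilterComp) -> u8 {
    match f {
        FilterComp::Eq(attr, _) if attr == "class" => 2,
        FilterComp::Eq(..) => 0,
        FilterComp::Sub(..) => 1,
        FilterComp::Pres(..) => 3,
    }
}

fn optimise_and(terms: &mut [FilterComp]) {
    // A stable sort keeps the original order of equally ranked terms.
    terms.sort_by_key(cost_rank);
}

fn main() {
    let mut terms = vec![
        FilterComp::Eq("class".into(), "person".into()),
        FilterComp::Sub("name".into(), "ali".into()),
        FilterComp::Eq("name".into(), "alice".into()),
    ];
    optimise_and(&mut terms);
    // Eq(name) now leads, Sub(name) follows, and Eq(class) sits at the end.
    println!("{terms:?}");
}
```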
@@ -247,7 +247,7 @@ The major transformation cases for testing are:
 - Add a multivalue (one)
 - Add a multivalue (many)
-- On a mulitvalue, add another value
+- On a multivalue, add another value
 - On multivalue, remove a value, but leave others
 - Delete a multivalue
 - Add a new single value


@@ -97,7 +97,7 @@ But there are some things I want:
 * RADIUS pws are per-domain, not replicated. This would breach the cred-silo idea, and really, if domain B has radius it probably has different
 SSID/ca cert to domain A, so why share the pw? If we did want to really share the credentials, we can have RADIUS act as a client switch
 instead.
-* We can't proxy authentications because of webuathn domain verification, so clients that want to
+* We can't proxy authentications because of webauthn domain verification, so clients that want to
 auth users to either side have to redirect through their origin domain to generate the session. This
 means the origin domain may have to be accessible in some cases.
 * Public-key auth types can be replicated fractionally, which allows the domain to auth a user via


@@ -27,7 +27,7 @@ will be Read Only as a result. The ability to write via LDAP will not be support
 Most LDAP servers offer their schema in a readonly interface. Translating Kanidm's schema into a way
 that clients could interpret is of little value as many clients do not request this, or parse it.

-While Kanidm is entry based, and NoSQL, similar to LDAP, our datamodel is subtely different enough
+While Kanidm is entry based, and NoSQL, similar to LDAP, our datamodel is subtly different enough
 that not all attributes can be representing in LDAP. Some data transformation will need to occur as
 a result to expose data that LDAP clients expect. Not all data may be able to be presented, nor
 should it (ie radius_secret, you should use the kanidm radius integration instead of the LDAP
@@ -107,7 +107,7 @@ accounts would need the attributes to exist to work with these.
 Entry and Attribute Transformations
 ===================================

-Some attributes and items will need transformatio to "make sense" to clients. This includes:
+Some attributes and items will need transformation to "make sense" to clients. This includes:

 member/memberOf: Member in LDAP is a DN, where in Kanidm it's a reference type with SPN. We will need
 to transform this in filters *and* in entries that are sent back.


@@ -94,7 +94,7 @@ modification of our memberOf + our uuid. So translated:
 member: G2 member: U
 memberOf: - memberOf: G1 memberOf: G2, G1

-It's important to note, we only recures on Groups - nothing else. This is what breaks the
+It's important to note, we only recurse on Groups - nothing else. This is what breaks the
 cycle on U, as memberOf is now fully applied.
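A side note on the recursion rule this hunk documents: because membership is only propagated onward through groups, applying memberOf always terminates at plain entries such as U. A minimal sketch of that bounded recursion, with invented ids and maps rather than the real plugin types:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// A group pushes (its own memberOf + its own id) onto each member, and we
/// recurse only into members that are themselves groups.
fn apply_member_of(
    group: &str,
    members: &BTreeMap<&str, Vec<&str>>, // group id -> direct member ids
    groups: &BTreeSet<&str>,             // ids that are groups
    member_of: &mut BTreeMap<String, BTreeSet<String>>,
) {
    let mut grant: BTreeSet<String> = member_of.get(group).cloned().unwrap_or_default();
    grant.insert(group.to_string());

    for &m in members.get(group).into_iter().flatten() {
        let entry = member_of.entry(m.to_string()).or_default();
        let before = entry.len();
        entry.extend(grant.iter().cloned());
        // Recurse only for groups, and only when something changed -
        // the "no change" test is what breaks membership cycles.
        if entry.len() != before && groups.contains(m) {
            apply_member_of(m, members, groups, member_of);
        }
    }
}

fn main() {
    // G1 -> member G2, G2 -> member U, as in the example above.
    let members = BTreeMap::from([("G1", vec!["G2"]), ("G2", vec!["U"])]);
    let groups = BTreeSet::from(["G1", "G2"]);
    let mut member_of = BTreeMap::new();
    apply_member_of("G1", &members, &groups, &mut member_of);
    // U ends with memberOf G2, G1 and, not being a group, stops the walk.
    assert_eq!(
        member_of["U"],
        BTreeSet::from(["G1".to_string(), "G2".to_string()])
    );
}
```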


@@ -86,7 +86,7 @@ application within the authorisation server.
 Each registered resource server will have an associated secret for authentication. The
 most simple for of this is a "basic" authorisation header.

-This resource server entry will nominially list what scopes map to which kanidm roles,
+This resource server entry will nominally list what scopes map to which kanidm roles,
 which scopes are "always" available to all authenticated users. Additionally, it may
 be that we have an extra set of "filter rules" to allow authorisation decisions to be
 made based on other factors like group membership.


@@ -25,7 +25,7 @@ While this works well for the oauth2 authorisation design, it doesn't work well
 for managing _our_ knowledge of who is granted access to the application.

 In order to limit who can see what applications we will need a new method to define who is allowed
-access to the resource server on the kanidm side, while also preserving ouath2 semantics.
+access to the resource server on the Kanidm side, while also preserving OAuth2 semantics.

 To fix this the current definition of scopes on oauth2 resource servers need to change.


@@ -10,7 +10,7 @@ allow rogue users to have a long window of usage of the token before they were f
 also means that in the case that an account must be forcefully terminated then the user would retain
 access to applications for up to 8 hours or more.

-To prevent this, we need oauth2 tokens to "check in" periodically to re-afirm their session
+To prevent this, we need OAuth2 tokens to "check in" periodically to re-affirm their session
 validity.

 This is performed with access tokens and refresh tokens. The access token has a short lifespan
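As an illustration of the check-in pattern this hunk describes: the client uses the access token until its short lifespan ends, then must return to the IDP with the refresh token, giving the server a periodic point to re-affirm or terminate the session. The types and field names below are invented; this is a client-side sketch, not the actual protocol code.

```rust
use std::time::{Duration, Instant};

struct ClientSession {
    access_expires_at: Instant,
    refresh_token: String,
}

enum NextStep {
    UseAccessToken,
    /// Exchange the refresh token - the server may refuse, ending the session.
    CheckIn { refresh_token: String },
}

fn next_step(s: &ClientSession, now: Instant) -> NextStep {
    if now < s.access_expires_at {
        NextStep::UseAccessToken
    } else {
        NextStep::CheckIn { refresh_token: s.refresh_token.clone() }
    }
}

fn main() {
    let session = ClientSession {
        // A short access token lifespan bounds how long a terminated
        // account could keep using an application.
        access_expires_at: Instant::now() + Duration::from_secs(15 * 60),
        refresh_token: "opaque-refresh-token".to_string(),
    };
    match next_step(&session, Instant::now()) {
        NextStep::UseAccessToken => println!("token still fresh"),
        NextStep::CheckIn { refresh_token } => println!("check in with {refresh_token}"),
    }
}
```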


@@ -14,7 +14,7 @@ One area that is lacking however is the ability to provide external password
 material to an account. This is a case where kanidm never sees the plaintext
 password, we are only sent a hash of the material.

-Scenarioes
+Scenarios
 ----------

 * Once off account import - this is where we are migrating from an existing system to kanidm


@@ -109,7 +109,7 @@ There are two possibilities here:
 We replicate the radius credential to the trusted domain, so the user has the same radius password
 in both locations. A question is how the user would auto-add their profile to their devices here
 on the remote site, because they would need to be able to access the configuration. This would
-necesitate the user logging into the "trust" site to get the configuration profile anyway.
+necessitate the user logging into the "trust" site to get the configuration profile anyway.

 * One account - two sites - two passwords


@@ -370,7 +370,7 @@ transmission of the difference in the sets.
 To calculate this, we can use our changelog to construct a table called the replication
 update vector. The RUV is a single servers changelog state, categorised by the originating
-server of the change. A psudeo example of this is:
+server of the change. A pseudo example of this is:

 ::
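The pseudo example referenced before the `::` falls outside this hunk. As a hedged illustration of the concept only (server ids and change numbers invented here), a RUV can be pictured as one changelog range per originating server:

```rust
use std::collections::BTreeMap;

/// Hypothetical shape of a replication update vector: for each originating
/// server, the range of change ids this node holds from it.
struct CidRange {
    min: u64,
    max: u64,
}

fn main() {
    let ruv: BTreeMap<&str, CidRange> = BTreeMap::from([
        ("server-a", CidRange { min: 4, max: 10 }),
        ("server-b", CidRange { min: 1, max: 6 }),
    ]);
    // Comparing two RUVs server-by-server yields exactly the set of changes
    // that still needs to be transmitted.
    for (server, range) in &ruv {
        println!("{server}: {}..={}", range.min, range.max);
    }
}
```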
@@ -486,7 +486,7 @@ TODO: Must origUUID,
 Object Level Conflict Handling
 ===============================

-With the constructs defined, we have enough in place to be able to handle various scenarioes.
+With the constructs defined, we have enough in place to be able to handle various scenarios.

 For the purposes of these discussions we will present two servers with a series of changes
 over time.


@@ -140,7 +140,7 @@ server to be "authoritative".
 The KRC is enabled as a replication parameter. This informs the node that it must not contact other
 nodes for its replication topology, and it prepares the node for serving that replication metadata.
-This is analgous to a single node operation configuration.
+This is analogous to a single node operation configuration.

 ```
 [replication]
@@ -201,11 +201,11 @@ server entries will increment the generation counter. This allows us to detect w
 requires a new configuration or not.

 If a server's entry in the database is marked to be `Revoked` then it will remain in the database,
-but be inelligible for replication participation. This is to allow for forced removal of a
+but be ineligible for replication participation. This is to allow for forced removal of a
 potentially compromised node.

 The KRC will periodically examine its RUV. For any server entry whose UUID is not contained in the
-RUV, and whose "time first seen + trime window" is less than now, then the server entry will be
+RUV, and whose "time first seen + time window" is less than now, then the server entry will be
 REMOVED for inactivity since it has now been trimmed from the RUV.

 ### Moving the Replication Coordinator Role
@@ -241,7 +241,7 @@ Imagine the following example. Here, Node A is acting as the KRC.
 This would allow Node A to be aware of B, C, D and then create a full mesh.

-We wish to decommision Node A and promote Node B to become the new KRC. Imagine at this point we cut
+We wish to decommission Node A and promote Node B to become the new KRC. Imagine at this point we cut
 over Node D to point its KRC at Node B.

 ```


@@ -46,7 +46,7 @@ Some types of queries can take more time than others. For example:
 * Indexed searches that have a large number of results.
 * Write operations that affect many entries.

-These necesitate the following limits:
+These necessitate the following limits:

 * No unindexed searches allowed
 * Prevent searching on terms that do not exist (done, filter schema validation)


@@ -24,7 +24,7 @@ There are three expected methods of using the synchronisation tools for Kanidm
 is performed where Kanidm 'gains authority' over all identity data and the existing IDM is
 disabled.

-In these processes there may be a need to "reset" the synchronsied data. The diagram below shows the
+In these processes there may be a need to "reset" the synchronised data. The diagram below shows the
 possible work flows which account for the above.

 ┏━━━━━━━━━━━━━━━━━┓
@@ -79,7 +79,7 @@ To achieve this, we initially provide synchronisation primitives in the
 ### Transform

 This process will be custom developed by the user, or may have a generic driver that we provide. Our
-generic tools may provide attribute mapping abilitys so that we can allow some limited
+generic tools may provide attribute mapping abilities so that we can allow some limited
 customisation.

 ### Load


@@ -5,7 +5,7 @@ To ensure that certain actions are only performed after re-authentication, we sh
 a sudo mode to kanidm. This relies on some changes from Oauth.rst (namely interactive session
 identification).

-Only interactive sessions (IE not api passwords or radius) must be elligble for sudo mode.
+Only interactive sessions (IE not api passwords or radius) must be eligible for sudo mode.

 Sudo mode when requested will perform a partial-reauthentication of the account using a single
 factor (if mfa). This is determined based on the credential uuid of the associated session.


@@ -102,7 +102,7 @@ bounded and if the queue is not being serviced, it proceeds with the login/proce
 as we must assume the user has *not* configured the tasks daemon on the system. This queue
 also prevents memory growth/ddos if we are overloaded by login requests.

-In packaging the tasks daemon will use systemds isolation features to further harden this. For
+In packaging the tasks daemon will use systemd's isolation features to further harden this. For
 example:

 ::


@@ -49,9 +49,9 @@ accessible - otherwise people will not run the tests, leading to poor quality.
 The project must be simple. Any one should be able to understand how it works and why those
 decisions were made.

-### Hierachy of Controls
+### Hierarchy of Controls

-When a possible risk arises we should always consider the [hierachy of controls]. In descedending
+When a possible risk arises we should always consider the [hierarchy of controls]. In descending
 order of priority

 - Elimination - eliminate the risk from existing
@@ -60,7 +60,7 @@ order of priority
 - Administrative Controls - educate about the risk, add warnings
 - Personal Protection - document the risk

-[hierachy of controls]: https://en.wikipedia.org/wiki/Hierarchy_of_hazard_controls
+[hierarchy of controls]: https://en.wikipedia.org/wiki/Hierarchy_of_hazard_controls

 ### Languages


@@ -222,7 +222,7 @@ claim name. Different applications may expect these values to be formatted (join
 ways.

 Claim values are mapped based on membership to groups. When an account is a member of multiple
-groups that would recieve the same claim, the values of these maps are merged.
+groups that would receive the same claim, the values of these maps are merged.

 To create or update a claim map on a client:
@@ -327,7 +327,7 @@ To disable PKCE for a confidential client:
 kanidm system oauth2 warning-insecure-client-disable-pkce <client name>
 ```

-To enable legacy cryptograhy (RSA PKCS1-5 SHA256):
+To enable legacy cryptography (RSA PKCS1-5 SHA256):

 ```bash
 kanidm system oauth2 warning-enable-legacy-crypto <client name>


@@ -125,7 +125,7 @@ override_homedir = /home/%U
 ignore_group_members = True

 # Disable caching of credentials by SSSD. SSSD uses less secure local password storage
-# mechanisims, and is a risk for credential disclosure.
+# mechanisms, and is a risk for credential disclosure.
 #
 # ⚠️ NEVER CHANGE THIS VALUE ⚠️
 cache_credentials = False


@@ -154,7 +154,7 @@ docker run --rm -i -t -u 1000:1000 -v kanidmd:/data kanidm/server:latest /sbin/k
 ## Minimum TLS key lengths

-We enforce a minimum RSA and ECDSA key sizes. If your key is insufficently large, the server will
+We enforce minimum RSA and ECDSA key sizes. If your key is insufficiently large, the server will
 refuse to start and inform you of this.

 Currently accepted key sizes are minimum 2048 bit RSA and 224 bit ECDSA.


@@ -90,7 +90,7 @@ text=However you choose to run your server, you should document and keep note of
 ### Default Admin Accounts

 Now that the server is running, you can initialise the default admin accounts. There are two
-parallel admin accounts that have seperate functions. `admin` which manages Kanidm's configuration,
+parallel admin accounts that have separate functions. `admin` which manages Kanidm's configuration,
 and `idm_admin` which manages accounts and groups in Kanidm.

 You should consider these as "break-glass" accounts. They exist to allow the server to be


@@ -10,7 +10,7 @@ missing or if you have a question, please
 The version of this document found
 [on the project page](https://github.com/kanidm/kanidm/blob/master/book/src/support.md) is
-considered authoritive and applies to all versions.
+considered authoritative and applies to all versions.

 ## Release Schedule and Versioning
@@ -87,7 +87,7 @@ before the servers release.
 ### API stability

 Kanidm has a number of APIs with different stability guarantees. APIs that are stable will only
-recieve breaking changes in the case of an ethics, security or potential data corruption issue.
+receive breaking changes in the case of an ethics, security or potential data corruption issue.

 Stable APIs are:
@@ -123,7 +123,7 @@ All code changes will include full type-casting wherever possible.
 ### Project Discretion

-In the event of an unforseen or extraordinary situation, the project team may make decisions
+In the event of an unforeseen or extraordinary situation, the project team may make decisions
 contradictory to this document at their discretion. In these situation, the project team will make
 every effort to communicate the reason for the decision and will attempt to minimise disruption to
 users.


@@ -94,7 +94,7 @@ By default Kanidm assumes that authority over synchronised entries is retained b
 This means that synchronised entries can not be written to in any capacity outside of a small number
 of internal Kanidm internal attributes.

-An adminisrator may wish to allow synchronised entries to have some attributes written by the
+An administrator may wish to allow synchronised entries to have some attributes written by the
 instance locally. An example is allowing passkeys to be created on Kanidm when the external
 synchronisation provider does not supply them.


@@ -26,7 +26,7 @@ ipa_sync_pw = "directory manager password"
 # The basedn to examine.
 ipa_sync_base_dn = "dc=ipa,dc=dev,dc=kanidm,dc=com"

-# By default Kanidm seperates the primary account password and credentials from
+# By default Kanidm separates the primary account password and credentials from
 # the unix credential. This allows the unix password to be isolated from the
 # account password so that compromise of one doesn't compromise the other. However
 # this can be surprising for new users during a migration. This boolean allows the


@@ -32,7 +32,7 @@ ldap_sync_base_dn = "dc=ldap,dc=dev,dc=kanidm,dc=com"
 ldap_filter = "(|(objectclass=person)(objectclass=posixgroup))"
 # ldap_filter = "(cn=\"my value\")"

-# By default Kanidm seperates the primary account password and credentials from
+# By default Kanidm separates the primary account password and credentials from
 # the unix credential. This allows the unix password to be isolated from the
 # account password so that compromise of one doesn't compromise the other. However
 # this can be surprising for new users during a migration. This boolean allows the


@@ -1,7 +1,7 @@
 # this example configures kanidm-unixd for testing on macos
 db_path = "/tmp/kanidm-unixd"
-sock_path = "/tmp/kanimd_unixd.sock"
-task_sock_path = "/tmp/kanimd_unidx_task.sock"
+sock_path = "/tmp/kanidm_unixd.sock"
+task_sock_path = "/tmp/kanidm_unixd_task.sock"
 # some documentation is here: https://github.com/kanidm/kanidm/blob/master/book/src/pam_and_nsswitch.md
 pam_allowed_login_groups = ["posix_group"]
 # default_shell = "/bin/sh"


@@ -304,7 +304,7 @@ impl CryptoPolicy {
 //
 // We also need to balance this against the fact we are a database, and we do have
 // caches. We also don't want to over-use RAM, especially because in the worst case
-// every thread will be operationg in argon2id at the same time. That means
+// every thread will be operating in argon2id at the same time. That means
 // thread x ram will be used. If we had 8 threads at 64mb of ram, that would require
 // 512mb of ram alone just for hashing. This becomes worse as core counts scale, with
 // 24 core xeons easily reaching 1.5GB in these cases.
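The arithmetic in this comment is worth keeping in mind when tuning thread counts; a throwaway verification (the thread counts and 64 MiB memory cost are the figures quoted in the comment above, not measured values):

```rust
fn main() {
    // Worst case from the comment above: every thread hashing at once.
    let m_cost_mib = 64u64; // argon2id memory cost per in-flight hash
    for threads in [8u64, 24] {
        let total_mib = threads * m_cost_mib;
        println!("{threads} threads x {m_cost_mib} MiB = {total_mib} MiB");
    }
    // 8 threads  -> 512 MiB, the 512mb named in the comment.
    // 24 threads -> 1536 MiB, i.e. the ~1.5GB figure for 24 core xeons.
}
```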


@@ -67,7 +67,7 @@ if [ "$(which cargo | wc -l)" -eq 0 ]; then
 fi

 # this assumes the versions are in lock-step, which is fine at the moment.
-# Debian is picky abour dashes in version strings, so a bit of conversion
+# Debian is picky about dashes in version strings, so a bit of conversion
 # is needed for the first one to prevent interference.
 KANIDM_VERSION="$(grep -ioE 'version.*' Cargo.toml | head -n1 | awk '{print $NF}' | tr -d '"' | sed -e 's/-/~/')"


@@ -7,7 +7,7 @@ After=chronyd.service nscd.service ntpd.service network-online.target suspend.ta
 Before=systemd-user-sessions.service sshd.service nss-user-lookup.target
 Wants=nss-user-lookup.target
 # While it seems confusing, we need to be after nscd.service so that the
-# Conflicts will triger and then automatically stop it.
+# Conflicts will trigger and then automatically stop it.
 Conflicts=nscd.service

 [Service]


@@ -290,7 +290,7 @@ pub enum PkceAlg {
 #[serde(rename_all = "UPPERCASE")]
 /// Algorithms supported for token signatures. Prefers `ES256`
 pub enum IdTokenSignAlg {
-    // WE REFUSE TO SUPPORT NONE. DONT EVEN ASK. IT WON'T HAPPEN.
+    // WE REFUSE TO SUPPORT NONE. DON'T EVEN ASK. IT WON'T HAPPEN.
     ES256,
     RS256,
 }


@@ -41,7 +41,7 @@ impl Display for AccountType {
 // entry/ava/filter types. These related deeply to schema.

 /// The current purpose of a User Auth Token. It may be read-only, read-write
-/// or privilige capable (able to step up to read-write after re-authentication).
+/// or privilege capable (able to step up to read-write after re-authentication).
 #[derive(Debug, Serialize, Deserialize, Clone, ToSchema)]
 #[serde(rename_all = "lowercase")]
 pub enum UatPurposeStatus {


@@ -128,7 +128,7 @@ impl QueryServerWriteV1 {
 .await;

 if retry {
-// An error occured, retry each operation one at a time.
+// An error occurred, retry each operation one at a time.
 for da in da_batch.iter() {
 let eventid = Uuid::new_v4();
 let span = span!(Level::INFO, "process_delayed_action_retried", uuid = ?eventid);


@@ -462,7 +462,7 @@ async fn server_loop(
 info!("Loading client certificates from {}", client_ca.display());

 let verify = SslVerifyMode::PEER;
-// In future we may add a "require mTLS option" which would necesitate this.
+// In future we may add a "require mTLS option" which would necessitate this.
 // verify.insert(SslVerifyMode::FAIL_IF_NO_PEER_CERT);
 tls_builder.set_verify(verify);
@@ -494,7 +494,7 @@ async fn server_loop(
 item.file_name()
 .to_str()
 // Hashed certs end in .0
-// Hsahed crls are .r0
+// Hashed crls are .r0
 .map(|fname| fname.ends_with(".0"))
 .unwrap_or_default()
 }) {


@@ -635,7 +635,7 @@ async fn view_login_step(
 match issue {
 AuthIssueSession::Token => {
 error!(
-"Impossible state, should not recieve token in a htmx view auth flow"
+"Impossible state, should not receive token in a htmx view auth flow"
 );
 return Err(OperationError::InvalidState);
 }


@@ -2972,10 +2972,10 @@ impl<VALID, STATE> Entry<VALID, STATE> {
 .unwrap_or(false)
 }

-// Since EntryValid/Invalid is just about class adherenece, not Value correctness, we
+// Since EntryValid/Invalid is just about class adherence, not Value correctness, we
 // can now apply filters to invalid entries - why? Because even if they aren't class
 // valid, we still have strict typing checks between the filter -> entry to guarantee
-// they should be functional. We'll never match something that isn't syntactially valid.
+// they should be functional. We'll never match something that isn't syntactically valid.
 #[inline(always)]
 #[instrument(level = "trace", name = "entry::entry_match_no_index", skip(self))]
 /// Test if the following filter applies to and matches this entry.


@@ -376,7 +376,7 @@ pub enum FilterPlan {
 ///
 /// This `Filter` validation state is in the `STATE` attribute and will be either `FilterInvalid`
 /// or `FilterValid`. The `Filter` must be checked by the schema to move to `FilterValid`. This
-/// helps to prevent errors at compile time to assert `Filters` are secuerly. checked
+/// helps to prevent errors at compile time to assert `Filters` are securely checked
 ///
 /// [`Entry`]: ../entry/struct.Entry.html
 #[derive(Clone, Hash, Ord, Eq, PartialOrd, PartialEq)]
@@ -634,7 +634,7 @@ impl Filter<FilterInvalid> {
 //
 // YOLO.
 // tl;dr - blindly accept that this filter and it's ava's MUST have
-// been normalised and exist in schema. If they don't things may subtely
+// been normalised and exist in schema. If they don't things may subtly
 // break, fail, or explode. As subtle as an explosion can be.
 Filter {
 state: FilterValid {


@@ -520,7 +520,7 @@ macro_rules! vs_utf8 {
 #[allow(unused_macros)]
 #[macro_export]
-/// Takes EntryClass objects and makes a VaueSetIutf8
+/// Takes EntryClass objects and makes a ValueSetIutf8
 macro_rules! vs_iutf8 {
 () => (
 compile_error!("ValueSetIutf8 needs at least 1 element")


@@ -41,7 +41,7 @@ impl ReferentialIntegrity {
 // F_inc(lusion). All items of inner must be 1 or more, or the filter
 // will fail. This will return the union of the inclusion after the
-// operationn.
+// operation.
 let filt_in = filter!(f_inc(inner));
 let b = qs.internal_exists(filt_in).map_err(|e| {
 admin_error!(err = ?e, "internal exists failure");
@@ -156,7 +156,7 @@ impl Plugin for ReferentialIntegrity {
 // Yes, this does mean we do more work to add/index/rollback in an error
 // condition, *but* it means we only have developed a single verification
 // so we can assert stronger trust in it's correct operation and interaction
-// in complex scenarioes - It actually simplifies the check from "could
+// in complex scenarios - It actually simplifies the check from "could
 // be in cand AND db" to simply "is it in the DB?".
 #[instrument(level = "debug", name = "refint_post_create", skip(qs, cand, _ce))]
 fn post_create(


@@ -185,7 +185,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
 //
 let (cand, pre_cand): (Vec<_>, Vec<_>) = all_updates_valid
 .into_iter()
-// We previously excluded this to avoid doing unnecesary work on entries that
+// We previously excluded this to avoid doing unnecessary work on entries that
 // were moving to a conflict state, and the survivor was staying "as is" on this
 // node. However, this gets messy with dyngroups and memberof, where on a conflict
 // the memberships are deleted across the replication boundary. In these cases
@@ -418,7 +418,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
 // Reload the domain version, doing any needed migrations.
 //
-// While it seems odd that we do the migrations after we recieve the entries,
+// While it seems odd that we do the migrations after we receive the entries,
 // this is because the supplier will already be sending us everything that
 // was just migrated. As a result, we only need to apply the migrations to entries
 // that were not on the supplier, and therefore need updates here.


@@ -617,7 +617,7 @@ impl<'a> ReplicationUpdateVectorWriteTransaction<'a> {
 // Since the ctx range comes from the supplier, when we rebuild due to the
 // state machine then some values may not exist since they were replaced
 // or updated. It's also possible that the imported range maximums *may not*
-// exist especially in three way replication scenarioes where S1:A was the S1
+// exist especially in three way replication scenarios where S1:A was the S1
 // maximum but is replaced by S2:B. This would make S1:A still it's valid
 // maximum but no entry reflects that in it's change state.
 let mut valid = true;
@@ -874,7 +874,7 @@ impl<'a> ReplicationUpdateVectorWriteTransaction<'a> {
 to allow the comparison here to continue even if it's ruv is cleaned. Or, we need to
 have a delayed trim on the range that is 2x the normal trim range to give a buffer?
-Mostly longer ruv/cid ranges aren't an issue for us, so could we just maek these ranges
+Mostly longer ruv/cid ranges aren't an issue for us, so could we just make these ranges
 really large?

 NOTE: For now we do NOT trim out max CID's of any s_uuid so that we don't have to confront


@@ -64,7 +64,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
 err
 })?;

-// Can you process the keyhande?
+// Can you process the keyhandle?
 let key_cert = match maybe_key_handle {
 Some(KeyHandle::X509Key { private, x509 }) => (private, x509),
 /*
@@ -172,7 +172,7 @@ impl<'a> QueryServerReadTransaction<'a> {
 return Ok(ReplIncrementalContext::UnwillingToSupply);
 }
 RangeDiffStatus::NoRUVOverlap => {
-error!("Replication Critical - Consumers RUV has desynchronsied and diverged! This must be immediately investigated!");
+error!("Replication Critical - Consumers RUV has desynchronised and diverged! This must be immediately investigated!");
 debug!(consumer_ranges = ?ctx_ranges);
 debug!(supplier_ranges = ?our_ranges);
 return Ok(ReplIncrementalContext::UnwillingToSupply);


@@ -98,7 +98,7 @@ fn repl_incremental(
 trace!(?b_ruv_range);

 // May need to be "is subset" for future when we are testing
-// some more complex scenarioes.
+// some more complex scenarios.
 let valid = match ReplicationUpdateVector::range_diff(&a_ruv_range, &b_ruv_range) {
 RangeDiffStatus::Ok(require) => require.is_empty(),
 _ => false,


@@ -2585,7 +2585,7 @@ mod tests {
 ..Default::default()
 };

-// Since valueset now disallows such shenangians at a type level, this can't occur
+// Since valueset now disallows such shenanigans at a type level, this can't occur
 /*
 let rvs = unsafe {
 valueset![


@@ -107,7 +107,7 @@ fn delete_filter_entry<'a>(
 return false;
 }
 } else {
-// Can not satsify.
+// Can not satisfy.
 return false;
 }
 }


@@ -2673,7 +2673,7 @@ mod tests {
 }

 #[test]
-fn test_access_ouath2_dyn_search() {
+fn test_access_oauth2_dyn_search() {
 sketching::test_init();
 // Test that an account that is granted a scope to an oauth2 rs is granted
 // the ability to search that rs.


@@ -100,7 +100,7 @@ pub(super) fn apply_modify_access<'a>(
 return None;
 }
 } else {
-// Can not satsify.
+// Can not satisfy.
 return None;
 }
 }


@@ -137,7 +137,7 @@ fn search_filter_entry<'a>(
 return None
 }
 } else {
-// Can not satsify.
+// Can not satisfy.
 return None
 }
 }
@@ -240,7 +240,7 @@ fn search_sync_account_filter_entry<'a>(
 if sync_source_match {
 // We finally got here!
-security_debug!(entry = ?entry.get_uuid(), ident = ?iuser.entry.get_uuid2rdn(), "ident is a synchronsied account from this sync account");
+security_debug!(entry = ?entry.get_uuid(), ident = ?iuser.entry.get_uuid2rdn(), "ident is a synchronised account from this sync account");
 return AccessResult::Allow(btreeset!(
 Attribute::Class.as_ref(),


@@ -1165,7 +1165,7 @@ mod tests {
 // Scope to limit the key object
 }
-// Will fail to be signed with the former key, since it is now revoked, and the ct preceeds
+// Will fail to be signed with the former key, since it is now revoked, and the ct precedes
 // the validity of the new key
 {
 let key_object_loaded = write_txn
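As a hedged illustration of the rule this test exercises: material produced at time `ct` is only acceptable against a key that was already valid at `ct` and has not been revoked. The types below are assumptions for the sketch, not the key provider's real interface.

```rust
/// Illustrative key state (not the real key object schema).
struct KeyState {
    valid_from: u64, // timestamp from which this key may be used
    revoked: bool,
}

/// A signature at time `ct` requires a key that is not revoked and whose
/// validity already covers `ct`. A freshly rotated-in key whose validity
/// starts after `ct` must therefore reject the older material, exactly the
/// failure the test above expects.
fn key_valid_at(key: &KeyState, ct: u64) -> bool {
    !key.revoked && key.valid_from <= ct
}
```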


@@ -324,7 +324,7 @@ mod tests {
 .internal_apply_domain_migration(DOMAIN_LEVEL_6)
 .expect("Unable to set domain level to version 6");
-// The internel key provider is created from dl 5 to 6
+// The internal key provider is created from dl 5 to 6
 let key_provider_object = write_txn
 .internal_search_uuid(UUID_KEY_PROVIDER_INTERNAL)
 .expect("Unable to find key provider entry.");


@@ -34,7 +34,7 @@ impl QueryServer {
 // Remember, that this would normally mean that it's possible for schema
 // to be mis-indexed (IE we index the new schemas here before we read
 // the schema to tell us what's indexed), but because we have the in
-// mem schema that defines how schema is structuded, and this is all
+// mem schema that defines how schema is structured, and this is all
 // marked "system", then we won't have an issue here.
 write_txn
 .initialise_schema_core()


@@ -1338,7 +1338,7 @@ impl QueryServer {
 }
 pub async fn read(&self) -> QueryServerReadTransaction<'_> {
-// Get a read ticket. Basicly this forces us to queue with other readers, while preventing
+// Get a read ticket. Basically this forces us to queue with other readers, while preventing
 // us from competing with writers on the db tickets. This tilts us to write prioritising
 // on db operations by always making sure a writer can get a db ticket.
 let read_ticket = if cfg!(test) {
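A minimal sketch of the two-stage ticketing the comment describes, using tokio semaphores. The structure and pool sizes are assumptions for illustration; the real transaction plumbing differs.

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

struct Server {
    // Readers queue here first. Sizing this pool smaller than the db pool
    // guarantees at least one db ticket is always reachable by a writer.
    read_tickets: Arc<Semaphore>,
    db_tickets: Arc<Semaphore>,
}

impl Server {
    async fn read(&self) {
        // Queue with the other readers ...
        let _read = self.read_tickets.acquire().await.expect("semaphore closed");
        // ... then take a db ticket. Because readers are capped above,
        // they never exhaust the db tickets, so writers are prioritised.
        let _db = self.db_tickets.acquire().await.expect("semaphore closed");
        // ... perform the read while both permits are held ...
    }
}
```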


@@ -676,7 +676,7 @@ mod tests {
 &["22b47373-d123-421f-859e-9ddd8ab14a2a"],
 );
-// Need a user in A -> B -> User, such that A/B are re-adde as MO
+// Need a user in A -> B -> User, such that A/B are re-added as MO
 let u2 = create_user("u2", "5c19a4a2-b9f0-4429-b130-5782de5fddda");
 let g2a = create_group(
 "g2a",


@@ -2123,7 +2123,7 @@ impl Value {
 if UNICODE_CONTROL_RE.is_match(s) {
 error!("value contains invalid unicode control character",);
 // Trace only, could be an injection attack of some kind.
-trace!(?s, "Invalid Uncode Control");
+trace!(?s, "Invalid Unicode Control");
 false
 } else {
 true
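The guard above relies on a prebuilt regex; a dependency-free sketch of the same idea follows. `char::is_control` is the standard-library check for control code points; whether it matches Kanidm's exact character class is an assumption.

```rust
/// Reject values containing Unicode control code points, which could be
/// used for log injection or terminal escape sequence attacks.
fn contains_unicode_control(s: &str) -> bool {
    s.chars().any(char::is_control)
}

fn main() {
    assert!(contains_unicode_control("evil\u{0007}payload")); // embedded BEL
    assert!(!contains_unicode_control("plain text"));
}
```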


@@ -115,7 +115,7 @@ fn test_jpg_has_trailer() {
 .expect("Failed to read file");
 assert!(!has_trailer(&file_contents).expect("Failed to check for JPEG trailer"));
-// checking a known bad imagee
+// checking a known bad image
 let file_contents = std::fs::read(format!(
 "{}/src/valueset/image/test_images/windows11_3_cropped.jpg",
 env!("CARGO_MANIFEST_DIR")


@@ -5,7 +5,7 @@ static PNG_CHUNK_END: &[u8; 4] = b"IEND";
 #[derive(Debug)]
 /// This is used as part of PNG validation to identify if we've seen the end of the file, and if it suffers from
-/// Acropalypyse issues by having trailing data.
+/// Acropalypse issues by having trailing data.
 enum PngChunkStatus {
 SeenEnd { has_trailer: bool },
 MoreChunks,
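To make the enum's purpose concrete, here is a hedged sketch of the chunk walk it supports: scan PNG chunks until `IEND`, then report whether any bytes trail it (the Acropalypse condition). This is an illustrative reimplementation, not Kanidm's actual parser.

```rust
/// Walk PNG chunks after the 8-byte signature. Each chunk is laid out as:
/// 4-byte big-endian length, 4-byte type, `length` bytes of data, 4-byte CRC.
/// Returns Some(true) if data trails the IEND chunk, None if malformed.
fn png_has_trailer(data: &[u8]) -> Option<bool> {
    let mut off = 8; // skip the PNG signature
    while off + 8 <= data.len() {
        let len = u32::from_be_bytes(data[off..off + 4].try_into().ok()?) as usize;
        let chunk_type = &data[off + 4..off + 8];
        let next = off + 8 + len + 4; // header + data + crc
        if next > data.len() {
            return None; // truncated chunk
        }
        if chunk_type == b"IEND" {
            return Some(next < data.len()); // trailing bytes after IEND
        }
        off = next;
    }
    None // never saw IEND
}
```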


@@ -282,7 +282,7 @@ impl ValueSetT for ValueSetSession {
 // is replication safe since other replicas will also be performing
 // the same operation on merge, since we trim by session issuance order.
-// This is a "slow path". This is becase we optimise session storage
+// This is a "slow path". This is because we optimise session storage
 // based on fast session lookup, so now we need to actually create an
 // index based on time. We need to also clone here since we need to mutate
 // self.map which would violate mut/imut.
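A hedged sketch of that slow path: the primary map is keyed for fast session lookup, so trimming by age means first building a temporary time-ordered index. The map shape and types below are assumptions for illustration, not the `ValueSetSession` internals.

```rust
use std::collections::BTreeMap;

/// Keep only the `keep` most recently issued sessions. `sessions` maps a
/// session id to its issue timestamp; ordering on (issued_at, id) is
/// deterministic, which is what makes the trim replication safe: every
/// replica removes the same entries on merge.
fn trim_sessions(sessions: &mut BTreeMap<u128, u64>, keep: usize) {
    // Slow path: build the time-ordered index the fast-lookup map lacks.
    let mut by_age: Vec<(u64, u128)> =
        sessions.iter().map(|(id, issued)| (*issued, *id)).collect();
    by_age.sort_unstable();
    let excess = by_age.len().saturating_sub(keep);
    for (_, id) in by_age.into_iter().take(excess) {
        sessions.remove(&id);
    }
}
```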


@@ -124,7 +124,7 @@ async fn test_webdriver_user_login(rsclient: kanidm_client::KanidmClient) {
 let username_form = handle_error!(
 c,
 c.form(Locator::Id("login")).await,
-"Coudln't find login form"
+"Couldn't find login form"
 );
 handle_error!(
 c,


@@ -388,7 +388,7 @@ pub struct AccountNamedTagPkOpt {
 }
 #[derive(Debug, Args)]
-/// Command-line options for account credental use-reset-token
+/// Command-line options for account credential use-reset-token
 pub struct UseResetTokenOpt {
 #[clap(flatten)]
 copt: CommonOpt,


@@ -1001,7 +1001,7 @@ fn ipa_to_scim_entry(
 .into(),
 ))
 } else if oc.contains("ipatokentotp") {
-// Skip for now, we don't supporty multiple totp yet.
+// Skip for now, we don't support multiple totp yet.
 Ok(None)
 } else {
 debug!("Skipping entry {} with oc {:?}", dn, oc);


@@ -44,7 +44,7 @@ the parameters of the test you wish to perform.
 A statefile is the fully generated state of all entries that will be created and then used in the
 load test. The state file can be recreated from a profile and it's seed at anytime. The reason to
-seperate these is that state files may get quite large, when what you really just need is the
+separate these is that state files may get quite large, when what you really just need is the
 ability to recreate them when needed.
 This state file also contains all the details about accounts and entries so that during test
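The profile-plus-seed design means a statefile is a pure function of its inputs, so it can always be regenerated rather than stored. A toy illustration of that property with the `rand` crate (the real generator and entry types are different):

```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

/// Deterministically expand a seed into "entries". The same seed always
/// yields the same state, so only the small profile needs to be kept,
/// not the large statefile it expands into.
fn generate_state(seed: u64, count: usize) -> Vec<u64> {
    let mut rng = StdRng::seed_from_u64(seed);
    (0..count).map(|_| rng.gen()).collect()
}

fn main() {
    assert_eq!(generate_state(42, 10), generate_state(42, 10));
}
```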


@@ -40,7 +40,7 @@ async fn preflight_person(
 }
 }
-// For each role we are part of, did we have other permissions required to fufil that?
+// For each role we are part of, did we have other permissions required to fulfil that?
 for role in &person.roles {
 if let Some(need_groups) = role.requires_membership_to() {
 for group_name in need_groups {