* Resolve kanidm-unix auth-test bug
When reworking the unix daemon, we missed updating the auth-test
tool to handle the new challenge-response flow correctly, which
caused the session to disconnect.
* Cleanup
Improve error message when passkey is missing PIN
Firefox still doesn't support setting a PIN on new devices. Because
of this, we need to return a better error message for devices
that don't have user verification (UV) configured.
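A minimal sketch of the shape of this change, using hypothetical names (the real error type and messages in the webauthn handling code differ):

```rust
// Hypothetical error type for illustration only.
#[allow(dead_code)]
#[derive(Debug)]
enum PasskeyRegistrationError {
    // Surfaced when the authenticator has no PIN/biometric configured,
    // so user verification (UV) cannot be performed.
    UserVerificationNotConfigured,
    Other(String),
}

fn check_user_verification(uv_configured: bool) -> Result<(), PasskeyRegistrationError> {
    if uv_configured {
        Ok(())
    } else {
        // Previously this surfaced as a generic failure; a dedicated
        // variant lets the caller print actionable advice instead.
        Err(PasskeyRegistrationError::UserVerificationNotConfigured)
    }
}

fn main() {
    match check_user_verification(false) {
        Err(PasskeyRegistrationError::UserVerificationNotConfigured) => {
            eprintln!("This device has no PIN set, so it can't perform user verification. Set a PIN and try again.");
        }
        _ => println!("ok"),
    }
}
```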
This updates to the latest webauthn-rs release. While
updating, an issue with the time crate was found that changes
the behaviour of its RFC 3339 parser. This also
updates our tests to accommodate that change.
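For context, a minimal example of the parser in question - this shows the time crate's RFC 3339 API, not the specific behavioural change, which affected which edge-case inputs the tests could assert on:

```rust
use time::format_description::well_known::Rfc3339;
use time::OffsetDateTime;

fn main() {
    // `time`'s well-known RFC 3339 parser - the component whose behaviour
    // shifted between releases, requiring our test expectations to change.
    let ts = OffsetDateTime::parse("2024-01-01T00:00:00Z", &Rfc3339)
        .expect("valid RFC 3339 timestamp");
    println!("{ts}");
}
```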
Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
instead of passing a giant blob of JSON as a command argument.
Before, it was not possible to allow all valid authenticators
certified by the FIDO Alliance, because
fido-mds-list query -o "status gte valid"
outputs a JSON string longer than Linux allows for command-line
arguments.
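A minimal sketch of the workaround pattern, assuming the tool now reads the JSON from a file path rather than from argv (the real interface may differ):

```rust
use std::env;
use std::fs;

fn main() {
    // Passing a path keeps argv tiny: the multi-megabyte JSON produced by
    // `fido-mds-list query -o "status gte valid"` is read from disk rather
    // than squeezed through the kernel's argument-size limit (ARG_MAX).
    let path = env::args().nth(1).expect("usage: tool <mds.json>");
    let raw = fs::read_to_string(&path).expect("failed to read JSON file");
    println!("loaded {} bytes of MDS JSON", raw.len());
}
```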
Co-authored-by: Firstyear <william@blackhats.net.au>
So that we can start to add some more easter eggs to the server,
we also need to respect the preferences of users who may not want
them. This adds a domain configuration setting that allows a
release build to opt in to easter eggs, and a development build to
opt out of them.
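A minimal sketch of the gating logic with a hypothetical function name (the real setting lives on the domain object):

```rust
/// Whether easter eggs should be shown. `domain_opt_in` stands in for the
/// domain configuration value; when unset, development builds default to
/// on and release builds default to off.
fn easter_eggs_enabled(domain_opt_in: Option<bool>) -> bool {
    // `debug_assertions` is enabled in development builds and disabled in
    // release builds, which yields the two different defaults.
    domain_opt_in.unwrap_or(cfg!(debug_assertions))
}

fn main() {
    assert!(!easter_eggs_enabled(Some(false))); // explicit opt-out
    assert!(easter_eggs_enabled(Some(true)));   // explicit opt-in
    println!("default: {}", easter_eggs_enabled(None)); // build-dependent
}
```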
* Allow resetting account policy values to defaults
This allows the admin CLI to reset account policy values to
defaults by clearing them. Due to how account policy resolution
works, the absence of a value implies the default.
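A minimal sketch of why clearing works, with an assumed attribute and default value (the real defaults live in the server's account policy module):

```rust
// Assumed default, for illustration only.
const DEFAULT_AUTH_SESSION_EXPIRY_SECS: u32 = 3600;

/// Resolve the effective policy value: a present value wins, and the
/// *absence* of a value implies the default - which is why clearing
/// the attribute resets the policy.
fn effective_auth_session_expiry(stored: Option<u32>) -> u32 {
    stored.unwrap_or(DEFAULT_AUTH_SESSION_EXPIRY_SECS)
}

fn main() {
    assert_eq!(effective_auth_session_expiry(Some(7200)), 7200); // explicit value
    assert_eq!(effective_auth_session_expiry(None), 3600);       // cleared -> default
}
```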
While improving the webauthn feature handling yesterday, I accidentally left the mozilla feature enabled on Linux, which doesn't need it, and u2fhid enabled on macOS.
In the future the plan is to swap fully to u2fhid, but in the meantime we need mozilla for FreeBSD support. That's something I'll need to work on later with @micolous.
* feat: Rebuild the deb packaging flow
fix: Add more sudo, GHA likes sudo
fix: Give build_debs.sh only the triplet argument
fix: Work around more GHA weirdness in apt sources
Drop crossbuild as it was only used by debian packaging
docs: Update book and other docs for packaging flow
feat: package kanidm_tools aka kanidm cli
docs: Update packaging docs for latest process and clarity
fix: use full triple in sdynlib variants
fix: Correct kanidm.pam asset placement
fix: Give pam & nss modules a description so the debs get it
fix: Work around wonky libssl3 naming in Ubuntu 24.04
fix: Place kanidm bin correctly :3
feat: Pin all blame on @yaleman :3
WIP: Swap out the submodule reference. Still not the final one though.
refactor: Switch kanidm-pam & kanidm-nss to mandatory deps
While in theory unixd will start and run without them, it won't do
anything useful.
fix: explicit depends for nss & pam libs without versions
We build the debs on the ubuntu-24.04 GHA runner, so automatic
version pinning selects versions that are too new for 22.04.
Ideally we'd also run cargo-deb on the target images, but that'll
have to be a future improvement.
* refactor: Switch nss_kanidm & pam_kanidm package naming closer to debian guidance
* feat: Attempt enabling unixd by default with secure defaults
* fix: Relax config permissions so the kanidm user can read
Also, update postinst config instructions
* fix(scim_proto): fixing an issue with building due to dependencies
* feat(cli): more error message detail when things go wrong with images on the CLI
Incorrect logic in the credential update flow meant that valid transactions would not be allowed to commit, due to a mistake in the UI flow.
This is a skill issue on my part.
* fix typos and misspellings
* use proper capitalization
* Apply suggestions from code review
---------
Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
Thanks to @Seba-T's work with Orca, we were able to identify a number of performance issues under certain high-load conditions.
This commit contains fixes for the following issues:
* Unbounded Memory Growth - due to how ARCache works, to maintain temporal consistency it must retain copies of keys (not values) in a special data set for tracking. The Filter Resolve Cache was using unresolved filters as keys. This caused memory explosions when refint or memberof updated a group with a large number of members, because they would emit a query with hundreds of filter terms that would be used once and never again, causing the ARCache haunted set to grow without bound. To limit this, we no longer cache large/complex queries for resolution; in future we may implement other methods to reduce this further, such as keying the cache on a sha256/hmac of the queries (a sketch of the guard appears after this list).
* When creating a new account, dyngroups would be engaged to add the account as a member due to the matching scope. However, the change to the dyngroup was triggering an update of the memberof attributes of all the dyngroup's *members*. This meant that adding one account would trigger every other account to be loaded and updated.
* memberof would iterate over leaf entries and update them one at a time. This meant a large number of small, fragmented queries when many leaf entries were being updated. Now leaf entries are updated in a single stripe once groups have stabilised.
* memberof would always trigger its members to update. Instead, we now only update members where a difference is observed, or all members if the group's own memberof has changed, since that change needs to propagate to all leaf entries. This significantly reduces the number of writes and the work needed to examine the changed memberof set.
* Referential integrity would examine all reference uuids on entries for validity, rather than just the reference uuids that were altered within the transaction. With this change, only uuids that were *added* are validated during an operation.
* Async write-backs (delayed actions) were performed one at a time. Now, when possible, they are batched into a single transaction: the write transaction caches all writes in memory until commit, so batching reduces overall latency (a sketch appears after this list).
* In the server there can only be one write transaction but many readers. These are guarded by tokio semaphores that act as fair queues - first in gets the lock next. Due to the design of the server, readers would block on the *database* semaphore, while writers would block on the write semaphore and THEN the database semaphore. This arrangement unfairly advantaged readers over writers, as any write first had to reach the head of its own queue and then compete with all readers for a db transaction. Instead, we now have a reader semaphore sized at the number of threads minus one, clamped at a minimum of one. This means that provided there are two or more threads, a writer will *always* have a database handle available, and readers will pre-queue with each other before queueing on the db ticket. If there is only one thread, writes and reads alternate fairly (a sketch of this arrangement appears after this list).
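A minimal sketch of the filter-cache guard from the first item, with hypothetical types and an assumed threshold (the real cache is an ARCache keyed on Kanidm's filter type):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the real filter type.
#[derive(Clone, Hash, PartialEq, Eq)]
enum Filter {
    Eq(String, String),
    Or(Vec<Filter>),
}

impl Filter {
    // Number of leaf terms - a proxy for how large/complex the query is.
    fn term_count(&self) -> usize {
        match self {
            Filter::Eq(..) => 1,
            Filter::Or(parts) => parts.iter().map(Filter::term_count).sum(),
        }
    }
}

// Assumed threshold: a refint/memberof update of a huge group emits a
// one-shot Or-query with hundreds of terms that is never worth caching.
const FILTER_CACHE_MAX_TERMS: usize = 8;

fn maybe_cache(cache: &mut HashMap<Filter, Vec<u64>>, f: &Filter, resolved: Vec<u64>) {
    // Only small, likely-repeated filters enter the cache, so the cache's
    // key-tracking (haunted) set can no longer grow without bound.
    if f.term_count() <= FILTER_CACHE_MAX_TERMS {
        cache.insert(f.clone(), resolved);
    }
}

fn main() {
    let mut cache = HashMap::new();
    let small = Filter::Eq("name".into(), "admin".into());
    maybe_cache(&mut cache, &small, vec![1]);
    assert!(cache.contains_key(&small));
}
```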
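A minimal sketch of the delayed-action batching, with hypothetical types standing in for the real server and action types:

```rust
// Hypothetical write transaction: writes are cached in memory and only
// hit the database once, at commit.
struct WriteTxn {
    pending: Vec<String>,
}

impl WriteTxn {
    fn apply(&mut self, action: String) {
        // Cached in memory only.
        self.pending.push(action);
    }
    fn commit(self) {
        // Batching N actions into one transaction pays the commit cost
        // once, not N times.
        println!("committing {} queued writes", self.pending.len());
    }
}

fn apply_delayed_actions(actions: Vec<String>) {
    // Before: one transaction (and one commit) per action.
    // After: a single transaction covering the whole batch.
    let mut txn = WriteTxn { pending: Vec::new() };
    for action in actions {
        txn.apply(action);
    }
    txn.commit();
}

fn main() {
    apply_delayed_actions(vec!["pw-upgrade".into(), "session-update".into()]);
}
```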
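And a minimal sketch of the new semaphore arrangement, with illustrative names (not Kanidm's actual types):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

struct ServerLocks {
    // Readers queue here first, so they contend with each other before
    // touching the database tickets.
    reader_tickets: Arc<Semaphore>,
    // Every transaction (read or write) needs a database ticket.
    db_tickets: Arc<Semaphore>,
}

impl ServerLocks {
    fn new(threads: usize) -> Self {
        // threads - 1, clamped to a minimum of 1: with two or more threads
        // a writer always finds a db ticket free; with one thread, readers
        // and the writer simply alternate fairly.
        let readers = threads.saturating_sub(1).max(1);
        ServerLocks {
            reader_tickets: Arc::new(Semaphore::new(readers)),
            db_tickets: Arc::new(Semaphore::new(threads)),
        }
    }

    async fn begin_read(&self) {
        // Pre-queue with the other readers, then take a db ticket.
        let _r = self.reader_tickets.acquire().await.expect("semaphore closed");
        let _db = self.db_tickets.acquire().await.expect("semaphore closed");
        // ... perform the read transaction while both permits are held ...
    }
}

#[tokio::main]
async fn main() {
    let locks = ServerLocks::new(4);
    locks.begin_read().await;
}
```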
* kanidm cli logs on debug level - Fixes #2745
* such clippy like wow
* It's important for a wordsmith to know when to get its fixes in.
* updootin' wasms