kanidm/server/lib/src/constants/mod.rs

// Re-export as needed
pub mod acp;
pub mod entries;
pub mod groups;
mod key_providers;
pub mod schema;
pub mod system_config;
pub mod uuids;
pub mod values;
pub use self::acp::*;
pub use self::entries::*;
pub use self::groups::*;
pub use self::key_providers::*;
pub use self::schema::*;
pub use self::system_config::*;
pub use self::uuids::*;
pub use self::values::*;
use std::time::Duration;
// This value no longer requires incrementing during releases. It only
// serves as a "once off" marker so that we know when the initial db
// index is performed on first-run.
pub const SYSTEM_INDEX_VERSION: i64 = 31;
/*
* domain functional levels
*
* The idea here is to allow topology wide upgrades to be performed. We have to
* assume that across multiple kanidm instances there may be cases where we have version
* N and version N minus 1 as upgrades are rolled out.
*
* Imagine we set up a new cluster. Machine A and B both have level 1 support.
* We upgrade machine A. It has support up to level 2, but machine B does not.
* So the overall functional level is level 1. Then we upgrade B, which supports
* up to level 2. We still don't do the upgrade! The topology is still level 1
* unless an admin at this point *intervenes* and forces the update. OR what
 * happens when we update machine A again and it now supports up to level 3, with
* a target level of 2. So we update machine A now to level 2, and that can
* still replicate to machine B since it also supports level 2.
*
 * Effectively it means that "some features" may be a "release behind" for users
* who don't muck with the levels, but it means that we can do mixed version
* upgrades.
*/
pub type DomainVersion = u32;
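
// Illustrative sketch only, not used by the server: the effective functional
// level of a topology is the minimum of the maximum level that each node
// supports, which is the behaviour the comment above describes. The function
// name and shape are assumptions for the example.
#[allow(dead_code)]
fn effective_topology_level(node_max_levels: &[DomainVersion]) -> Option<DomainVersion> {
    node_max_levels.iter().copied().min()
}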
/// Domain level 0 - this indicates that this instance
/// is a new install and has never had a domain level
/// previously.
pub const DOMAIN_LEVEL_0: DomainVersion = 0;
/// Deprecated as of 1.3.0
pub const DOMAIN_LEVEL_5: DomainVersion = 5;
/// Domain Level introduced with 1.2.0.
/// Deprecated as of 1.4.0
pub const DOMAIN_LEVEL_6: DomainVersion = 6;
pub const PATCH_LEVEL_1: u32 = 1;
/// Domain Level introduced with 1.3.0.
/// Deprecated as of 1.5.0
pub const DOMAIN_LEVEL_7: DomainVersion = 7;
/// Domain Level introduced with 1.4.0.
/// Deprecated as of 1.6.0
pub const DOMAIN_LEVEL_8: DomainVersion = 8;
/// Domain Level introduced with 1.5.0.
/// Deprecated as of 1.7.0
pub const DOMAIN_LEVEL_9: DomainVersion = 9;
// The minimum level that we can re-migrate from.
// This should be DOMAIN_TGT_LEVEL minus 2
pub const DOMAIN_MIN_REMIGRATION_LEVEL: DomainVersion = DOMAIN_LEVEL_6;
// The minimum supported domain functional level (for replication)
pub const DOMAIN_MIN_LEVEL: DomainVersion = DOMAIN_TGT_LEVEL;
// The previous releases domain functional level
pub const DOMAIN_PREVIOUS_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_7;
// The target supported domain functional level. During development this is
// the NEXT level that users will upgrade to.
pub const DOMAIN_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_8;
// The current patch level if any out of band fixes are required.
pub const DOMAIN_TGT_PATCH_LEVEL: u32 = PATCH_LEVEL_1;
// The target domain functional level for the SUBSEQUENT release/dev cycle.
pub const DOMAIN_TGT_NEXT_LEVEL: DomainVersion = DOMAIN_LEVEL_9;
// The maximum supported domain functional level
pub const DOMAIN_MAX_LEVEL: DomainVersion = DOMAIN_LEVEL_9;
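
// Illustrative sketch only, not used by the server: a replication partner
// advertising a level outside the window we support could be rejected with a
// check like this. The real replication code performs richer validation.
#[allow(dead_code)]
fn remote_level_supported(remote: DomainVersion) -> bool {
    (DOMAIN_MIN_LEVEL..=DOMAIN_MAX_LEVEL).contains(&remote)
}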
// On test builds, this is defined as 60 seconds.
#[cfg(test)]
pub const PURGE_FREQUENCY: u64 = 60;
// For production, 10 minutes.
#[cfg(not(test))]
pub const PURGE_FREQUENCY: u64 = 600;
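
// Illustrative sketch only, not used by the server: the purge frequency is in
// seconds and would typically drive a recurring timer such as the one below.
// Assumes tokio is available, as it is elsewhere in the server.
#[allow(dead_code)]
fn purge_interval() -> tokio::time::Interval {
    tokio::time::interval(Duration::from_secs(PURGE_FREQUENCY))
}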
/// The number of delayed actions to consider per write transaction. Higher
/// values allow more coalescing to occur, but may consume more ram and cause
/// some latency while dequeuing and writing those operations.
pub const DELAYED_ACTION_BATCH_SIZE: usize = 256;
/// The amount of time to wait to acquire a database ticket before timing out.
/// Higher values allow greater operation queuing but can cause feedback
/// loops where operations will stall for long periods.
pub const DB_LOCK_ACQUIRE_TIMEOUT_MILLIS: u64 = 5000;
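
// Illustrative sketch only, not used by the server: how the acquisition
// timeout might bound a wait on a database ticket semaphore. `db_tickets` is
// an assumed name and the real server wiring is more involved than this.
#[allow(dead_code)]
async fn try_acquire_db_ticket(
    db_tickets: &tokio::sync::Semaphore,
) -> Option<tokio::sync::SemaphorePermit<'_>> {
    tokio::time::timeout(
        Duration::from_millis(DB_LOCK_ACQUIRE_TIMEOUT_MILLIS),
        db_tickets.acquire(),
    )
    .await
    .ok()
    .and_then(|permit| permit.ok())
}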
#[cfg(test)]
/// In test, we limit the changelog to 10 minutes.
pub const CHANGELOG_MAX_AGE: u64 = 600;
#[cfg(not(test))]
/// A replica may be up to 7 days out of sync before being denied updates.
pub const CHANGELOG_MAX_AGE: u64 = 7 * 86400;
#[cfg(test)]
/// In test, we limit the recyclebin to 5 minutes.
pub const RECYCLEBIN_MAX_AGE: u64 = 300;
#[cfg(not(test))]
/// In production we allow 1 week
pub const RECYCLEBIN_MAX_AGE: u64 = 7 * 86400;
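
// Illustrative sketch only, not used by the server: a replica that has not
// replicated within the changelog window is denied further incremental
// updates, per the comment above. The function name is an assumption.
#[allow(dead_code)]
fn replica_within_changelog_window(time_since_last_repl: Duration) -> bool {
    time_since_last_repl <= Duration::from_secs(CHANGELOG_MAX_AGE)
}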
// 5 minute auth session window.
pub const AUTH_SESSION_TIMEOUT: u64 = 300;
// 5 minute mfa reg window
pub const MFAREG_SESSION_TIMEOUT: u64 = 300;
pub const PW_MIN_LENGTH: u32 = 10;
// Maximum - Sessions have no upper bound.
pub const MAXIMUM_AUTH_SESSION_EXPIRY: u32 = u32::MAX;
// Default - sessions last for 1 day
pub const DEFAULT_AUTH_SESSION_EXPIRY: u32 = 86400;
// Maximum - privileges last for 1 hour.
pub const MAXIMUM_AUTH_PRIVILEGE_EXPIRY: u32 = 3600;
// Default - privileges last for 10 minutes.
pub const DEFAULT_AUTH_PRIVILEGE_EXPIRY: u32 = 600;
// Default - directly privileged sessions only last 1 hour.
pub const DEFAULT_AUTH_SESSION_LIMITED_EXPIRY: u32 = 3600;
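
// Illustrative sketch only, not used by the server: a requested session
// expiry would be defaulted and clamped against the maximum. Real session
// issuance applies more policy than this.
#[allow(dead_code)]
fn resolve_session_expiry(requested: Option<u32>) -> u32 {
    requested
        .unwrap_or(DEFAULT_AUTH_SESSION_EXPIRY)
        .min(MAXIMUM_AUTH_SESSION_EXPIRY)
}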
// Default - oauth refresh tokens last for 16 hours.
pub const OAUTH_REFRESH_TOKEN_EXPIRY: u64 = 3600 * 16;
/// How long access tokens should last. This is NOT the length
/// of the refresh token, which is bound to the issuing session.
pub const OAUTH2_ACCESS_TOKEN_EXPIRY: u32 = 15 * 60;
/// The amount of time a supplier's clock can be "ahead" before
/// we warn about possible clock synchronisation issues.
pub const REPL_SUPPLIER_ADVANCE_WINDOW: Duration = Duration::from_secs(600);
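
// Illustrative sketch only, not used by the server: warn when a supplier's
// clock appears ahead of ours by more than the advance window. `SystemTime`
// stands in for the server's actual time handling here.
#[allow(dead_code)]
fn supplier_clock_suspicious(
    supplier_time: std::time::SystemTime,
    local_time: std::time::SystemTime,
) -> bool {
    supplier_time
        .duration_since(local_time)
        .map(|ahead| ahead > REPL_SUPPLIER_ADVANCE_WINDOW)
        .unwrap_or(false)
}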
/// The number of days that the default replication MTLS cert lasts for when
/// configured manually. Defaults to 4 years (including 1 day for the leap year).
pub const REPL_MTLS_CERTIFICATE_DAYS: u32 = 1461;
/// The default number of entries that a user may retrieve in a search
pub const DEFAULT_LIMIT_SEARCH_MAX_RESULTS: u64 = 1024;
/// The default number of entries that an api token may retrieve in a search.
pub const DEFAULT_LIMIT_API_SEARCH_MAX_RESULTS: u64 = u64::MAX >> 1;
/// The default number of entries that may be examined in a partially indexed
/// query.
pub const DEFAULT_LIMIT_SEARCH_MAX_FILTER_TEST: u64 = 2048;
/// The default number of entries that may be examined in a partially indexed
/// query by an api token.
pub const DEFAULT_LIMIT_API_SEARCH_MAX_FILTER_TEST: u64 = 16384;
/// The maximum number of items in a filter, regardless of nesting level.
pub const DEFAULT_LIMIT_FILTER_MAX_ELEMENTS: u64 = 32;
/// The maximum amount of recursion allowed in a filter.
pub const DEFAULT_LIMIT_FILTER_DEPTH_MAX: u64 = 12;
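
// Illustrative sketch only, not used by the server: how the partially indexed
// query limits might gate a candidate set. The choice between the user and
// api token limits is an assumption for the example.
#[allow(dead_code)]
fn filter_test_within_limit(candidate_count: u64, api_token: bool) -> bool {
    let limit = if api_token {
        DEFAULT_LIMIT_API_SEARCH_MAX_FILTER_TEST
    } else {
        DEFAULT_LIMIT_SEARCH_MAX_FILTER_TEST
    };
    candidate_count <= limit
}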
/// The maximum number of sessions allowed on a single entry.
pub(crate) const SESSION_MAXIMUM: usize = 48;