20240301 systemd uid (#2602)

Fixes #2601. Fixes #393 - gid numbers can fall within the systemd nspawn range.

Previously we allocated gid numbers based on the fact that uid_t is a u32, so we allowed 65536 through u32::MAX. However, there are two major issues with this that I didn't realise. The first is that anything greater than i32::MAX (values of 2147483648 and above) can confuse the Linux kernel.

The second is that systemd allocates 524288 through 1879048191 to itself for nspawn.

This leaves us with only a few usable ranges:

- 1000 through 60000
- 60578 through 61183
- 65520 through 65533
- 65536 through 524287
- 1879048192 through 2147483647

The last range, being the largest, is the natural and obvious area we should allocate from. It also happens to align with the pattern 0x7000_0000 through 0x7fff_ffff, which means we can take the low bits of the UUID, apply a bit mask, and set the prefix so that the generated value is guaranteed to land in this range (see the sketch below).
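
To illustrate the idea, here is a minimal standalone sketch. This is not the exact plugin code; the prefix and mask mirror the `GID_SYSTEM_NUMBER_PREFIX` and `GID_SYSTEM_NUMBER_MASK` constants added later in this diff, and the example UUID and expected value come from the new plugin test.

```rust
use uuid::Uuid;

const GID_SYSTEM_NUMBER_PREFIX: u32 = 0x7000_0000;
const GID_SYSTEM_NUMBER_MASK: u32 = 0x0fff_ffff;

/// Map a UUID to a dynamically allocated gidnumber in 0x7000_0000..=0x7fff_ffff.
fn uuid_to_dynamic_gid(u: &Uuid) -> u32 {
    // Take the last four bytes of the UUID as a big-endian u32 ...
    let b = u.as_bytes();
    let low = u32::from_be_bytes([b[12], b[13], b[14], b[15]]);
    // ... keep only the bits covered by the mask, then force the 0x7 prefix so the
    // result always lands between 1879048192 (0x7000_0000) and 2147483647 (0x7fff_ffff).
    (low & GID_SYSTEM_NUMBER_MASK) | GID_SYSTEM_NUMBER_PREFIX
}

fn main() {
    let u = Uuid::parse_str("83a0927f-3de1-45ec-bea0-2f7b997ef244").unwrap();
    // Matches the value asserted in the gidnumber plugin test below.
    assert_eq!(uuid_to_dynamic_gid(&u), 0x797e_f244);
}
```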

However, tightening the range creates two further problems.

First, although our validation code now enforces the tighter range, existing deployments may already have allocated users into the now-disallowed ranges.

Second, external systems like FreeIPA have allocated uid/gid numbers with reckless abandon directly into these ranges.

As a result we need to make two concessions.

We *secretly* still allow manual allocation of IDs from 65536 through 1879048191, which includes the systemd nspawn container range. This happens to be the range that FreeIPA allocates into. We will never generate an ID in this range, but we allow it to be set in order to ease imports, since users of these ranges have already shown they 'don't care' about nspawn. This also helps SCIM imports for longer-term migrations. A simplified sketch of the resulting acceptance check follows.
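
As a rough illustration, the set of ranges accepted for a manually supplied gidnumber looks like the predicate below. The constants mirror those introduced in the gidnumber plugin later in this diff; the helper function itself is illustrative only and not part of the actual patch.

```rust
// Simplified sketch of the manual gidnumber acceptance check.
const GID_REGULAR_USER_MIN: u32 = 1000;
const GID_REGULAR_USER_MAX: u32 = 60000;
const GID_UNUSED_A_MIN: u32 = 60578;
const GID_UNUSED_A_MAX: u32 = 61183;
const GID_UNUSED_B_MIN: u32 = 65520;
const GID_UNUSED_B_MAX: u32 = 65533;
const GID_UNUSED_C_MIN: u32 = 65536;
const GID_UNUSED_C_MAX: u32 = 524287;
const GID_NSPAWN_MIN: u32 = 524288;
const GID_NSPAWN_MAX: u32 = 1879048191;
const GID_UNUSED_D_MIN: u32 = 0x7000_0000;
const GID_UNUSED_D_MAX: u32 = 0x7fff_ffff;

/// Returns true if a manually supplied gidnumber is accepted.
/// The nspawn range is quietly accepted for imports, but Kanidm
/// never *generates* values inside it.
fn manual_gid_is_allowed(gid: u32) -> bool {
    (GID_REGULAR_USER_MIN..=GID_REGULAR_USER_MAX).contains(&gid)
        || (GID_UNUSED_A_MIN..=GID_UNUSED_A_MAX).contains(&gid)
        || (GID_UNUSED_B_MIN..=GID_UNUSED_B_MAX).contains(&gid)
        || (GID_UNUSED_C_MIN..=GID_UNUSED_C_MAX).contains(&gid)
        || (GID_NSPAWN_MIN..=GID_NSPAWN_MAX).contains(&gid)
        || (GID_UNUSED_D_MIN..=GID_UNUSED_D_MAX).contains(&gid)
}

fn main() {
    assert!(manual_gid_is_allowed(10001));        // regular user range
    assert!(manual_gid_is_allowed(70000));        // FreeIPA-style import
    assert!(!manual_gid_is_allowed(61500));       // systemd dynamic service users
    assert!(!manual_gid_is_allowed(0x8000_0000)); // above the kernel-safe range
}
```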

The second concession covers IDs that fall outside the valid ranges entirely. In the extremely unlikely event this has occurred, a startup migration has been added that regenerates the ID values of affected entries to prevent upgrade issues.

A side effect of this change is that the range 524288 through 1879048191 is freed up for other subuid uses.
Firstyear 2024-03-07 13:25:54 +10:00 committed by GitHub
parent 221445d387
commit b4d9cdd7d5
30 changed files with 1183 additions and 532 deletions

View file

@ -50,18 +50,26 @@ this is the point.
### GID Number Generation

-Kanidm will have asynchronous replication as a feature between writable database servers. In this
-case, we need to be able to allocate stable and reliable GID numbers to accounts on replicas that
-may not be in continual communication.
+Kanidm has asynchronous replication as a feature between database servers. In this case, we need to
+be able to allocate stable and reliable GID numbers to accounts on replicas that may not be in
+continual communication.

-To do this, we use the last 32 bits of the account or group's UUID to generate the GID number.
+To do this, we use the last 24 bits of the account or group's UUID to generate the GID number. We
+can only use the UID range `1879048192` (`0x70000000`) to `2147483647` (`0x7fffffff`) due to
+limitations of the Linux kernel and [systemd reserving other uids in the range](http://systemd.io/UIDS-GIDS/) for its exclusive
+use.

-A valid concern is the possibility of duplication in the lower 32 bits. Given the birthday problem,
-if you have 77,000 groups and accounts, you have a 50% chance of duplication. With 50,000 you have a
-20% chance, 9,300 you have a 1% chance and with 2900 you have a 0.1% chance.
+A valid concern is the possibility of duplication in the lower 24 bits. Given the [birthday problem](https://en.wikipedia.org/wiki/Birthday_problem),
+if you have ~7700 groups and accounts, you have a 50% chance of duplication. With ~5000 you have a
+25% chance, ~930 you have a 1% chance and with 290 you have a 0.1% chance.

-We advise that if you have a site with >10,000 users you should use an external system to allocate
-GID numbers serially or consistently to avoid potential duplication events.
+We advise that if you have a site with greater than approximately 2,000 users you should use an
+external system to allocate GID numbers serially or consistently to avoid potential duplication
+events.
+
+We recommend the use of the range `65536` through `524287` for manual allocation. This leaves the
+range `1000` through `65535` for OS/Distro purposes, and allows Kanidm to continue dynamic
+allocation in the range `1879048192` to `2147483647` if you choose a hybrid allocation approach.

This design decision is made as most small sites will benefit greatly from the auto-allocation
policy and the simplicity of its design, while larger enterprises will already have IDM or business

View file

@ -1,6 +1,11 @@
use crate::{ClientError, KanidmClient}; use crate::{ClientError, KanidmClient};
impl KanidmClient { impl KanidmClient {
pub async fn idm_group_purge_attr(&self, id: &str, attr: &str) -> Result<(), ClientError> {
self.perform_delete_request(format!("/v1/group/{}/_attr/{}", id, attr).as_str())
.await
}
pub async fn group_account_policy_enable(&self, id: &str) -> Result<(), ClientError> { pub async fn group_account_policy_enable(&self, id: &str) -> Result<(), ClientError> {
self.perform_post_request( self.perform_post_request(
&format!("/v1/group/{}/_attr/class", id), &format!("/v1/group/{}/_attr/class", id),

View file

@ -117,8 +117,7 @@ pub enum OperationError {
ReplDomainUuidMismatch, ReplDomainUuidMismatch,
ReplServerUuidSplitDataState, ReplServerUuidSplitDataState,
TransactionAlreadyCommitted, TransactionAlreadyCommitted,
/// when you ask for a gid that's lower than a safe minimum /// when you ask for a gid that overlaps a system reserved range
GidOverlapsSystemMin(u32),
/// When a name is denied by the system config /// When a name is denied by the system config
ValueDenyName, ValueDenyName,
// What about something like this for unique errors? // What about something like this for unique errors?
@ -135,6 +134,11 @@ pub enum OperationError {
MG0001InvalidReMigrationLevel, MG0001InvalidReMigrationLevel,
MG0002RaiseDomainLevelExceedsMaximum, MG0002RaiseDomainLevelExceedsMaximum,
MG0003ServerPhaseInvalidForMigration, MG0003ServerPhaseInvalidForMigration,
MG0004DomainLevelInDevelopment,
MG0005GidConstraintsNotMet,
// Plugins
PL0001GidOverlapsSystemRange,
} }
impl PartialEq for OperationError { impl PartialEq for OperationError {

View file

@ -225,6 +225,29 @@ pub struct DomainInfo {
pub level: u32, pub level: u32,
} }
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DomainUpgradeCheckReport {
pub name: String,
pub uuid: Uuid,
pub current_level: u32,
pub upgrade_level: u32,
pub report_items: Vec<DomainUpgradeCheckItem>,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub enum DomainUpgradeCheckStatus {
Pass6To7Gidnumber,
Fail6To7Gidnumber,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DomainUpgradeCheckItem {
pub from_level: u32,
pub to_level: u32,
pub status: DomainUpgradeCheckStatus,
pub affected_entries: Vec<String>,
}
#[test] #[test]
fn test_fstype_deser() { fn test_fstype_deser() {
assert_eq!(FsType::try_from("zfs"), Ok(FsType::Zfs)); assert_eq!(FsType::try_from("zfs"), Ok(FsType::Zfs));

View file

@ -0,0 +1,170 @@
//! ⚠️ Operations in this set of actor handlers are INTERNAL and MAY bypass
//! access controls. Access is *IMPLIED* by the use of these via the internal
//! admin unixd socket.
use crate::{QueryServerReadV1, QueryServerWriteV1};
use tracing::{Instrument, Level};
use kanidmd_lib::prelude::*;
use kanidmd_lib::{
event::{PurgeRecycledEvent, PurgeTombstoneEvent},
idm::delayed::DelayedAction,
};
use kanidm_proto::internal::{
DomainInfo as ProtoDomainInfo, DomainUpgradeCheckReport as ProtoDomainUpgradeCheckReport,
};
impl QueryServerReadV1 {
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_domain_show(
&self,
eventid: Uuid,
) -> Result<ProtoDomainInfo, OperationError> {
let mut idms_prox_read = self.idms.proxy_read().await;
idms_prox_read.qs_read.domain_info()
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_domain_upgrade_check(
&self,
eventid: Uuid,
) -> Result<ProtoDomainUpgradeCheckReport, OperationError> {
let mut idms_prox_read = self.idms.proxy_read().await;
idms_prox_read.qs_read.domain_upgrade_check()
}
}
impl QueryServerWriteV1 {
#[instrument(
level = "info",
skip_all,
fields(uuid = ?msg.eventid)
)]
pub async fn handle_purgetombstoneevent(&self, msg: PurgeTombstoneEvent) {
let mut idms_prox_write = self.idms.proxy_write(duration_from_epoch_now()).await;
let res = idms_prox_write
.qs_write
.purge_tombstones()
.and_then(|_changed| idms_prox_write.commit());
match res {
Ok(()) => {
debug!("Purge tombstone success");
}
Err(err) => {
error!(?err, "Unable to purge tombstones");
}
}
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?msg.eventid)
)]
pub async fn handle_purgerecycledevent(&self, msg: PurgeRecycledEvent) {
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
let res = idms_prox_write
.qs_write
.purge_recycled()
.and_then(|touched| {
// don't need to commit a txn with no changes
if touched > 0 {
idms_prox_write.commit()
} else {
Ok(())
}
});
match res {
Ok(()) => {
debug!("Purge recyclebin success");
}
Err(err) => {
error!(?err, "Unable to purge recyclebin");
}
}
}
pub(crate) async fn handle_delayedaction(&self, da: DelayedAction) {
let eventid = Uuid::new_v4();
let span = span!(Level::INFO, "process_delayed_action", uuid = ?eventid);
async {
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
if let Err(res) = idms_prox_write
.process_delayedaction(da, ct)
.and_then(|_| idms_prox_write.commit())
{
info!(?res, "delayed action error");
}
}
.instrument(span)
.await
}
#[instrument(
level = "info",
skip(self, eventid),
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_admin_recover_account(
&self,
name: String,
eventid: Uuid,
) -> Result<String, OperationError> {
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
let pw = idms_prox_write.recover_account(name.as_str(), None)?;
idms_prox_write.commit().map(|()| pw)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_domain_raise(&self, eventid: Uuid) -> Result<u32, OperationError> {
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
idms_prox_write.qs_write.domain_raise(DOMAIN_MAX_LEVEL)?;
idms_prox_write.commit().map(|()| DOMAIN_MAX_LEVEL)
}
#[instrument(
level = "info",
skip(self, eventid),
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_domain_remigrate(
&self,
level: Option<u32>,
eventid: Uuid,
) -> Result<(), OperationError> {
let level = level.unwrap_or(DOMAIN_MIN_REMIGRATION_LEVEL);
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
idms_prox_write.qs_write.domain_remigrate(level)?;
idms_prox_write.commit()
}
}

View file

@ -2,6 +2,48 @@
//! components to conduct operations. These are separated based on protocol versions and //! components to conduct operations. These are separated based on protocol versions and
//! if they are read or write transactions internally. //! if they are read or write transactions internally.
use kanidmd_lib::idm::ldap::LdapServer;
use kanidmd_lib::idm::server::IdmServer;
use std::sync::Arc;
pub struct QueryServerReadV1 {
pub(crate) idms: Arc<IdmServer>,
ldap: Arc<LdapServer>,
}
impl QueryServerReadV1 {
pub fn new(idms: Arc<IdmServer>, ldap: Arc<LdapServer>) -> Self {
debug!("Starting query server read worker ...");
QueryServerReadV1 { idms, ldap }
}
pub fn start_static(idms: Arc<IdmServer>, ldap: Arc<LdapServer>) -> &'static Self {
let x = Box::new(QueryServerReadV1::new(idms, ldap));
let x_ref = Box::leak(x);
&(*x_ref)
}
}
pub struct QueryServerWriteV1 {
pub(crate) idms: Arc<IdmServer>,
}
impl QueryServerWriteV1 {
pub fn new(idms: Arc<IdmServer>) -> Self {
debug!("Starting a query server write worker ...");
QueryServerWriteV1 { idms }
}
pub fn start_static(idms: Arc<IdmServer>) -> &'static QueryServerWriteV1 {
let x = Box::new(QueryServerWriteV1::new(idms));
let x_ptr = Box::leak(x);
&(*x_ptr)
}
}
pub mod internal;
pub mod v1_read; pub mod v1_read;
pub mod v1_scim; pub mod v1_scim;
pub mod v1_write; pub mod v1_write;

View file

@ -2,7 +2,6 @@ use std::convert::TryFrom;
use std::fs; use std::fs;
use std::net::IpAddr; use std::net::IpAddr;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::Arc;
use kanidm_proto::internal::{ use kanidm_proto::internal::{
ApiToken, AppLink, BackupCodesView, CURequest, CUSessionToken, CUStatus, CredentialStatus, ApiToken, AppLink, BackupCodesView, CURequest, CUSessionToken, CUStatus, CredentialStatus,
@ -32,37 +31,22 @@ use kanidmd_lib::{
AuthEvent, AuthResult, CredentialStatusEvent, RadiusAuthTokenEvent, ReadBackupCodeEvent, AuthEvent, AuthResult, CredentialStatusEvent, RadiusAuthTokenEvent, ReadBackupCodeEvent,
UnixGroupTokenEvent, UnixUserAuthEvent, UnixUserTokenEvent, UnixGroupTokenEvent, UnixUserAuthEvent, UnixUserTokenEvent,
}, },
idm::ldap::{LdapBoundToken, LdapResponseState, LdapServer}, idm::ldap::{LdapBoundToken, LdapResponseState},
idm::oauth2::{ idm::oauth2::{
AccessTokenIntrospectRequest, AccessTokenIntrospectResponse, AuthorisationRequest, AccessTokenIntrospectRequest, AccessTokenIntrospectResponse, AuthorisationRequest,
AuthoriseResponse, JwkKeySet, Oauth2Error, Oauth2Rfc8414MetadataResponse, AuthoriseResponse, JwkKeySet, Oauth2Error, Oauth2Rfc8414MetadataResponse,
OidcDiscoveryResponse, OidcToken, OidcDiscoveryResponse, OidcToken,
}, },
idm::server::{IdmServer, IdmServerTransaction}, idm::server::IdmServerTransaction,
idm::serviceaccount::ListApiTokenEvent, idm::serviceaccount::ListApiTokenEvent,
idm::ClientAuthInfo, idm::ClientAuthInfo,
}; };
use super::QueryServerReadV1;
// =========================================================== // ===========================================================
pub struct QueryServerReadV1 {
pub(crate) idms: Arc<IdmServer>,
ldap: Arc<LdapServer>,
}
impl QueryServerReadV1 { impl QueryServerReadV1 {
pub fn new(idms: Arc<IdmServer>, ldap: Arc<LdapServer>) -> Self {
info!("Starting query server v1 worker ...");
QueryServerReadV1 { idms, ldap }
}
pub fn start_static(idms: Arc<IdmServer>, ldap: Arc<LdapServer>) -> &'static Self {
let x = Box::new(QueryServerReadV1::new(idms, ldap));
let x_ref = Box::leak(x);
&(*x_ref)
}
// The server only receives "Message" structures, which // The server only receives "Message" structures, which
// are whole self contained DB operations with all parsing // are whole self contained DB operations with all parsing
// required complete. We still need to do certain validation steps, but // required complete. We still need to do certain validation steps, but

View file

@ -1,6 +1,5 @@
use kanidmd_lib::prelude::*; use kanidmd_lib::prelude::*;
use crate::{QueryServerReadV1, QueryServerWriteV1};
use kanidmd_lib::idm::scim::{ use kanidmd_lib::idm::scim::{
GenerateScimSyncTokenEvent, ScimSyncFinaliseEvent, ScimSyncTerminateEvent, ScimSyncUpdateEvent, GenerateScimSyncTokenEvent, ScimSyncFinaliseEvent, ScimSyncTerminateEvent, ScimSyncUpdateEvent,
}; };
@ -8,6 +7,8 @@ use kanidmd_lib::idm::server::IdmServerTransaction;
use kanidm_proto::scim_v1::{ScimSyncRequest, ScimSyncState}; use kanidm_proto::scim_v1::{ScimSyncRequest, ScimSyncState};
use super::{QueryServerReadV1, QueryServerWriteV1};
impl QueryServerWriteV1 { impl QueryServerWriteV1 {
#[instrument( #[instrument(
level = "info", level = "info",

View file

@ -1,34 +1,29 @@
use std::{iter, sync::Arc}; use std::iter;
use kanidm_proto::internal::{ use kanidm_proto::internal::{
CUIntentToken, CUSessionToken, CUStatus, CreateRequest, DeleteRequest, CUIntentToken, CUSessionToken, CUStatus, CreateRequest, DeleteRequest, ImageValue,
DomainInfo as ProtoDomainInfo, ImageValue, Modify as ProtoModify, Modify as ProtoModify, ModifyList as ProtoModifyList, ModifyRequest,
ModifyList as ProtoModifyList, ModifyRequest, Oauth2ClaimMapJoin as ProtoOauth2ClaimMapJoin, Oauth2ClaimMapJoin as ProtoOauth2ClaimMapJoin, OperationError,
OperationError,
}; };
use kanidm_proto::v1::{AccountUnixExtend, Entry as ProtoEntry, GroupUnixExtend}; use kanidm_proto::v1::{AccountUnixExtend, Entry as ProtoEntry, GroupUnixExtend};
use time::OffsetDateTime; use time::OffsetDateTime;
use tracing::{info, instrument, span, trace, Instrument, Level}; use tracing::{info, instrument, trace};
use uuid::Uuid; use uuid::Uuid;
use kanidmd_lib::{ use kanidmd_lib::{
event::{ event::{CreateEvent, DeleteEvent, ModifyEvent, ReviveRecycledEvent},
CreateEvent, DeleteEvent, ModifyEvent, PurgeRecycledEvent, PurgeTombstoneEvent,
ReviveRecycledEvent,
},
filter::{Filter, FilterInvalid}, filter::{Filter, FilterInvalid},
idm::account::DestroySessionTokenEvent, idm::account::DestroySessionTokenEvent,
idm::credupdatesession::{ idm::credupdatesession::{
CredentialUpdateIntentToken, CredentialUpdateSessionToken, InitCredentialUpdateEvent, CredentialUpdateIntentToken, CredentialUpdateSessionToken, InitCredentialUpdateEvent,
InitCredentialUpdateIntentEvent, InitCredentialUpdateIntentEvent,
}, },
idm::delayed::DelayedAction,
idm::event::{GeneratePasswordEvent, RegenerateRadiusSecretEvent, UnixPasswordChangeEvent}, idm::event::{GeneratePasswordEvent, RegenerateRadiusSecretEvent, UnixPasswordChangeEvent},
idm::oauth2::{ idm::oauth2::{
AccessTokenRequest, AccessTokenResponse, AuthorisePermitSuccess, Oauth2Error, AccessTokenRequest, AccessTokenResponse, AuthorisePermitSuccess, Oauth2Error,
TokenRevokeRequest, TokenRevokeRequest,
}, },
idm::server::{IdmServer, IdmServerTransaction}, idm::server::IdmServerTransaction,
idm::serviceaccount::{DestroyApiTokenEvent, GenerateApiTokenEvent}, idm::serviceaccount::{DestroyApiTokenEvent, GenerateApiTokenEvent},
modify::{Modify, ModifyInvalid, ModifyList}, modify::{Modify, ModifyInvalid, ModifyList},
value::{OauthClaimMapJoin, PartialValue, Value}, value::{OauthClaimMapJoin, PartialValue, Value},
@ -36,23 +31,9 @@ use kanidmd_lib::{
use kanidmd_lib::prelude::*; use kanidmd_lib::prelude::*;
pub struct QueryServerWriteV1 { use super::QueryServerWriteV1;
pub(crate) idms: Arc<IdmServer>,
}
impl QueryServerWriteV1 { impl QueryServerWriteV1 {
pub fn new(idms: Arc<IdmServer>) -> Self {
debug!("Starting a query server v1 worker ...");
QueryServerWriteV1 { idms }
}
pub fn start_static(idms: Arc<IdmServer>) -> &'static QueryServerWriteV1 {
let x = Box::new(QueryServerWriteV1::new(idms));
let x_ptr = Box::leak(x);
&(*x_ptr)
}
#[instrument(level = "debug", skip_all)] #[instrument(level = "debug", skip_all)]
async fn modify_from_parts( async fn modify_from_parts(
&self, &self,
@ -1727,150 +1708,4 @@ impl QueryServerWriteV1 {
.oauth2_token_revoke(&client_authz, &intr_req, ct) .oauth2_token_revoke(&client_authz, &intr_req, ct)
.and_then(|()| idms_prox_write.commit().map_err(Oauth2Error::ServerError)) .and_then(|()| idms_prox_write.commit().map_err(Oauth2Error::ServerError))
} }
// ===== These below are internal only event types. =====
#[instrument(
level = "info",
skip_all,
fields(uuid = ?msg.eventid)
)]
pub async fn handle_purgetombstoneevent(&self, msg: PurgeTombstoneEvent) {
trace!(?msg, "Begin purge tombstone event");
let mut idms_prox_write = self.idms.proxy_write(duration_from_epoch_now()).await;
let res = idms_prox_write
.qs_write
.purge_tombstones()
.and_then(|_changed| idms_prox_write.commit());
match res {
Ok(()) => {
debug!("Purge tombstone success");
}
Err(err) => {
error!(?err, "Unable to purge tombstones");
}
}
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?msg.eventid)
)]
pub async fn handle_purgerecycledevent(&self, msg: PurgeRecycledEvent) {
trace!(?msg, "Begin purge recycled event");
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
let res = idms_prox_write
.qs_write
.purge_recycled()
.and_then(|touched| {
// don't need to commit a txn with no changes
if touched > 0 {
idms_prox_write.commit()
} else {
Ok(())
}
});
match res {
Ok(()) => {
debug!("Purge recyclebin success");
}
Err(err) => {
error!(?err, "Unable to purge recyclebin");
}
}
}
pub(crate) async fn handle_delayedaction(&self, da: DelayedAction) {
let eventid = Uuid::new_v4();
let span = span!(Level::INFO, "process_delayed_action", uuid = ?eventid);
async {
trace!("Begin delayed action ...");
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
if let Err(res) = idms_prox_write
.process_delayedaction(da, ct)
.and_then(|_| idms_prox_write.commit())
{
info!(?res, "delayed action error");
}
}
.instrument(span)
.await
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_admin_recover_account(
&self,
name: String,
eventid: Uuid,
) -> Result<String, OperationError> {
trace!(%name, "Begin admin recover account event");
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
let pw = idms_prox_write.recover_account(name.as_str(), None)?;
idms_prox_write.commit().map(|()| pw)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_domain_show(
&self,
eventid: Uuid,
) -> Result<ProtoDomainInfo, OperationError> {
trace!("Begin domain show event");
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
let domain_info = idms_prox_write.qs_write.domain_info()?;
idms_prox_write.commit().map(|()| domain_info)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_domain_raise(&self, eventid: Uuid) -> Result<u32, OperationError> {
trace!("Begin domain raise event");
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
idms_prox_write.qs_write.domain_raise(DOMAIN_MAX_LEVEL)?;
idms_prox_write.commit().map(|()| DOMAIN_MAX_LEVEL)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub(crate) async fn handle_domain_remigrate(
&self,
level: Option<u32>,
eventid: Uuid,
) -> Result<(), OperationError> {
let level = level.unwrap_or(DOMAIN_MIN_REMIGRATION_LEVEL);
trace!(%level, "Begin domain remigrate event");
let ct = duration_from_epoch_now();
let mut idms_prox_write = self.idms.proxy_write(ct).await;
idms_prox_write.qs_write.domain_remigrate(level)?;
idms_prox_write.commit()
}
} }

View file

@ -1,4 +1,4 @@
use crate::actors::v1_write::QueryServerWriteV1; use crate::actors::{QueryServerReadV1, QueryServerWriteV1};
use crate::repl::ReplCtrl; use crate::repl::ReplCtrl;
use crate::CoreAction; use crate::CoreAction;
use bytes::{BufMut, BytesMut}; use bytes::{BufMut, BytesMut};
@ -17,7 +17,10 @@ use tokio_util::codec::{Decoder, Encoder, Framed};
use tracing::{span, Instrument, Level}; use tracing::{span, Instrument, Level};
use uuid::Uuid; use uuid::Uuid;
pub use kanidm_proto::internal::DomainInfo as ProtoDomainInfo; pub use kanidm_proto::internal::{
DomainInfo as ProtoDomainInfo, DomainUpgradeCheckReport as ProtoDomainUpgradeCheckReport,
DomainUpgradeCheckStatus as ProtoDomainUpgradeCheckStatus,
};
#[derive(Serialize, Deserialize, Debug)] #[derive(Serialize, Deserialize, Debug)]
pub enum AdminTaskRequest { pub enum AdminTaskRequest {
@ -26,16 +29,28 @@ pub enum AdminTaskRequest {
RenewReplicationCertificate, RenewReplicationCertificate,
RefreshReplicationConsumer, RefreshReplicationConsumer,
DomainShow, DomainShow,
DomainUpgradeCheck,
DomainRaise, DomainRaise,
DomainRemigrate { level: Option<u32> }, DomainRemigrate { level: Option<u32> },
} }
#[derive(Serialize, Deserialize, Debug)] #[derive(Serialize, Deserialize, Debug)]
pub enum AdminTaskResponse { pub enum AdminTaskResponse {
RecoverAccount { password: String }, RecoverAccount {
ShowReplicationCertificate { cert: String }, password: String,
DomainRaise { level: u32 }, },
DomainShow { domain_info: ProtoDomainInfo }, ShowReplicationCertificate {
cert: String,
},
DomainUpgradeCheck {
report: ProtoDomainUpgradeCheckReport,
},
DomainRaise {
level: u32,
},
DomainShow {
domain_info: ProtoDomainInfo,
},
Success, Success,
Error, Error,
} }
@ -113,7 +128,8 @@ pub(crate) struct AdminActor;
impl AdminActor { impl AdminActor {
pub async fn create_admin_sock( pub async fn create_admin_sock(
sock_path: &str, sock_path: &str,
server: &'static QueryServerWriteV1, server_rw: &'static QueryServerWriteV1,
server_ro: &'static QueryServerReadV1,
mut broadcast_rx: broadcast::Receiver<CoreAction>, mut broadcast_rx: broadcast::Receiver<CoreAction>,
repl_ctrl_tx: Option<mpsc::Sender<ReplCtrl>>, repl_ctrl_tx: Option<mpsc::Sender<ReplCtrl>>,
) -> Result<tokio::task::JoinHandle<()>, ()> { ) -> Result<tokio::task::JoinHandle<()>, ()> {
@ -163,7 +179,7 @@ impl AdminActor {
// spawn the worker. // spawn the worker.
let task_repl_ctrl_tx = repl_ctrl_tx.clone(); let task_repl_ctrl_tx = repl_ctrl_tx.clone();
tokio::spawn(async move { tokio::spawn(async move {
if let Err(e) = handle_client(socket, server, task_repl_ctrl_tx).await { if let Err(e) = handle_client(socket, server_rw, server_ro, task_repl_ctrl_tx).await {
error!(err = ?e, "admin client error"); error!(err = ?e, "admin client error");
} }
}); });
@ -277,7 +293,8 @@ async fn replication_consumer_refresh(ctrl_tx: &mut mpsc::Sender<ReplCtrl>) -> A
async fn handle_client( async fn handle_client(
sock: UnixStream, sock: UnixStream,
server: &'static QueryServerWriteV1, server_rw: &'static QueryServerWriteV1,
server_ro: &'static QueryServerReadV1,
mut repl_ctrl_tx: Option<mpsc::Sender<ReplCtrl>>, mut repl_ctrl_tx: Option<mpsc::Sender<ReplCtrl>>,
) -> Result<(), Box<dyn Error>> { ) -> Result<(), Box<dyn Error>> {
debug!("Accepted admin socket connection"); debug!("Accepted admin socket connection");
@ -293,7 +310,7 @@ async fn handle_client(
let resp = async { let resp = async {
match req { match req {
AdminTaskRequest::RecoverAccount { name } => { AdminTaskRequest::RecoverAccount { name } => {
match server.handle_admin_recover_account(name, eventid).await { match server_rw.handle_admin_recover_account(name, eventid).await {
Ok(password) => AdminTaskResponse::RecoverAccount { password }, Ok(password) => AdminTaskResponse::RecoverAccount { password },
Err(e) => { Err(e) => {
error!(err = ?e, "error during recover-account"); error!(err = ?e, "error during recover-account");
@ -323,14 +340,24 @@ async fn handle_client(
} }
}, },
AdminTaskRequest::DomainShow => match server.handle_domain_show(eventid).await { AdminTaskRequest::DomainShow => match server_ro.handle_domain_show(eventid).await {
Ok(domain_info) => AdminTaskResponse::DomainShow { domain_info }, Ok(domain_info) => AdminTaskResponse::DomainShow { domain_info },
Err(e) => { Err(e) => {
error!(err = ?e, "error during domain show"); error!(err = ?e, "error during domain show");
AdminTaskResponse::Error AdminTaskResponse::Error
} }
}, },
AdminTaskRequest::DomainRaise => match server.handle_domain_raise(eventid).await { AdminTaskRequest::DomainUpgradeCheck => {
match server_ro.handle_domain_upgrade_check(eventid).await {
Ok(report) => AdminTaskResponse::DomainUpgradeCheck { report },
Err(e) => {
error!(err = ?e, "error during domain upgrade check");
AdminTaskResponse::Error
}
}
}
AdminTaskRequest::DomainRaise => match server_rw.handle_domain_raise(eventid).await
{
Ok(level) => AdminTaskResponse::DomainRaise { level }, Ok(level) => AdminTaskResponse::DomainRaise { level },
Err(e) => { Err(e) => {
error!(err = ?e, "error during domain raise"); error!(err = ?e, "error during domain raise");
@ -338,7 +365,7 @@ async fn handle_client(
} }
}, },
AdminTaskRequest::DomainRemigrate { level } => { AdminTaskRequest::DomainRemigrate { level } => {
match server.handle_domain_remigrate(level, eventid).await { match server_rw.handle_domain_remigrate(level, eventid).await {
Ok(()) => AdminTaskResponse::Success, Ok(()) => AdminTaskResponse::Success,
Err(e) => { Err(e) => {
error!(err = ?e, "error during domain remigrate"); error!(err = ?e, "error during domain remigrate");

View file

@ -16,8 +16,7 @@ mod v1_scim;
use self::extractors::ClientConnInfo; use self::extractors::ClientConnInfo;
use self::javascript::*; use self::javascript::*;
use crate::actors::v1_read::QueryServerReadV1; use crate::actors::{QueryServerReadV1, QueryServerWriteV1};
use crate::actors::v1_write::QueryServerWriteV1;
use crate::config::{Configuration, ServerRole, TlsConfiguration}; use crate::config::{Configuration, ServerRole, TlsConfiguration};
use axum::extract::connect_info::IntoMakeServiceWithConnectInfo; use axum::extract::connect_info::IntoMakeServiceWithConnectInfo;
use axum::http::{HeaderMap, HeaderValue}; use axum::http::{HeaderMap, HeaderValue};
@ -60,11 +59,11 @@ use self::v1::SessionId;
#[derive(Clone, FromRef)] #[derive(Clone, FromRef)]
pub struct ServerState { pub struct ServerState {
pub status_ref: &'static kanidmd_lib::status::StatusActor, pub status_ref: &'static StatusActor,
pub qe_w_ref: &'static crate::actors::v1_write::QueryServerWriteV1, pub qe_w_ref: &'static QueryServerWriteV1,
pub qe_r_ref: &'static crate::actors::v1_read::QueryServerReadV1, pub qe_r_ref: &'static QueryServerReadV1,
// Store the token management parts. // Store the token management parts.
pub jws_signer: compact_jwt::JwsHs256Signer, pub jws_signer: JwsHs256Signer,
// The SHA384 hashes of javascript files we're going to serve to users // The SHA384 hashes of javascript files we're going to serve to users
pub js_files: JavaScriptFiles, pub js_files: JavaScriptFiles,
pub(crate) trust_x_forward_for: bool, pub(crate) trust_x_forward_for: bool,

View file

@ -14,8 +14,7 @@ use tokio::time::{interval, sleep, Duration, MissedTickBehavior};
use crate::config::OnlineBackup; use crate::config::OnlineBackup;
use crate::CoreAction; use crate::CoreAction;
use crate::actors::v1_read::QueryServerReadV1; use crate::actors::{QueryServerReadV1, QueryServerWriteV1};
use crate::actors::v1_write::QueryServerWriteV1;
use kanidmd_lib::constants::PURGE_FREQUENCY; use kanidmd_lib::constants::PURGE_FREQUENCY;
use kanidmd_lib::event::{OnlineBackupEvent, PurgeRecycledEvent, PurgeTombstoneEvent}; use kanidmd_lib::event::{OnlineBackupEvent, PurgeRecycledEvent, PurgeTombstoneEvent};

View file

@ -2,7 +2,7 @@ use std::net;
use std::pin::Pin; use std::pin::Pin;
use std::str::FromStr; use std::str::FromStr;
use crate::actors::v1_read::QueryServerReadV1; use crate::actors::QueryServerReadV1;
use futures_util::sink::SinkExt; use futures_util::sink::SinkExt;
use futures_util::stream::StreamExt; use futures_util::stream::StreamExt;
use kanidmd_lib::idm::ldap::{LdapBoundToken, LdapResponseState}; use kanidmd_lib::idm::ldap::{LdapBoundToken, LdapResponseState};

View file

@ -53,8 +53,7 @@ use libc::umask;
use tokio::sync::broadcast; use tokio::sync::broadcast;
use tokio::task::JoinHandle; use tokio::task::JoinHandle;
use crate::actors::v1_read::QueryServerReadV1; use crate::actors::{QueryServerReadV1, QueryServerWriteV1};
use crate::actors::v1_write::QueryServerWriteV1;
use crate::admin::AdminActor; use crate::admin::AdminActor;
use crate::config::{Configuration, ServerRole}; use crate::config::{Configuration, ServerRole};
use crate::interval::IntervalActor; use crate::interval::IntervalActor;
@ -1097,6 +1096,7 @@ pub async fn create_server_core(
let admin_handle = AdminActor::create_admin_sock( let admin_handle = AdminActor::create_admin_sock(
config.adminbindpath.as_str(), config.adminbindpath.as_str(),
server_write_ref, server_write_ref,
server_read_ref,
broadcast_rx, broadcast_rx,
maybe_repl_ctrl_tx, maybe_repl_ctrl_tx,
) )

View file

@ -29,7 +29,10 @@ use clap::{Args, Parser, Subcommand};
use futures::{SinkExt, StreamExt}; use futures::{SinkExt, StreamExt};
#[cfg(not(target_family = "windows"))] // not needed for windows builds #[cfg(not(target_family = "windows"))] // not needed for windows builds
use kanidm_utils_users::{get_current_gid, get_current_uid, get_effective_gid, get_effective_uid}; use kanidm_utils_users::{get_current_gid, get_current_uid, get_effective_gid, get_effective_uid};
use kanidmd_core::admin::{AdminTaskRequest, AdminTaskResponse, ClientCodec, ProtoDomainInfo}; use kanidmd_core::admin::{
AdminTaskRequest, AdminTaskResponse, ClientCodec, ProtoDomainInfo,
ProtoDomainUpgradeCheckReport, ProtoDomainUpgradeCheckStatus,
};
use kanidmd_core::config::{Configuration, ServerConfig}; use kanidmd_core::config::{Configuration, ServerConfig};
use kanidmd_core::{ use kanidmd_core::{
backup_server_core, cert_generate_core, create_server_core, dbscan_get_id2entry_core, backup_server_core, cert_generate_core, create_server_core, dbscan_get_id2entry_core,
@ -83,7 +86,6 @@ impl KanidmdOpt {
KanidmdOpt::DbScan { KanidmdOpt::DbScan {
commands: DbScanOpt::ListIndex(dopt), commands: DbScanOpt::ListIndex(dopt),
} => &dopt.commonopts, } => &dopt.commonopts,
// KanidmdOpt::DbScan(DbScanOpt::GetIndex(dopt)) => &dopt.commonopts,
KanidmdOpt::DbScan { KanidmdOpt::DbScan {
commands: DbScanOpt::GetId2Entry(dopt), commands: DbScanOpt::GetId2Entry(dopt),
} => &dopt.commonopts, } => &dopt.commonopts,
@ -93,6 +95,9 @@ impl KanidmdOpt {
| KanidmdOpt::DomainSettings { | KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::Change { commonopts }, commands: DomainSettingsCmds::Change { commonopts },
} }
| KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::UpgradeCheck { commonopts },
}
| KanidmdOpt::DomainSettings { | KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::Raise { commonopts }, commands: DomainSettingsCmds::Raise { commonopts },
} }
@ -170,6 +175,52 @@ async fn submit_admin_req(path: &str, req: AdminTaskRequest, output_mode: Consol
} }
}, },
Some(Ok(AdminTaskResponse::DomainUpgradeCheck { report })) => match output_mode {
ConsoleOutputMode::JSON => {
let json_output = serde_json::json!({
"domain_upgrade_check": report
});
println!("{}", json_output);
}
ConsoleOutputMode::Text => {
let ProtoDomainUpgradeCheckReport {
name,
uuid,
current_level,
upgrade_level,
report_items,
} = report;
info!("domain_name : {}", name);
info!("domain_uuid : {}", uuid);
info!("domain_current_level : {}", current_level);
info!("domain_upgrade_level : {}", upgrade_level);
for item in report_items {
info!("------------------------");
match item.status {
ProtoDomainUpgradeCheckStatus::Pass6To7Gidnumber => {
info!("upgrade_item : gidnumber range validity");
debug!("from_level : {}", item.from_level);
debug!("to_level : {}", item.to_level);
info!("status : PASS");
}
ProtoDomainUpgradeCheckStatus::Fail6To7Gidnumber => {
info!("upgrade_item : gidnumber range validity");
debug!("from_level : {}", item.from_level);
debug!("to_level : {}", item.to_level);
info!("status : FAIL");
info!("description : The automatically allocated gidnumbers for posix accounts was found to allocate numbers into systemd-reserved ranges. These can no longer be used.");
info!("action : Modify the gidnumber of affected entries so that they are in the range 65536 to 524287 OR reset the gidnumber to cause it to automatically regenerate.");
for entry_id in item.affected_entries {
info!("affected_entry : {}", entry_id);
}
}
}
}
}
},
Some(Ok(AdminTaskResponse::DomainRaise { level })) => match output_mode { Some(Ok(AdminTaskResponse::DomainRaise { level })) => match output_mode {
ConsoleOutputMode::JSON => { ConsoleOutputMode::JSON => {
eprintln!("{{\"success\":\"{}\"}}", level) eprintln!("{{\"success\":\"{}\"}}", level)
@ -837,6 +888,20 @@ async fn kanidm_main() -> ExitCode {
) )
.await; .await;
} }
KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::UpgradeCheck { commonopts },
} => {
info!("Running domain upgrade check ...");
let output_mode: ConsoleOutputMode = commonopts.output_mode.to_owned().into();
submit_admin_req(
config.adminbindpath.as_str(),
AdminTaskRequest::DomainUpgradeCheck,
output_mode,
)
.await;
}
KanidmdOpt::DomainSettings { KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::Raise { commonopts }, commands: DomainSettingsCmds::Raise { commonopts },
} => { } => {
@ -849,10 +914,11 @@ async fn kanidm_main() -> ExitCode {
) )
.await; .await;
} }
KanidmdOpt::DomainSettings { KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::Remigrate { commonopts, level }, commands: DomainSettingsCmds::Remigrate { commonopts, level },
} => { } => {
info!("Running domain remigrate ..."); info!("⚠️ Running domain remigrate ...");
let output_mode: ConsoleOutputMode = commonopts.output_mode.to_owned().into(); let output_mode: ConsoleOutputMode = commonopts.output_mode.to_owned().into();
submit_admin_req( submit_admin_req(
config.adminbindpath.as_str(), config.adminbindpath.as_str(),

View file

@ -28,27 +28,37 @@ struct RestoreOpt {
#[derive(Debug, Subcommand)] #[derive(Debug, Subcommand)]
enum DomainSettingsCmds { enum DomainSettingsCmds {
#[clap(name = "show")]
/// Show the current domain /// Show the current domain
#[clap(name = "show")]
Show { Show {
#[clap(flatten)] #[clap(flatten)]
commonopts: CommonOpt, commonopts: CommonOpt,
}, },
#[clap(name = "rename")]
/// Change the IDM domain name based on the values in the configuration /// Change the IDM domain name based on the values in the configuration
#[clap(name = "rename")]
Change { Change {
#[clap(flatten)] #[clap(flatten)]
commonopts: CommonOpt, commonopts: CommonOpt,
}, },
/// Perform a pre-upgrade-check of this domain's content. This will report possible
/// incompatibilities that can block a successful upgrade to the next version of
/// Kanidm. This is a safe, read-only operation.
#[clap(name = "upgrade-check")]
UpgradeCheck {
#[clap(flatten)]
commonopts: CommonOpt,
},
/// ⚠️ Do not use this command unless directed by a project member. ⚠️
/// - Raise the functional level of this domain to the maximum available.
#[clap(name = "raise")] #[clap(name = "raise")]
/// Raise the functional level of this domain to the maximum available.
Raise { Raise {
#[clap(flatten)] #[clap(flatten)]
commonopts: CommonOpt, commonopts: CommonOpt,
}, },
#[clap(name = "remigrate")] /// ⚠️ Do not use this command unless directed by a project member. ⚠️
/// Rerun migrations of this domain's database, optionally nominating the level /// - Rerun migrations of this domain's database, optionally nominating the level
/// to start from. /// to start from.
#[clap(name = "remigrate")]
Remigrate { Remigrate {
#[clap(flatten)] #[clap(flatten)]
commonopts: CommonOpt, commonopts: CommonOpt,
@ -195,6 +205,7 @@ impl KanidmdParser {
KanidmdOpt::DomainSettings { ref commands } => match commands { KanidmdOpt::DomainSettings { ref commands } => match commands {
DomainSettingsCmds::Show { ref commonopts } => commonopts.config_path.clone(), DomainSettingsCmds::Show { ref commonopts } => commonopts.config_path.clone(),
DomainSettingsCmds::Change { ref commonopts } => commonopts.config_path.clone(), DomainSettingsCmds::Change { ref commonopts } => commonopts.config_path.clone(),
DomainSettingsCmds::UpgradeCheck { ref commonopts } => commonopts.config_path.clone(),
DomainSettingsCmds::Raise { ref commonopts } => commonopts.config_path.clone(), DomainSettingsCmds::Raise { ref commonopts } => commonopts.config_path.clone(),
DomainSettingsCmds::Remigrate { ref commonopts, .. } => { DomainSettingsCmds::Remigrate { ref commonopts, .. } => {
commonopts.config_path.clone() commonopts.config_path.clone()

View file

@ -45,13 +45,30 @@ pub const SYSTEM_INDEX_VERSION: i64 = 31;
*/ */
pub type DomainVersion = u32; pub type DomainVersion = u32;
/// Domain level 0 - this indicates that this instance
/// is a new install and has never had a domain level
/// previously.
pub const DOMAIN_LEVEL_0: DomainVersion = 0; pub const DOMAIN_LEVEL_0: DomainVersion = 0;
/// Deprecated as of 1.2.0
pub const DOMAIN_LEVEL_1: DomainVersion = 1; pub const DOMAIN_LEVEL_1: DomainVersion = 1;
/// Deprecated as of 1.2.0
pub const DOMAIN_LEVEL_2: DomainVersion = 2; pub const DOMAIN_LEVEL_2: DomainVersion = 2;
/// Deprecated as of 1.2.0
pub const DOMAIN_LEVEL_3: DomainVersion = 3; pub const DOMAIN_LEVEL_3: DomainVersion = 3;
/// Deprecated as of 1.2.0
pub const DOMAIN_LEVEL_4: DomainVersion = 4; pub const DOMAIN_LEVEL_4: DomainVersion = 4;
/// Deprecated as of 1.3.0
pub const DOMAIN_LEVEL_5: DomainVersion = 5; pub const DOMAIN_LEVEL_5: DomainVersion = 5;
/// Domain Level introduced with 1.2.0.
/// Deprecated as of 1.4.0
pub const DOMAIN_LEVEL_6: DomainVersion = 6; pub const DOMAIN_LEVEL_6: DomainVersion = 6;
/// Domain Level introduced with 1.3.0.
/// Deprecated as of 1.5.0
pub const DOMAIN_LEVEL_7: DomainVersion = 7;
// The minimum level that we can re-migrate from // The minimum level that we can re-migrate from
pub const DOMAIN_MIN_REMIGRATION_LEVEL: DomainVersion = DOMAIN_LEVEL_2; pub const DOMAIN_MIN_REMIGRATION_LEVEL: DomainVersion = DOMAIN_LEVEL_2;
// The minimum supported domain functional level // The minimum supported domain functional level
@ -62,6 +79,8 @@ pub const DOMAIN_PREVIOUS_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_5;
pub const DOMAIN_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_6; pub const DOMAIN_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_6;
// The maximum supported domain functional level // The maximum supported domain functional level
pub const DOMAIN_MAX_LEVEL: DomainVersion = DOMAIN_LEVEL_6; pub const DOMAIN_MAX_LEVEL: DomainVersion = DOMAIN_LEVEL_6;
// The next domain functional level
pub const DOMAIN_NEXT_LEVEL: DomainVersion = DOMAIN_LEVEL_7;
// On test builds define to 60 seconds // On test builds define to 60 seconds
#[cfg(test)] #[cfg(test)]

View file

@ -1435,6 +1435,8 @@ impl Entry<EntrySealed, EntryCommitted> {
#[inline] #[inline]
/// Given this entry, determine it's relative distinguished named for LDAP compatibility. /// Given this entry, determine it's relative distinguished named for LDAP compatibility.
///
/// See also - `get_display_id`
pub(crate) fn get_uuid2rdn(&self) -> String { pub(crate) fn get_uuid2rdn(&self) -> String {
self.attrs self.attrs
.get("spn") .get("spn")

View file

@ -871,7 +871,7 @@ mod tests {
), ),
(Attribute::Description, Value::new_utf8s("testperson1")), (Attribute::Description, Value::new_utf8s("testperson1")),
(Attribute::DisplayName, Value::new_utf8s("testperson1")), (Attribute::DisplayName, Value::new_utf8s("testperson1")),
(Attribute::GidNumber, Value::new_uint32(12345678)), (Attribute::GidNumber, Value::new_uint32(12345)),
(Attribute::LoginShell, Value::new_iutf8("/bin/zsh")), (Attribute::LoginShell, Value::new_iutf8("/bin/zsh")),
( (
Attribute::SshPublicKey, Attribute::SshPublicKey,
@ -918,7 +918,7 @@ mod tests {
(Attribute::Class, EntryClass::PosixAccount.to_string()), (Attribute::Class, EntryClass::PosixAccount.to_string()),
(Attribute::DisplayName, "testperson1"), (Attribute::DisplayName, "testperson1"),
(Attribute::Name, "testperson1"), (Attribute::Name, "testperson1"),
(Attribute::GidNumber, "12345678"), (Attribute::GidNumber, "12345"),
(Attribute::LoginShell, "/bin/zsh"), (Attribute::LoginShell, "/bin/zsh"),
(Attribute::SshPublicKey, ssh_ed25519), (Attribute::SshPublicKey, ssh_ed25519),
(Attribute::Uuid, "cc8e95b4-c24f-4d68-ba54-8bed76f63930") (Attribute::Uuid, "cc8e95b4-c24f-4d68-ba54-8bed76f63930")
@ -953,7 +953,7 @@ mod tests {
(Attribute::ObjectClass, EntryClass::PosixAccount.as_ref()), (Attribute::ObjectClass, EntryClass::PosixAccount.as_ref()),
(Attribute::DisplayName, "testperson1"), (Attribute::DisplayName, "testperson1"),
(Attribute::Name, "testperson1"), (Attribute::Name, "testperson1"),
(Attribute::GidNumber, "12345678"), (Attribute::GidNumber, "12345"),
(Attribute::LoginShell, "/bin/zsh"), (Attribute::LoginShell, "/bin/zsh"),
(Attribute::SshPublicKey, ssh_ed25519), (Attribute::SshPublicKey, ssh_ed25519),
(Attribute::EntryUuid, "cc8e95b4-c24f-4d68-ba54-8bed76f63930"), (Attribute::EntryUuid, "cc8e95b4-c24f-4d68-ba54-8bed76f63930"),
@ -961,7 +961,7 @@ mod tests {
Attribute::EntryDn, Attribute::EntryDn,
"spn=testperson1@example.com,dc=example,dc=com" "spn=testperson1@example.com,dc=example,dc=com"
), ),
(Attribute::UidNumber, "12345678"), (Attribute::UidNumber, "12345"),
(Attribute::Cn, "testperson1"), (Attribute::Cn, "testperson1"),
(Attribute::LdapKeys, ssh_ed25519) (Attribute::LdapKeys, ssh_ed25519)
); );
@ -999,7 +999,7 @@ mod tests {
Attribute::EntryDn, Attribute::EntryDn,
"spn=testperson1@example.com,dc=example,dc=com" "spn=testperson1@example.com,dc=example,dc=com"
), ),
(Attribute::UidNumber, "12345678"), (Attribute::UidNumber, "12345"),
(Attribute::LdapKeys, ssh_ed25519) (Attribute::LdapKeys, ssh_ed25519)
); );
} }
@ -1070,7 +1070,7 @@ mod tests {
), ),
(Attribute::Description, Value::new_utf8s("testperson1")), (Attribute::Description, Value::new_utf8s("testperson1")),
(Attribute::DisplayName, Value::new_utf8s("testperson1")), (Attribute::DisplayName, Value::new_utf8s("testperson1")),
(Attribute::GidNumber, Value::new_uint32(12345678)), (Attribute::GidNumber, Value::new_uint32(12345)),
(Attribute::LoginShell, Value::new_iutf8("/bin/zsh")) (Attribute::LoginShell, Value::new_iutf8("/bin/zsh"))
); );
@ -1366,7 +1366,7 @@ mod tests {
(Attribute::Class, EntryClass::PosixAccount.to_value()), (Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson1")), (Attribute::Name, Value::new_iname("testperson1")),
(Attribute::Uuid, Value::Uuid(acct_uuid)), (Attribute::Uuid, Value::Uuid(acct_uuid)),
(Attribute::GidNumber, Value::Uint32(123456)), (Attribute::GidNumber, Value::Uint32(12345)),
(Attribute::Description, Value::new_utf8s("testperson1")), (Attribute::Description, Value::new_utf8s("testperson1")),
(Attribute::DisplayName, Value::new_utf8s("testperson1")) (Attribute::DisplayName, Value::new_utf8s("testperson1"))
); );
@ -1448,8 +1448,8 @@ mod tests {
(Attribute::Uid, "testperson1"), (Attribute::Uid, "testperson1"),
(Attribute::Cn, "testperson1"), (Attribute::Cn, "testperson1"),
(Attribute::Gecos, "testperson1"), (Attribute::Gecos, "testperson1"),
(Attribute::UidNumber, "123456"), (Attribute::UidNumber, "12345"),
(Attribute::GidNumber, "123456"), (Attribute::GidNumber, "12345"),
( (
Attribute::EntryUuid.as_ref(), Attribute::EntryUuid.as_ref(),
"cc8e95b4-c24f-4d68-ba54-8bed76f63930" "cc8e95b4-c24f-4d68-ba54-8bed76f63930"

View file

@ -2,6 +2,7 @@
//! which is used to process authentication, store identities and enforce access controls. //! which is used to process authentication, store identities and enforce access controls.
#![deny(warnings)] #![deny(warnings)]
#![allow(deprecated)]
#![recursion_limit = "512"] #![recursion_limit = "512"]
#![warn(unused_extern_crates)] #![warn(unused_extern_crates)]
// Enable some groups of clippy lints. // Enable some groups of clippy lints.

View file

@ -12,14 +12,60 @@ use crate::utils::uuid_to_gid_u32;
/// Systemd dynamic units allocate between 61184-65519, most distros allocate /// Systemd dynamic units allocate between 61184-65519, most distros allocate
/// system uids from 0 - 1000, and many others give user ids between 1000 to /// system uids from 0 - 1000, and many others give user ids between 1000 to
/// 2000. This whole numberspace is cursed, lets assume it's not ours. :( /// 2000. This whole numberspace is cursed, lets assume it's not ours. :(
const GID_SYSTEM_NUMBER_MIN: u32 = 65536; ///
/// Per https://systemd.io/UIDS-GIDS/, systemd claims a huge chunk of this
/// space to itself. As a result we can't allocate between 65536 and u32 max
/// because systemd takes most of the usable range for its own containers,
/// and half the range is probably going to trigger linux kernel issues.
///
/// Seriously, linux's uid/gid model is so fundamentally terrible... Windows
/// NT got this right with SIDs.
///
/// Because of this, we have to ensure that anything we allocate is in the
/// range 1879048192 (0x70000000) to 2147483647 (0x7fffffff)
const GID_SYSTEM_NUMBER_PREFIX: u32 = 0x7000_0000;
const GID_SYSTEM_NUMBER_MASK: u32 = 0x0fff_ffff;
/// Systemd claims so many ranges to itself, we have to check we are in certain bounds.
/// This is the normal system range, we MUST NOT allow it to be allocated. /// This is the normal system range, we MUST NOT allow it to be allocated.
const GID_SAFETY_NUMBER_MIN: u32 = 1000; pub const GID_REGULAR_USER_MIN: u32 = 1000;
pub const GID_REGULAR_USER_MAX: u32 = 60000;
/// Systemd homed claims 60001 through 60577
pub const GID_UNUSED_A_MIN: u32 = 60578;
pub const GID_UNUSED_A_MAX: u32 = 61183;
/// Systemd dyn service users 61184 through 65519
pub const GID_UNUSED_B_MIN: u32 = 65520;
pub const GID_UNUSED_B_MAX: u32 = 65533;
/// nobody is 65534
/// 16bit uid -1 65535
pub const GID_UNUSED_C_MIN: u32 = 65536;
const GID_UNUSED_C_MAX: u32 = 524287;
/// systemd claims 524288 through 1879048191 for nspawn
const GID_NSPAWN_MIN: u32 = 524288;
const GID_NSPAWN_MAX: u32 = 1879048191;
const GID_UNUSED_D_MIN: u32 = 0x7000_0000;
pub const GID_UNUSED_D_MAX: u32 = 0x7fff_ffff;
/// Anything above 2147483648 can confuse the kernel (so basically half the address space
/// can't be accessed).
// const GID_UNSAFE_MAX: u32 = 2147483648;
pub struct GidNumber {} pub struct GidNumber {}
fn apply_gidnumber<T: Clone>(e: &mut Entry<EntryInvalid, T>) -> Result<(), OperationError> { fn apply_gidnumber<T: Clone>(
e: &mut Entry<EntryInvalid, T>,
domain_version: DomainVersion,
) -> Result<(), OperationError> {
if (e.attribute_equality(Attribute::Class, &EntryClass::PosixGroup.into()) if (e.attribute_equality(Attribute::Class, &EntryClass::PosixGroup.into())
|| e.attribute_equality(Attribute::Class, &EntryClass::PosixAccount.into())) || e.attribute_equality(Attribute::Class, &EntryClass::PosixAccount.into()))
&& !e.attribute_pres(Attribute::GidNumber) && !e.attribute_pres(Attribute::GidNumber)
@ -33,31 +79,59 @@ fn apply_gidnumber<T: Clone>(e: &mut Entry<EntryInvalid, T>) -> Result<(), Opera
})?; })?;
let gid = uuid_to_gid_u32(u_ref); let gid = uuid_to_gid_u32(u_ref);
// assert the value is greater than the system range.
if gid < GID_SYSTEM_NUMBER_MIN { // Apply the mask to only take the last 24 bits, and then move them
admin_error!( // to the correct range.
"Requested GID {} is lower than system minimum {}", let gid = gid & GID_SYSTEM_NUMBER_MASK;
gid, let gid = gid | GID_SYSTEM_NUMBER_PREFIX;
GID_SYSTEM_NUMBER_MIN
);
return Err(OperationError::GidOverlapsSystemMin(GID_SYSTEM_NUMBER_MIN));
}
let gid_v = Value::new_uint32(gid); let gid_v = Value::new_uint32(gid);
admin_info!("Generated {} for {:?}", gid, u_ref); admin_info!("Generated {} for {:?}", gid, u_ref);
e.set_ava(Attribute::GidNumber, once(gid_v)); e.set_ava(Attribute::GidNumber, once(gid_v));
Ok(()) Ok(())
} else if let Some(gid) = e.get_ava_single_uint32(Attribute::GidNumber) { } else if let Some(gid) = e.get_ava_single_uint32(Attribute::GidNumber) {
// If they provided us with a gid number, ensure it's in a safe range. if domain_version <= DOMAIN_LEVEL_6 {
if gid <= GID_SAFETY_NUMBER_MIN { if gid < GID_REGULAR_USER_MIN {
admin_error!( error!(
"Requested GID {} is lower or equal to a safe value {}", "Requested GID ({}) overlaps a system range. Allowed ranges are {} to {}, {} to {} and {} to {}",
gid, gid,
GID_SAFETY_NUMBER_MIN GID_REGULAR_USER_MIN, GID_REGULAR_USER_MAX,
); GID_UNUSED_C_MIN, GID_UNUSED_C_MAX,
Err(OperationError::GidOverlapsSystemMin(GID_SAFETY_NUMBER_MIN)) GID_UNUSED_D_MIN, GID_UNUSED_D_MAX
);
Err(OperationError::PL0001GidOverlapsSystemRange)
} else {
Ok(())
}
} else { } else {
Ok(()) // If they provided us with a gid number, ensure it's in a safe range.
if (gid >= GID_REGULAR_USER_MIN && gid <= GID_REGULAR_USER_MAX)
|| (gid >= GID_UNUSED_A_MIN && gid <= GID_UNUSED_A_MAX)
|| (gid >= GID_UNUSED_B_MIN && gid <= GID_UNUSED_B_MAX)
|| (gid >= GID_UNUSED_C_MIN && gid <= GID_UNUSED_C_MAX)
// We won't ever generate an id in the nspawn range, but we do secretly allow
// it to be set for compatibility with services like freeipa or openldap. TBH
// most people don't even use systemd nspawn anyway ...
//
// I made this design choice to avoid a tunable that may confuse people as to
// its purpose. This way things "just work" for imports and existing systems
// but we do the right thing in the future.
|| (gid >= GID_NSPAWN_MIN && gid <= GID_NSPAWN_MAX)
|| (gid >= GID_UNUSED_D_MIN && gid <= GID_UNUSED_D_MAX)
{
Ok(())
} else {
// Note that here we don't advertise that we allow the nspawn range to be set, even
// though we do allow it.
error!(
"Requested GID ({}) overlaps a system range. Allowed ranges are {} to {}, {} to {} and {} to {}",
gid,
GID_REGULAR_USER_MIN, GID_REGULAR_USER_MAX,
GID_UNUSED_C_MIN, GID_UNUSED_C_MAX,
GID_UNUSED_D_MIN, GID_UNUSED_D_MAX
);
Err(OperationError::PL0001GidOverlapsSystemRange)
}
} }
} else { } else {
Ok(()) Ok(())
@ -71,276 +145,364 @@ impl Plugin for GidNumber {
#[instrument(level = "debug", name = "gidnumber_pre_create_transform", skip_all)] #[instrument(level = "debug", name = "gidnumber_pre_create_transform", skip_all)]
fn pre_create_transform( fn pre_create_transform(
_qs: &mut QueryServerWriteTransaction, qs: &mut QueryServerWriteTransaction,
cand: &mut Vec<Entry<EntryInvalid, EntryNew>>, cand: &mut Vec<Entry<EntryInvalid, EntryNew>>,
_ce: &CreateEvent, _ce: &CreateEvent,
) -> Result<(), OperationError> { ) -> Result<(), OperationError> {
cand.iter_mut().try_for_each(apply_gidnumber) let dv = qs.get_domain_version();
cand.iter_mut()
.try_for_each(|cand| apply_gidnumber(cand, dv))
} }
#[instrument(level = "debug", name = "gidnumber_pre_modify", skip_all)] #[instrument(level = "debug", name = "gidnumber_pre_modify", skip_all)]
fn pre_modify( fn pre_modify(
_qs: &mut QueryServerWriteTransaction, qs: &mut QueryServerWriteTransaction,
_pre_cand: &[Arc<EntrySealedCommitted>], _pre_cand: &[Arc<EntrySealedCommitted>],
cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>, cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>,
_me: &ModifyEvent, _me: &ModifyEvent,
) -> Result<(), OperationError> { ) -> Result<(), OperationError> {
cand.iter_mut().try_for_each(apply_gidnumber) let dv = qs.get_domain_version();
cand.iter_mut()
.try_for_each(|cand| apply_gidnumber(cand, dv))
} }
#[instrument(level = "debug", name = "gidnumber_pre_batch_modify", skip_all)] #[instrument(level = "debug", name = "gidnumber_pre_batch_modify", skip_all)]
fn pre_batch_modify( fn pre_batch_modify(
_qs: &mut QueryServerWriteTransaction, qs: &mut QueryServerWriteTransaction,
_pre_cand: &[Arc<EntrySealedCommitted>], _pre_cand: &[Arc<EntrySealedCommitted>],
cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>, cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>,
_me: &BatchModifyEvent, _me: &BatchModifyEvent,
) -> Result<(), OperationError> { ) -> Result<(), OperationError> {
cand.iter_mut().try_for_each(apply_gidnumber) let dv = qs.get_domain_version();
cand.iter_mut()
.try_for_each(|cand| apply_gidnumber(cand, dv))
} }
} }
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::{
GID_REGULAR_USER_MAX, GID_REGULAR_USER_MIN, GID_UNUSED_A_MAX, GID_UNUSED_A_MIN,
GID_UNUSED_B_MAX, GID_UNUSED_B_MIN, GID_UNUSED_C_MIN, GID_UNUSED_D_MAX,
};
use crate::prelude::*; use crate::prelude::*;
fn check_gid(qs_write: &mut QueryServerWriteTransaction, uuid: &str, gid: u32) { use kanidm_proto::internal::DomainUpgradeCheckStatus as ProtoDomainUpgradeCheckStatus;
let u = Uuid::parse_str(uuid).unwrap();
let e = qs_write.internal_search_uuid(u).unwrap(); #[qs_test(domain_level=DOMAIN_LEVEL_7)]
let gidnumber = e.get_ava_single(Attribute::GidNumber).unwrap(); async fn test_gidnumber_generate(server: &QueryServer) {
let ex_gid = Value::new_uint32(gid); let mut server_txn = server.write(duration_from_epoch_now()).await;
assert!(ex_gid == gidnumber);
// Test that the gid number is generated on create
{
let user_a_uuid = uuid!("83a0927f-3de1-45ec-bea0-2f7b997ef244");
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson_1")),
(Attribute::Uuid, Value::Uuid(user_a_uuid)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
let user_a = server_txn
.internal_search_uuid(user_a_uuid)
.expect("Unable to access user");
let user_a_uid = user_a
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_a_uid, 0x797ef244);
}
// test that gid is not altered if provided on create.
let user_b_uuid = uuid!("d90fb0cb-6785-4f36-94cb-e364d9c13255");
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson_2")),
(Attribute::Uuid, Value::Uuid(user_b_uuid)),
(Attribute::GidNumber, Value::Uint32(10001)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
let user_b = server_txn
.internal_search_uuid(user_b_uuid)
.expect("Unable to access user");
let user_b_uid = user_b
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_b_uid, 10001);
}
// Test that if the value is deleted, it is correctly regenerated.
{
let modlist = modlist!([m_purge(Attribute::GidNumber)]);
server_txn
.internal_modify_uuid(user_b_uuid, &modlist)
.expect("Unable to modify user");
let user_b = server_txn
.internal_search_uuid(user_b_uuid)
.expect("Unable to access user");
let user_b_uid = user_b
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_b_uid, 0x79c13255);
}
let user_c_uuid = uuid!("0d5086b0-74f9-4518-92b4-89df0c55971b");
// Test that an entry when modified to have posix attributes will have
// its gidnumber generated.
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson_3")),
(Attribute::Uuid, Value::Uuid(user_c_uuid)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
let user_c = server_txn
.internal_search_uuid(user_c_uuid)
.expect("Unable to access user");
assert_eq!(user_c.get_ava_single_uint32(Attribute::GidNumber), None);
let modlist = modlist!([m_pres(
Attribute::Class,
&EntryClass::PosixAccount.to_value()
)]);
server_txn
.internal_modify_uuid(user_c_uuid, &modlist)
.expect("Unable to modify user");
let user_c = server_txn
.internal_search_uuid(user_c_uuid)
.expect("Unable to access user");
let user_c_uid = user_c
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_c_uid, 0x7c55971b);
}
let user_d_uuid = uuid!("36dc9010-d80c-404b-b5ba-8f66657c2f1d");
// Test that an entry when modified to have posix attributes will have
// its gidnumber generated.
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson_4")),
(Attribute::Uuid, Value::Uuid(user_d_uuid)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
let user_d = server_txn
.internal_search_uuid(user_d_uuid)
.expect("Unable to access user");
assert_eq!(user_d.get_ava_single_uint32(Attribute::GidNumber), None);
let modlist = modlist!([m_pres(
Attribute::Class,
&EntryClass::PosixAccount.to_value()
)]);
server_txn
.internal_modify_uuid(user_d_uuid, &modlist)
.expect("Unable to modify user");
let user_d = server_txn
.internal_search_uuid(user_d_uuid)
.expect("Unable to access user");
let user_d_uid = user_d
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_d_uid, 0x757c2f1d);
}
let user_e_uuid = uuid!("a6dc0d68-9c7a-4dad-b1e2-f6274b691373");
// Test that an entry when modified to have posix attributes, if a gidnumber
// is provided then it is respected.
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson_5")),
(Attribute::Uuid, Value::Uuid(user_e_uuid)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
let user_e = server_txn
.internal_search_uuid(user_e_uuid)
.expect("Unable to access user");
assert_eq!(user_e.get_ava_single_uint32(Attribute::GidNumber), None);
let modlist = modlist!([
m_pres(Attribute::Class, &EntryClass::PosixAccount.to_value()),
m_pres(Attribute::GidNumber, &Value::Uint32(10002))
]);
server_txn
.internal_modify_uuid(user_e_uuid, &modlist)
.expect("Unable to modify user");
let user_e = server_txn
.internal_search_uuid(user_e_uuid)
.expect("Unable to access user");
let user_e_uid = user_e
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_e_uid, 10002);
}
// Test rejection of important gid values.
let user_f_uuid = uuid!("33afc396-2434-47e5-b143-05176148b50e");
// Test that an entry when modified to have posix attributes, if a gidnumber
// is provided then it is respected.
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson_6")),
(Attribute::Uuid, Value::Uuid(user_f_uuid)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
for id in [
0,
500,
GID_REGULAR_USER_MIN - 1,
GID_REGULAR_USER_MAX + 1,
GID_UNUSED_A_MIN - 1,
GID_UNUSED_A_MAX + 1,
GID_UNUSED_B_MIN - 1,
GID_UNUSED_B_MAX + 1,
GID_UNUSED_C_MIN - 1,
GID_UNUSED_D_MAX + 1,
u32::MAX,
] {
let modlist = modlist!([
m_pres(Attribute::Class, &EntryClass::PosixAccount.to_value()),
m_pres(Attribute::GidNumber, &Value::Uint32(id))
]);
let op_result = server_txn.internal_modify_uuid(user_f_uuid, &modlist);
trace!(?id);
assert_eq!(op_result, Err(OperationError::PL0001GidOverlapsSystemRange));
}
}
assert!(server_txn.commit().is_ok());
}
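// The expected values asserted above (for example 0x797ef244 for the UUID ending in
// ...2f7b997ef244) are consistent with taking the low 32 bits of the UUID and masking
// them into the reserved 0x7000_0000..=0x7fff_ffff window. A minimal sketch of that
// derivation, using a hypothetical helper rather than the plugin's exact code:
fn uuid_to_gidnumber_sketch(u: Uuid) -> u32 {
// Interpret the final four bytes of the UUID as a big-endian u32 ...
let low = u32::from_be_bytes(u.as_bytes()[12..16].try_into().expect("uuid is 16 bytes"));
// ... then force the result into the 0x7000_0000 - 0x7fff_ffff range with a mask.
(low & 0x0fff_ffff) | 0x7000_0000
}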
#[qs_test(domain_level=DOMAIN_LEVEL_6)]
async fn test_gidnumber_domain_level_6(server: &QueryServer) {
let mut server_txn = server.write(duration_from_epoch_now()).await;
// This will be INVALID in DL 7 but it's allowed for DL6
let user_a_uuid = uuid!("d90fb0cb-6785-4f36-94cb-e364d9c13255");
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson_2")),
(Attribute::Uuid, Value::Uuid(user_a_uuid)),
// NOTE HERE: We do GID_UNUSED_A_MIN minus 1 which isn't accepted
// on DL7
(Attribute::GidNumber, Value::Uint32(GID_UNUSED_A_MIN - 1)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
let user_a = server_txn
.internal_search_uuid(user_a_uuid)
.expect("Unable to access user");
let user_a_uid = user_a
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_a_uid, GID_UNUSED_A_MIN - 1);
}
assert!(server_txn.commit().is_ok());
// Now, do the DL6 upgrade check - will FAIL because the above user has an invalid ID.
let mut server_txn = server.read().await;
let check_item = server_txn
.domain_upgrade_check_6_to_7_gidnumber()
.expect("Failed to perform migration check.");
assert_eq!(
check_item.status,
ProtoDomainUpgradeCheckStatus::Fail6To7Gidnumber
);
drop(server_txn);
let mut server_txn = server.write(duration_from_epoch_now()).await;
// Test rejection of important gid values.
let user_b_uuid = uuid!("33afc396-2434-47e5-b143-05176148b50e");
// Test that an entry when modified to have posix attributes, if a gidnumber
// is provided then it is respected.
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson_6")),
(Attribute::Uuid, Value::Uuid(user_b_uuid)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
for id in [0, 500, GID_REGULAR_USER_MIN - 1] {
let modlist = modlist!([
m_pres(Attribute::Class, &EntryClass::PosixAccount.to_value()),
m_pres(Attribute::GidNumber, &Value::Uint32(id))
]);
let op_result = server_txn.internal_modify_uuid(user_b_uuid, &modlist);
trace!(?id);
assert_eq!(op_result, Err(OperationError::PL0001GidOverlapsSystemRange));
}
}
assert!(server_txn.commit().is_ok());
}
// The pre-change versions of these tests (removed by this commit) follow for reference.
#[test]
fn test_gidnumber_create_generate() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(
Attribute::Uuid,
Value::Uuid(uuid!("83a0927f-3de1-45ec-bea0-2f7b997ef244"))
),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let create = vec![e];
let preload = Vec::new();
run_create_test!(
Ok(()),
preload,
create,
None,
|qs_write: &mut QueryServerWriteTransaction| check_gid(
qs_write,
"83a0927f-3de1-45ec-bea0-2f7b997ef244",
0x997ef244
)
);
}
// test that gid is not altered if provided on create.
#[test]
fn test_gidnumber_create_noaction() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(Attribute::GidNumber, Value::Uint32(10001)),
(
Attribute::Uuid,
Value::Uuid(uuid!("83a0927f-3de1-45ec-bea0-2f7b997ef244"))
),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let create = vec![e];
let preload = Vec::new();
run_create_test!(
Ok(()),
preload,
create,
None,
|qs_write: &mut QueryServerWriteTransaction| check_gid(
qs_write,
"83a0927f-3de1-45ec-bea0-2f7b997ef244",
10001
)
);
}
// Test generated if not on mod (ie adding posixaccount to something)
#[test]
fn test_gidnumber_modify_generate() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(
Attribute::Uuid,
Value::Uuid(uuid!("83a0927f-3de1-45ec-bea0-2f7b997ef244"))
),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let preload = vec![e];
run_modify_test!(
Ok(()),
preload,
filter!(f_eq(Attribute::Name, PartialValue::new_iname("testperson"))),
modlist!([m_pres(Attribute::Class, &EntryClass::PosixGroup.into())]),
None,
|_| {},
|qs_write: &mut QueryServerWriteTransaction| check_gid(
qs_write,
"83a0927f-3de1-45ec-bea0-2f7b997ef244",
0x997ef244
)
);
}
// test generated if DELETED on mod
#[test]
fn test_gidnumber_modify_regenerate() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(
Attribute::Uuid,
Value::Uuid(uuid::uuid!("83a0927f-3de1-45ec-bea0-2f7b997ef244"))
),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let preload = vec![e];
run_modify_test!(
Ok(()),
preload,
filter!(f_eq(Attribute::Name, PartialValue::new_iname("testperson"))),
modlist!([m_purge(Attribute::GidNumber)]),
None,
|_| {},
|qs_write: &mut QueryServerWriteTransaction| check_gid(
qs_write,
"83a0927f-3de1-45ec-bea0-2f7b997ef244",
0x997ef244
)
);
}
// Test NOT regenerated if given on mod
#[test]
fn test_gidnumber_modify_noregen() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(
Attribute::Uuid,
Value::Uuid(uuid::uuid!("83a0927f-3de1-45ec-bea0-2f7b997ef244"))
),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let preload = vec![e];
run_modify_test!(
Ok(()),
preload,
filter!(f_eq(Attribute::Name, PartialValue::new_iname("testperson"))),
modlist!([
m_purge(Attribute::GidNumber),
m_pres(Attribute::GidNumber, &Value::new_uint32(2000))
]),
None,
|_| {},
|qs_write: &mut QueryServerWriteTransaction| check_gid(
qs_write,
"83a0927f-3de1-45ec-bea0-2f7b997ef244",
2000
)
);
}
#[test]
fn test_gidnumber_create_system_reject() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(
Attribute::Uuid,
Value::Uuid(uuid::uuid!("83a0927f-3de1-45ec-bea0-000000000244"))
),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let create = vec![e];
let preload = Vec::new();
run_create_test!(
Err(OperationError::GidOverlapsSystemMin(65536)),
preload,
create,
None,
|_| {}
);
}
#[test]
fn test_gidnumber_create_secure_reject() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(Attribute::GidNumber, Value::Uint32(500)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let create = vec![e];
let preload = Vec::new();
run_create_test!(
Err(OperationError::GidOverlapsSystemMin(1000)),
preload,
create,
None,
|_| {}
);
}
#[test]
fn test_gidnumber_create_secure_root_reject() {
let e = entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson")),
(Attribute::GidNumber, Value::Uint32(0)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
);
let create = vec![e];
let preload = Vec::new();
run_create_test!(
Err(OperationError::GidOverlapsSystemMin(1000)),
preload,
create,
None,
|_| {}
);
}
}


@ -17,7 +17,7 @@ mod default_values;
mod domain;
pub(crate) mod dyngroup;
mod eckeygen;
pub(crate) mod gidnumber;
mod jwskeygen;
mod memberof;
mod namehistory;


@ -3,6 +3,12 @@ use std::time::Duration;
use crate::prelude::*;
use kanidm_proto::internal::{
DomainUpgradeCheckItem as ProtoDomainUpgradeCheckItem,
DomainUpgradeCheckReport as ProtoDomainUpgradeCheckReport,
DomainUpgradeCheckStatus as ProtoDomainUpgradeCheckStatus,
};
use super::ServerPhase;
impl QueryServer {
@ -910,6 +916,101 @@ impl<'a> QueryServerWriteTransaction<'a> {
Ok(())
}
/// Migration domain level 6 to 7
#[instrument(level = "info", skip_all)]
pub(crate) fn migrate_domain_6_to_7(&mut self) -> Result<(), OperationError> {
if !cfg!(test) {
error!("Unable to raise domain level from 6 to 7.");
return Err(OperationError::MG0004DomainLevelInDevelopment);
}
// ============== Apply constraints ===============
// Due to changes in gidnumber allocation, in the *extremely* unlikely
// case that an entry's ID was generated outside the valid ranges, we require
// its gid number to be re-created before the upgrade can proceed.
let filter = filter!(f_and!([
f_or!([
f_eq(Attribute::Class, EntryClass::PosixAccount.into()),
f_eq(Attribute::Class, EntryClass::PosixGroup.into())
]),
// This logic gets a bit messy but it would be:
// If ! (
// (GID_REGULAR_USER_MIN < value < GID_REGULAR_USER_MAX) ||
// (GID_UNUSED_A_MIN < value < GID_UNUSED_A_MAX) ||
// (GID_UNUSED_B_MIN < value < GID_UNUSED_B_MAX) ||
// (GID_UNUSED_C_MIN < value < GID_UNUSED_D_MAX)
// )
f_andnot(f_or!([
f_and!([
// The gid value must be less than GID_REGULAR_USER_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than USER_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MIN)
))
]),
// If both of these conditions are true we get:
// C_MIN < value < D_MAX, which the outer and-not inverts.
f_and!([
// The gid value must be less than GID_UNUSED_D_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_D_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than C_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_C_MIN)
))
]),
]))
]));
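// Read as a plain predicate (a hypothetical helper, not part of this change), the
// filter above selects posix entries whose gidnumber falls outside every valid window.
// Note that f_andnot(f_lt(MIN)) means "not less than MIN", so each window is inclusive
// of its MIN bound and exclusive of its MAX bound:
//
//     fn gid_is_in_valid_window(gid: u32) -> bool {
//         (GID_REGULAR_USER_MIN..GID_REGULAR_USER_MAX).contains(&gid)
//             || (GID_UNUSED_A_MIN..GID_UNUSED_A_MAX).contains(&gid)
//             || (GID_UNUSED_B_MIN..GID_UNUSED_B_MAX).contains(&gid)
//             || (GID_UNUSED_C_MIN..GID_UNUSED_D_MAX).contains(&gid)
//     }
//
// The migration then only has to act on entries where this predicate is false.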
let results = self.internal_search(filter).map_err(|err| {
error!(?err, "migrate_domain_6_to_7 -> Error");
err
})?;
if !results.is_empty() {
error!("Unable to proceed. Not all entries meet gid/uid constraints.");
for entry in results {
error!(gid_invalid = ?entry.get_display_id());
}
return Err(OperationError::MG0005GidConstraintsNotMet);
}
// =========== Apply changes ==============
Ok(())
}
#[instrument(level = "info", skip_all)] #[instrument(level = "info", skip_all)]
pub fn initialise_schema_core(&mut self) -> Result<(), OperationError> { pub fn initialise_schema_core(&mut self) -> Result<(), OperationError> {
admin_debug!("initialise_schema_core -> start ..."); admin_debug!("initialise_schema_core -> start ...");
@ -1186,6 +1287,131 @@ impl<'a> QueryServerWriteTransaction<'a> {
}
}
impl<'a> QueryServerReadTransaction<'a> {
/// Check whether this domain can be upgraded to the next domain level, reporting anything that would block the upgrade.
pub fn domain_upgrade_check(
&mut self,
) -> Result<ProtoDomainUpgradeCheckReport, OperationError> {
let d_info = &self.d_info;
let name = d_info.d_name.clone();
let uuid = d_info.d_uuid;
let current_level = d_info.d_vers;
let upgrade_level = DOMAIN_NEXT_LEVEL;
let mut report_items = Vec::with_capacity(1);
if current_level <= DOMAIN_LEVEL_6 && upgrade_level >= DOMAIN_LEVEL_7 {
let item = self
.domain_upgrade_check_6_to_7_gidnumber()
.map_err(|err| {
error!(
?err,
"Failed to perform domain upgrade check 6 to 7 - gidnumber"
);
err
})?;
report_items.push(item);
}
Ok(ProtoDomainUpgradeCheckReport {
name,
uuid,
current_level,
upgrade_level,
report_items,
})
}
pub(crate) fn domain_upgrade_check_6_to_7_gidnumber(
&mut self,
) -> Result<ProtoDomainUpgradeCheckItem, OperationError> {
let filter = filter!(f_and!([
f_or!([
f_eq(Attribute::Class, EntryClass::PosixAccount.into()),
f_eq(Attribute::Class, EntryClass::PosixGroup.into())
]),
// This logic gets a bit messy but it would be:
// If ! (
// (GID_REGULAR_USER_MIN < value < GID_REGULAR_USER_MAX) ||
// (GID_UNUSED_A_MIN < value < GID_UNUSED_A_MAX) ||
// (GID_UNUSED_B_MIN < value < GID_UNUSED_B_MAX) ||
// (GID_UNUSED_C_MIN < value < GID_UNUSED_D_MAX)
// )
f_andnot(f_or!([
f_and!([
// The gid value must be less than GID_REGULAR_USER_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than USER_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MIN)
))
]),
// If both of these conditions are true we get:
// C_MIN < value < D_MAX, which the outer and-not inverts.
f_and!([
// The gid value must be less than GID_UNUSED_D_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_D_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than C_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_C_MIN)
))
]),
]))
]));
let results = self.internal_search(filter)?;
let affected_entries = results
.into_iter()
.map(|entry| entry.get_display_id())
.collect::<Vec<_>>();
let status = if affected_entries.is_empty() {
ProtoDomainUpgradeCheckStatus::Pass6To7Gidnumber
} else {
ProtoDomainUpgradeCheckStatus::Fail6To7Gidnumber
};
Ok(ProtoDomainUpgradeCheckItem {
status,
from_level: DOMAIN_LEVEL_6,
to_level: DOMAIN_LEVEL_7,
affected_entries,
})
}
}
#[cfg(test)]
mod tests {
use crate::prelude::*;


@ -1042,9 +1042,21 @@ impl<'a> QueryServerReadTransaction<'a> {
&self.trim_cid
}
/// Retrieve the domain info of this server
pub fn domain_info(&mut self) -> Result<ProtoDomainInfo, OperationError> {
let d_info = &self.d_info;
Ok(ProtoDomainInfo {
name: d_info.d_name.clone(),
displayname: d_info.d_display.clone(),
uuid: d_info.d_uuid,
level: d_info.d_vers,
})
}
/// Verify the data content of the server is as expected. This will probably
/// call various functions for validation, including possibly plugin
/// verifications.
pub(crate) fn verify(&mut self) -> Vec<Result<(), ConsistencyError>> {
// If we fail after backend, we need to return NOW because we can't
// assert any other faith in the DB states.
@ -1406,17 +1418,6 @@ impl<'a> QueryServerWriteTransaction<'a> {
&mut self.dyngroup_cache
}
pub fn domain_info(&mut self) -> Result<ProtoDomainInfo, OperationError> {
let d_info = &self.d_info;
Ok(ProtoDomainInfo {
name: d_info.d_name.clone(),
displayname: d_info.d_display.clone(),
uuid: d_info.d_uuid,
level: d_info.d_vers,
})
}
pub fn domain_raise(&mut self, level: u32) -> Result<(), OperationError> {
if level > DOMAIN_MAX_LEVEL {
return Err(OperationError::MG0002RaiseDomainLevelExceedsMaximum);


@ -2,7 +2,7 @@
use std::path::Path;
use std::time::SystemTime;
use kanidm_proto::constants::{ATTR_GIDNUMBER, KSESSIONID};
use kanidm_proto::internal::{
ApiToken, CURegState, Filter, ImageValue, Modify, ModifyList, UatPurpose, UserAuthToken,
};
@ -550,12 +550,25 @@ async fn test_server_rest_posix_lifecycle(rsclient: KanidmClient) {
.idm_group_unix_extend("posix_group", None) .idm_group_unix_extend("posix_group", None)
.await .await
.unwrap(); .unwrap();
// here we check that we can successfully change the gid without breaking anything
rsclient
.idm_group_unix_extend("posix_group", Some(59999))
.await
.unwrap();
// Trigger the posix group to regenerate its id. We only need this to be an Ok(), because the
// server internal tests already check the underlying logic.
rsclient
.idm_group_purge_attr("posix_group", ATTR_GIDNUMBER)
.await
.unwrap();
// Set the UID back to the expected test value.
rsclient
.idm_group_unix_extend("posix_group", Some(123123)) .idm_group_unix_extend("posix_group", Some(123123))
.await; .await
assert!(res.is_ok()); .unwrap();
let res = rsclient.idm_group_unix_extend("posix_group", None).await;
assert!(res.is_ok());
@ -1464,18 +1477,14 @@ async fn test_server_api_token_lifecycle(rsclient: KanidmClient) {
assert!(rsclient
.idm_service_account_unix_extend(
test_service_account_username,
Some(5000),
Some("/bin/vim")
)
.await
.is_ok());
assert!(rsclient
.idm_service_account_unix_extend(test_service_account_username, Some(999), Some("/bin/vim"))
.await
.is_err());


@ -6,8 +6,9 @@ impl DomainOpt {
match self {
DomainOpt::SetDisplayName(copt) => copt.copt.debug,
DomainOpt::SetLdapBasedn { copt, .. }
| DomainOpt::SetLdapAllowUnixPasswordBind { copt, .. }
| DomainOpt::Show(copt)
| DomainOpt::ResetTokenKey(copt) => copt.debug,
}
}


@ -1,5 +1,6 @@
use crate::common::OpType;
use crate::{handle_client_error, GroupOpt, GroupPosix, OutputMode};
use kanidm_proto::constants::ATTR_GIDNUMBER;
mod account_policy;
@ -18,6 +19,7 @@ impl GroupOpt {
GroupOpt::Posix { commands } => match commands {
GroupPosix::Show(gcopt) => gcopt.copt.debug,
GroupPosix::Set(gcopt) => gcopt.copt.debug,
GroupPosix::ResetGidnumber { copt, .. } => copt.debug,
},
GroupOpt::AccountPolicy { commands } => commands.debug(),
}
@ -179,6 +181,15 @@ impl GroupOpt {
),
}
}
GroupPosix::ResetGidnumber { copt, group_id } => {
let client = copt.to_client(OpType::Write).await;
if let Err(e) = client
.idm_group_purge_attr(group_id.as_str(), ATTR_GIDNUMBER)
.await
{
handle_client_error(e, copt.output_mode)
}
}
},
GroupOpt::AccountPolicy { commands } => commands.exec().await,
} // end match


@ -6,7 +6,9 @@ use dialoguer::theme::ColorfulTheme;
use dialoguer::{Confirm, Input, Password, Select};
use kanidm_client::ClientError::Http as ClientErrorHttp;
use kanidm_client::KanidmClient;
use kanidm_proto::constants::{
ATTR_ACCOUNT_EXPIRE, ATTR_ACCOUNT_VALID_FROM, ATTR_GIDNUMBER, ATTR_SSH_PUBLICKEY,
};
use kanidm_proto::internal::OperationError::PasswordQuality;
use kanidm_proto::internal::{
CUCredState, CUExtPortal, CUIntentToken, CURegState, CURegWarning, CUSessionToken, CUStatus,
@ -39,6 +41,7 @@ impl PersonOpt {
PersonPosix::Show(apo) => apo.copt.debug,
PersonPosix::Set(apo) => apo.copt.debug,
PersonPosix::SetPassword(apo) => apo.copt.debug,
PersonPosix::ResetGidnumber { copt, .. } => copt.debug,
},
PersonOpt::Session { commands } => match commands {
AccountUserAuthToken::Status(apo) => apo.copt.debug,
@ -170,6 +173,15 @@ impl PersonOpt {
handle_client_error(e, aopt.copt.output_mode)
}
}
PersonPosix::ResetGidnumber { copt, account_id } => {
let client = copt.to_client(OpType::Write).await;
if let Err(e) = client
.idm_person_account_purge_attr(account_id.as_str(), ATTR_GIDNUMBER)
.await
{
handle_client_error(e, copt.output_mode)
}
}
}, // end PersonOpt::Posix
PersonOpt::Session { commands } => match commands {
AccountUserAuthToken::Status(apo) => {


@ -1,5 +1,7 @@
use crate::common::{try_expire_at_from_string, OpType};
use kanidm_proto::constants::{
ATTR_ACCOUNT_EXPIRE, ATTR_ACCOUNT_VALID_FROM, ATTR_GIDNUMBER, ATTR_SSH_PUBLICKEY,
};
use kanidm_proto::messages::{AccountChangeMessage, ConsoleOutputMode, MessageStatus};
use time::OffsetDateTime;
@ -24,6 +26,7 @@ impl ServiceAccountOpt {
ServiceAccountOpt::Posix { commands } => match commands {
ServiceAccountPosix::Show(apo) => apo.copt.debug,
ServiceAccountPosix::Set(apo) => apo.copt.debug,
ServiceAccountPosix::ResetGidnumber { copt, .. } => copt.debug,
},
ServiceAccountOpt::Session { commands } => match commands {
AccountUserAuthToken::Status(apo) => apo.copt.debug,
@ -208,6 +211,15 @@ impl ServiceAccountOpt {
handle_client_error(e, aopt.copt.output_mode)
}
}
ServiceAccountPosix::ResetGidnumber { copt, account_id } => {
let client = copt.to_client(OpType::Write).await;
if let Err(e) = client
.idm_service_account_purge_attr(account_id.as_str(), ATTR_GIDNUMBER)
.await
{
handle_client_error(e, copt.output_mode)
}
}
}, // end ServiceAccountOpt::Posix
ServiceAccountOpt::Session { commands } => match commands {
AccountUserAuthToken::Status(apo) => {


@ -107,6 +107,13 @@ pub enum GroupPosix {
/// Setup posix group properties, or alter them
#[clap(name = "set")]
Set(GroupPosixOpt),
/// Reset the gidnumber of this group to the generated default
#[clap(name = "reset-gidnumber")]
ResetGidnumber {
group_id: String,
#[clap(flatten)]
copt: CommonOpt,
},
}
#[derive(Debug, Clone, Copy, Eq, PartialEq)]
@ -444,6 +451,13 @@ pub enum PersonPosix {
Set(AccountPosixOpt),
#[clap(name = "set-password")]
SetPassword(AccountNamedOpt),
/// Reset the gidnumber of this person to the generated default
#[clap(name = "reset-gidnumber")]
ResetGidnumber {
account_id: String,
#[clap(flatten)]
copt: CommonOpt,
},
}
#[derive(Debug, Subcommand)]
@ -452,6 +466,13 @@ pub enum ServiceAccountPosix {
Show(AccountNamedOpt),
#[clap(name = "set")]
Set(AccountPosixOpt),
/// Reset the gidnumber of this service account to the generated default
#[clap(name = "reset-gidnumber")]
ResetGidnumber {
account_id: String,
#[clap(flatten)]
copt: CommonOpt,
},
}
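// Each of the new reset-gidnumber variants simply purges ATTR_GIDNUMBER on the target
// entry, and the gidnumber plugin then regenerates a value from the entry's UUID on the
// next write. Assuming the existing `kanidm <person|group|service-account> posix ...`
// command layout, usage would look roughly like:
//
//     kanidm person posix reset-gidnumber <account_id>
//     kanidm group posix reset-gidnumber <group_id>
//     kanidm service-account posix reset-gidnumber <account_id>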
#[derive(Debug, Args)]