181 pam nsswitch name spn (#270)

This allows configuration of which attribute is presented during uid/gid resolution, and adds home directory prefixing and home directory name attribute selection.
This commit is contained in:
Firstyear 2020-06-21 21:57:48 +10:00 committed by GitHub
parent 0b15477ef4
commit 9aa03906f8
15 changed files with 369 additions and 117 deletions

Cargo.lock generated

@ -383,9 +383,9 @@ checksum = "cff77d8686867eceff3105329d4698d96c2391c176d5d03adc90c7389162b5b8"
[[package]]
name = "async-trait"
version = "0.1.35"
version = "0.1.36"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89cb5d814ab2a47fd66d3266e9efccb53ca4c740b7451043b8ffcf9a6208f3f8"
checksum = "a265e3abeffdce30b2e26b7a11b222fe37c6067404001b434101457d0385eb92"
dependencies = [
"proc-macro2",
"quote",
@ -575,9 +575,12 @@ checksum = "08c48aae112d48ed9f069b33538ea9e3e90aa263cfa3d1c24309612b1f7472de"
[[package]]
name = "bytes"
version = "0.5.4"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "130aac562c0dd69c56b3b1cc8ffd2e17be31d0b6c25b61c96b76231aa23e39e1"
checksum = "118cf036fbb97d0816e3c34b2d7a1e8cfc60f68fcf63d550ddbe9bd5f59c213b"
dependencies = [
"loom",
]
[[package]]
name = "bytestring"
@ -648,9 +651,9 @@ dependencies = [
[[package]]
name = "concread"
version = "0.1.15"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39051fb0b539c35c50dfaaa9e703a05d185ea15c29449392ca929370be9ac4fb"
checksum = "86a22319b5f3043efd199c030cef117a4de34a4350ba687010b1c1ac2b649201"
dependencies = [
"crossbeam",
"crossbeam-epoch",
@ -1226,6 +1229,19 @@ dependencies = [
"byteorder",
]
[[package]]
name = "generator"
version = "0.6.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "add72f17bb81521258fcc8a7a3245b1e184e916bfbe34f0ea89558f440df5c68"
dependencies = [
"cc",
"libc",
"log",
"rustc_version",
"winapi 0.3.8",
]
[[package]]
name = "generic-array"
version = "0.8.3"
@ -1703,6 +1719,17 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "loom"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ecc775857611e1df29abba5c41355cdf540e7e9d4acfdf0f355eefee82330b7"
dependencies = [
"cfg-if",
"generator",
"scoped-tls",
]
[[package]]
name = "lru-cache"
version = "0.1.2"
@ -2535,6 +2562,12 @@ dependencies = [
"parking_lot",
]
[[package]]
name = "scoped-tls"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "332ffa32bf586782a3efaeb58f127980944bbc8c4d6913a86107ac2a5ab24b28"
[[package]]
name = "scopeguard"
version = "1.1.0"


@ -146,26 +146,22 @@ git rebase --abort
### Development Server Quickstart for Interactive Testing
Today the server is still in a state of heavy development, and hasn't been packaged or set up for
production usage.
However, we are able to run test or demo servers that are suitable for previews and testing.
After getting the code, you will need a Rust environment. Please investigate rustup for your platform
to establish this.
Once you have the source code, you need certificates to use with the server. I recommend using
Let's Encrypt, but if this is not possible, please use our insecure cert tool:
Let's Encrypt, but if this is not possible, please use our insecure cert tool. Without certificates,
authentication will fail.
mkdir insecure
cd insecure
../insecure_generate_tls.sh
You can now build and run the server with:
You can now build and run the server with the commands below. It will use a database at /tmp/kanidm.db.
cd kanidmd
cargo run -- recover_account -D /tmp/kanidm.db -n admin
cargo run -- server -D /tmp/kanidm.db -C ../insecure/ca.pem -c ../insecure/cert.pem -k ../insecure/key.pem --bindaddr 127.0.0.1:8080
cargo run -- recover_account -c ./server.toml -n admin
cargo run -- server -c ./server.toml
In a new terminal, you can now build and run the client tools with:
@ -174,7 +170,6 @@ In a new terminal, you can now build and run the client tools with:
cargo run -- self whoami -H https://localhost:8080 -D anonymous -C ../insecure/ca.pem
cargo run -- self whoami -H https://localhost:8080 -D admin -C ../insecure/ca.pem
### Using curl with anonymous:
Sometimes you may want to check the JSON output of an endpoint. Before you can do this, you need


@ -25,10 +25,36 @@ This daemon uses connection configuration from /etc/kanidm/config. This is the configuration used by the
client_tools. You can also configure some details of the unixd daemon in /etc/kanidm/unixd.
pam_allowed_login_groups = ["posix_group"]
default_shell = "/bin/bash"
home_prefix = "/home/"
home_attr = "uuid"
uid_attr_map = "spn"
gid_attr_map = "spn"
The `pam_allowed_login_groups` setting defines a set of posix groups; members of any of these
groups are allowed to log in via pam. All posix users and groups can be resolved by nss
regardless of pam login status.
regardless of pam login status. Each entry may be a group name, spn, or uuid.
`default_shell` is the default shell for users with none defined. Defaults to /bin/bash.
`home_prefix` is the path prefix prepended to home directory names. It must end with
a trailing `/`. Defaults to `/home/`.
`home_attr` is the default token attribute used for the home directory path. Valid
choices are `uuid`, `name`, `spn`. Defaults to `uuid`.
> **NOTICE:**
> All users in kanidm can change their name (and their spn) at any time. If you change
> `home_attr` from `uuid` you *must* have a plan for how to manage these directory renames
> in your system. We recommend that you use a stable id (like the uuid) with symlinks
> from the name to the uuid folder. The project plans to add automatic support for this
> with https://github.com/kanidm/kanidm/issues/180
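A minimal sketch of that symlink layout (the prefix, uuid, and name below are illustrative values; the prefix defaults to a temporary directory here so the sketch can run unprivileged, where a real deployment would use `/home/`):

```shell
#!/bin/sh
# Sketch: keep the real home directory keyed by the stable uuid, and
# expose the (renamable) account name as a symlink. After a rename,
# only the symlink needs to be replaced.
HOME_PREFIX="${HOME_PREFIX:-$(mktemp -d)/}"
USER_UUID="00000000-0000-0000-0000-000000000000"
USER_NAME="alice"

mkdir -p "${HOME_PREFIX}${USER_UUID}"
# -n replaces a stale symlink left over from a previous name.
ln -sfn "${HOME_PREFIX}${USER_UUID}" "${HOME_PREFIX}${USER_NAME}"
```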
`uid_attr_map` chooses which attribute is presented for domain local users. Defaults
to `spn`. Users from a trust will always use their spn.
`gid_attr_map` chooses which attribute is presented for domain local groups. Defaults
to `spn`. Groups from a trust will always use their spn.
You can then check the communication status of the daemon as any user account.


@ -1,4 +1,5 @@
use crate::db::Db;
use crate::unix_config::{HomeAttr, UidAttr};
use crate::unix_proto::{NssGroup, NssUser};
use kanidm_client::asynchronous::KanidmAsyncClient;
use kanidm_client::ClientError;
@ -29,6 +30,11 @@ pub struct CacheLayer {
state: Mutex<CacheState>,
pam_allow_groups: BTreeSet<String>,
timeout_seconds: u64,
default_shell: String,
home_prefix: String,
home_attr: HomeAttr,
uid_attr_map: UidAttr,
gid_attr_map: UidAttr,
}
impl ToString for Id {
@ -41,6 +47,8 @@ impl ToString for Id {
}
impl CacheLayer {
// TODO: Could consider refactoring this to be better ...
#[allow(clippy::too_many_arguments)]
pub fn new(
// need db path
path: &str,
@ -49,6 +57,11 @@ impl CacheLayer {
//
client: KanidmAsyncClient,
pam_allow_groups: Vec<String>,
default_shell: String,
home_prefix: String,
home_attr: HomeAttr,
uid_attr_map: UidAttr,
gid_attr_map: UidAttr,
) -> Result<Self, ()> {
let db = Db::new(path)?;
@ -67,6 +80,11 @@ impl CacheLayer {
state: Mutex::new(CacheState::OfflineNextCheck(SystemTime::now())),
timeout_seconds,
pam_allow_groups: pam_allow_groups.into_iter().collect(),
default_shell,
home_prefix,
home_attr,
uid_attr_map,
gid_attr_map,
})
}
@ -453,11 +471,7 @@ impl CacheLayer {
.get_group_members(uuid)
.unwrap_or_else(|_| Vec::new())
.into_iter()
.map(|ut| {
// TODO #181: We'll have a switch to convert this to spn in some configs
// in the future.
ut.name
})
.map(|ut| self.token_uidattr(&ut))
.collect()
}
@ -467,18 +481,37 @@ impl CacheLayer {
Ok(token.map(|t| t.sshkeys).unwrap_or_else(Vec::new))
}
#[inline(always)]
fn token_homedirectory(&self, token: &UnixUserToken) -> String {
format!(
"{}{}",
self.home_prefix,
match self.home_attr {
HomeAttr::Uuid => token.uuid.as_str(),
HomeAttr::Spn => token.spn.as_str(),
HomeAttr::Name => token.name.as_str(),
}
)
}
#[inline(always)]
fn token_uidattr(&self, token: &UnixUserToken) -> String {
match self.uid_attr_map {
UidAttr::Spn => token.spn.as_str(),
UidAttr::Name => token.name.as_str(),
}
.to_string()
}
pub fn get_nssaccounts(&self) -> Result<Vec<NssUser>, ()> {
self.get_cached_usertokens().map(|l| {
l.into_iter()
.map(|tok| {
NssUser {
homedir: format!("/home/{}", tok.name),
name: tok.name,
.map(|tok| NssUser {
homedir: self.token_homedirectory(&tok),
name: self.token_uidattr(&tok),
gid: tok.gidnumber,
gecos: tok.displayname,
// TODO #254: default shell override.
shell: tok.shell.unwrap_or_else(|| "/bin/bash".to_string()),
}
shell: tok.shell.unwrap_or_else(|| self.default_shell.clone()),
})
.collect()
})
@ -486,15 +519,12 @@ impl CacheLayer {
async fn get_nssaccount(&self, account_id: Id) -> Result<Option<NssUser>, ()> {
let token = self.get_usertoken(account_id).await?;
Ok(token.map(|tok| {
NssUser {
homedir: format!("/home/{}", tok.name),
name: tok.name,
Ok(token.map(|tok| NssUser {
homedir: self.token_homedirectory(&tok),
name: self.token_uidattr(&tok),
gid: tok.gidnumber,
gecos: tok.displayname,
// TODO #254: default shell override.
shell: tok.shell.unwrap_or_else(|| "/bin/bash".to_string()),
}
shell: tok.shell.unwrap_or_else(|| self.default_shell.clone()),
}))
}
@ -506,13 +536,22 @@ impl CacheLayer {
self.get_nssaccount(Id::Gid(gid)).await
}
#[inline(always)]
fn token_gidattr(&self, token: &UnixGroupToken) -> String {
match self.gid_attr_map {
UidAttr::Spn => token.spn.as_str(),
UidAttr::Name => token.name.as_str(),
}
.to_string()
}
pub fn get_nssgroups(&self) -> Result<Vec<NssGroup>, ()> {
self.get_cached_grouptokens().map(|l| {
l.into_iter()
.map(|tok| {
let members = self.get_groupmembers(&tok.uuid);
NssGroup {
name: tok.name,
name: self.token_gidattr(&tok),
gid: tok.gidnumber,
members,
}
@ -527,7 +566,7 @@ impl CacheLayer {
Ok(token.map(|tok| {
let members = self.get_groupmembers(&tok.uuid);
NssGroup {
name: tok.name,
name: self.token_gidattr(&tok),
gid: tok.gidnumber,
members,
}
@ -628,7 +667,12 @@ impl CacheLayer {
let token = self.get_usertoken(Id::Name(account_id.to_string())).await?;
Ok(token.map(|tok| {
let user_set: BTreeSet<_> = tok.groups.iter().map(|g| g.name.clone()).collect();
let user_set: BTreeSet<_> = tok
.groups
.iter()
.map(|g| vec![g.name.clone(), g.spn.clone(), g.uuid.clone()])
.flatten()
.collect();
debug!(
"Checking if -> {:?} & {:?}",

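The allow-group check in the hunk above expands each of the user's groups to its name, spn, and uuid before intersecting with `pam_allowed_login_groups`. A standalone sketch of the same idea (the group names and uuid below are illustrative, not from the source):

```rust
use std::collections::BTreeSet;

// Each group is expanded to (name, spn, uuid), so any of the three
// forms may be listed in pam_allowed_login_groups.
fn is_login_allowed(user_groups: &[(&str, &str, &str)], allowed: &BTreeSet<String>) -> bool {
    let user_set: BTreeSet<String> = user_groups
        .iter()
        .flat_map(|(name, spn, uuid)| {
            vec![name.to_string(), spn.to_string(), uuid.to_string()]
        })
        .collect();
    // Login is permitted when any allowed entry matches any form.
    user_set.intersection(allowed).next().is_some()
}

fn main() {
    let allowed: BTreeSet<String> = ["posix_group".to_string()].iter().cloned().collect();
    let groups = [(
        "posix_group",
        "posix_group@example.com",
        "00000000-0000-0000-0000-000000000000",
    )];
    assert!(is_login_allowed(&groups, &allowed));
}
```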

@ -1,4 +1,11 @@
use crate::unix_config::{HomeAttr, UidAttr};
pub const DEFAULT_SOCK_PATH: &str = "/var/run/kanidm-unixd/sock";
pub const DEFAULT_DB_PATH: &str = "/var/cache/kanidm-unixd/kanidm.cache.db";
pub const DEFAULT_CONN_TIMEOUT: u64 = 2;
pub const DEFAULT_CACHE_TIMEOUT: u64 = 15;
pub const DEFAULT_SHELL: &str = "/bin/bash";
pub const DEFAULT_HOME_PREFIX: &str = "/home/";
pub const DEFAULT_HOME_ATTR: HomeAttr = HomeAttr::Uuid;
pub const DEFAULT_UID_ATTR_MAP: UidAttr = UidAttr::Spn;
pub const DEFAULT_GID_ATTR_MAP: UidAttr = UidAttr::Spn;


@ -225,6 +225,11 @@ async fn main() {
cfg.cache_timeout,
rsclient,
cfg.pam_allowed_login_groups.clone(),
cfg.default_shell.clone(),
cfg.home_prefix.clone(),
cfg.home_attr,
cfg.uid_attr_map,
cfg.gid_attr_map,
)
.expect("Failed to build cache layer."),
);


@ -76,7 +76,7 @@ impl<'a> DbTxn<'a> {
pub fn migrate(&self) -> Result<(), ()> {
self.conn.set_prepared_statement_cache_capacity(16);
self.conn
.prepare_cached("PRAGMA journal_mode=WAL;")
.prepare("PRAGMA journal_mode=WAL;")
.and_then(|mut wal_stmt| wal_stmt.query(NO_PARAMS).map(|_| ()))
.map_err(|e| {
error!("sqlite account_t create error -> {:?}", e);
@ -185,7 +185,7 @@ impl<'a> DbTxn<'a> {
fn get_account_data_name(&self, account_id: &str) -> Result<Vec<(Vec<u8>, i64)>, ()> {
let mut stmt = self.conn
.prepare_cached(
.prepare(
"SELECT token, expiry FROM account_t WHERE uuid = :account_id OR name = :account_id OR spn = :account_id"
)
.map_err(|e| {
@ -211,7 +211,7 @@ impl<'a> DbTxn<'a> {
fn get_account_data_gid(&self, gid: u32) -> Result<Vec<(Vec<u8>, i64)>, ()> {
let mut stmt = self
.conn
.prepare_cached("SELECT token, expiry FROM account_t WHERE gidnumber = :gid")
.prepare("SELECT token, expiry FROM account_t WHERE gidnumber = :gid")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
})?;
@ -263,7 +263,7 @@ impl<'a> DbTxn<'a> {
pub fn get_accounts(&self) -> Result<Vec<UnixUserToken>, ()> {
let mut stmt = self
.conn
.prepare_cached("SELECT token FROM account_t")
.prepare("SELECT token FROM account_t")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
})?;
@ -337,7 +337,7 @@ impl<'a> DbTxn<'a> {
if updated == 0 {
let mut stmt = self.conn
.prepare_cached("INSERT INTO account_t (uuid, name, spn, gidnumber, token, expiry) VALUES (:uuid, :name, :spn, :gidnumber, :token, :expiry) ON CONFLICT(uuid) DO UPDATE SET name=excluded.name, spn=excluded.name, gidnumber=excluded.gidnumber, token=excluded.token, expiry=excluded.expiry")
.prepare("INSERT INTO account_t (uuid, name, spn, gidnumber, token, expiry) VALUES (:uuid, :name, :spn, :gidnumber, :token, :expiry) ON CONFLICT(uuid) DO UPDATE SET name=excluded.name, spn=excluded.name, gidnumber=excluded.gidnumber, token=excluded.token, expiry=excluded.expiry")
.map_err(|e| {
error!("sqlite prepare error -> {:?}", e);
})?;
@ -363,7 +363,7 @@ impl<'a> DbTxn<'a> {
// First remove everything that already exists:
let mut stmt = self
.conn
.prepare_cached("DELETE FROM memberof_t WHERE a_uuid = :a_uuid")
.prepare("DELETE FROM memberof_t WHERE a_uuid = :a_uuid")
.map_err(|e| {
error!("sqlite prepare error -> {:?}", e);
})?;
@ -377,7 +377,7 @@ impl<'a> DbTxn<'a> {
let mut stmt = self
.conn
.prepare_cached("INSERT INTO memberof_t (a_uuid, g_uuid) VALUES (:a_uuid, :g_uuid)")
.prepare("INSERT INTO memberof_t (a_uuid, g_uuid) VALUES (:a_uuid, :g_uuid)")
.map_err(|e| {
error!("sqlite prepare error -> {:?}", e);
})?;
@ -423,9 +423,7 @@ impl<'a> DbTxn<'a> {
pub fn check_account_password(&self, a_uuid: &str, cred: &str) -> Result<bool, ()> {
let mut stmt = self
.conn
.prepare_cached(
"SELECT password FROM account_t WHERE uuid = :a_uuid AND password IS NOT NULL",
)
.prepare("SELECT password FROM account_t WHERE uuid = :a_uuid AND password IS NOT NULL")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
})?;
@ -472,7 +470,7 @@ impl<'a> DbTxn<'a> {
fn get_group_data_name(&self, grp_id: &str) -> Result<Vec<(Vec<u8>, i64)>, ()> {
let mut stmt = self.conn
.prepare_cached(
.prepare(
"SELECT token, expiry FROM group_t WHERE uuid = :grp_id OR name = :grp_id OR spn = :grp_id"
)
.map_err(|e| {
@ -498,7 +496,7 @@ impl<'a> DbTxn<'a> {
fn get_group_data_gid(&self, gid: u32) -> Result<Vec<(Vec<u8>, i64)>, ()> {
let mut stmt = self
.conn
.prepare_cached("SELECT token, expiry FROM group_t WHERE gidnumber = :gid")
.prepare("SELECT token, expiry FROM group_t WHERE gidnumber = :gid")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
})?;
@ -550,7 +548,7 @@ impl<'a> DbTxn<'a> {
pub fn get_group_members(&self, g_uuid: &str) -> Result<Vec<UnixUserToken>, ()> {
let mut stmt = self
.conn
.prepare_cached("SELECT account_t.token FROM (account_t, memberof_t) WHERE account_t.uuid = memberof_t.a_uuid AND memberof_t.g_uuid = :g_uuid")
.prepare("SELECT account_t.token FROM (account_t, memberof_t) WHERE account_t.uuid = memberof_t.a_uuid AND memberof_t.g_uuid = :g_uuid")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
})?;
@ -584,7 +582,7 @@ impl<'a> DbTxn<'a> {
pub fn get_groups(&self) -> Result<Vec<UnixGroupToken>, ()> {
let mut stmt = self
.conn
.prepare_cached("SELECT token FROM group_t")
.prepare("SELECT token FROM group_t")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
})?;
@ -624,7 +622,7 @@ impl<'a> DbTxn<'a> {
})?;
let mut stmt = self.conn
.prepare_cached("INSERT OR REPLACE INTO group_t (uuid, name, spn, gidnumber, token, expiry) VALUES (:uuid, :name, :spn, :gidnumber, :token, :expiry)")
.prepare("INSERT OR REPLACE INTO group_t (uuid, name, spn, gidnumber, token, expiry) VALUES (:uuid, :name, :spn, :gidnumber, :token, :expiry)")
.map_err(|e| {
error!("sqlite prepare error -> {:?}", e);
})?;


@ -8,7 +8,7 @@ extern crate log;
pub mod cache;
pub mod client;
mod constants;
pub mod constants;
pub(crate) mod db;
pub mod unix_config;
pub mod unix_proto;


@ -1,5 +1,6 @@
use crate::constants::{
DEFAULT_CACHE_TIMEOUT, DEFAULT_CONN_TIMEOUT, DEFAULT_DB_PATH, DEFAULT_SOCK_PATH,
DEFAULT_CACHE_TIMEOUT, DEFAULT_CONN_TIMEOUT, DEFAULT_DB_PATH, DEFAULT_GID_ATTR_MAP,
DEFAULT_HOME_ATTR, DEFAULT_HOME_PREFIX, DEFAULT_SHELL, DEFAULT_SOCK_PATH, DEFAULT_UID_ATTR_MAP,
};
use serde_derive::Deserialize;
use std::fs::File;
@ -13,6 +14,24 @@ struct ConfigInt {
conn_timeout: Option<u64>,
cache_timeout: Option<u64>,
pam_allowed_login_groups: Option<Vec<String>>,
default_shell: Option<String>,
home_prefix: Option<String>,
home_attr: Option<String>,
uid_attr_map: Option<String>,
gid_attr_map: Option<String>,
}
#[derive(Debug, Copy, Clone)]
pub enum HomeAttr {
Uuid,
Spn,
Name,
}
#[derive(Debug, Copy, Clone)]
pub enum UidAttr {
Name,
Spn,
}
#[derive(Debug)]
@ -22,6 +41,11 @@ pub struct KanidmUnixdConfig {
pub conn_timeout: u64,
pub cache_timeout: u64,
pub pam_allowed_login_groups: Vec<String>,
pub default_shell: String,
pub home_prefix: String,
pub home_attr: HomeAttr,
pub uid_attr_map: UidAttr,
pub gid_attr_map: UidAttr,
}
impl Default for KanidmUnixdConfig {
@ -38,6 +62,11 @@ impl KanidmUnixdConfig {
conn_timeout: DEFAULT_CONN_TIMEOUT,
cache_timeout: DEFAULT_CACHE_TIMEOUT,
pam_allowed_login_groups: Vec::new(),
default_shell: DEFAULT_SHELL.to_string(),
home_prefix: DEFAULT_HOME_PREFIX.to_string(),
home_attr: DEFAULT_HOME_ATTR,
uid_attr_map: DEFAULT_UID_ATTR_MAP,
gid_attr_map: DEFAULT_GID_ATTR_MAP,
}
}
@ -69,6 +98,42 @@ impl KanidmUnixdConfig {
pam_allowed_login_groups: config
.pam_allowed_login_groups
.unwrap_or(self.pam_allowed_login_groups),
default_shell: config.default_shell.unwrap_or(self.default_shell),
home_prefix: config.home_prefix.unwrap_or(self.home_prefix),
home_attr: config
.home_attr
.and_then(|v| match v.as_str() {
"uuid" => Some(HomeAttr::Uuid),
"spn" => Some(HomeAttr::Spn),
"name" => Some(HomeAttr::Name),
_ => {
warn!("Invalid home_attr configured, using default ...");
None
}
})
.unwrap_or(self.home_attr),
uid_attr_map: config
.uid_attr_map
.and_then(|v| match v.as_str() {
"spn" => Some(UidAttr::Spn),
"name" => Some(UidAttr::Name),
_ => {
warn!("Invalid uid_attr_map configured, using default ...");
None
}
})
.unwrap_or(self.uid_attr_map),
gid_attr_map: config
.gid_attr_map
.and_then(|v| match v.as_str() {
"spn" => Some(UidAttr::Spn),
"name" => Some(UidAttr::Name),
_ => {
warn!("Invalid gid_attr_map configured, using default ...");
None
}
})
.unwrap_or(self.gid_attr_map),
})
}
}


@ -8,6 +8,10 @@ use kanidm::config::{Configuration, IntegrationTestConfig};
use kanidm::core::create_server_core;
use kanidm_unix_common::cache::CacheLayer;
use kanidm_unix_common::constants::{
DEFAULT_GID_ATTR_MAP, DEFAULT_HOME_ATTR, DEFAULT_HOME_PREFIX, DEFAULT_SHELL,
DEFAULT_UID_ATTR_MAP,
};
use tokio::runtime::Runtime;
use kanidm_client::asynchronous::KanidmAsyncClient;
@ -77,6 +81,11 @@ fn run_test(fix_fn: fn(&KanidmClient) -> (), test_fn: fn(CacheLayer, KanidmAsync
300,
rsclient,
vec!["allowed_group".to_string()],
DEFAULT_SHELL.to_string(),
DEFAULT_HOME_PREFIX.to_string(),
DEFAULT_HOME_ATTR,
DEFAULT_UID_ATTR_MAP,
DEFAULT_GID_ATTR_MAP,
)
.expect("Failed to build cache layer.");


@ -17,7 +17,7 @@ use uuid::Uuid;
// use std::borrow::Borrow;
const DEFAULT_CACHE_TARGET: usize = 10240;
const DEFAULT_CACHE_TARGET: usize = 16384;
const DEFAULT_IDL_CACHE_RATIO: usize = 32;
const DEFAULT_NAME_CACHE_RATIO: usize = 8;
const DEFAULT_CACHE_RMISS: usize = 8;
@ -93,7 +93,8 @@ macro_rules! get_identry {
(
$self:expr,
$au:expr,
$idl:expr
$idl:expr,
$is_read_op:expr
) => {{
lperf_trace_segment!($au, "be::idl_arc_sqlite::get_identry", || {
let mut result: Vec<Entry<_, _>> = Vec::new();
@ -114,9 +115,11 @@ macro_rules! get_identry {
// Now, get anything from nidl that is needed.
let mut db_result = $self.db.get_identry($au, &IDL::Partial(nidl))?;
// Clone everything from db_result into the cache.
if $is_read_op {
db_result.iter().for_each(|e| {
$self.entry_cache.insert(e.get_id(), Box::new(e.clone()));
});
}
// Merge the two vecs
result.append(&mut db_result);
}
@ -343,7 +346,7 @@ impl<'a> IdlArcSqliteTransaction for IdlArcSqliteReadTransaction<'a> {
au: &mut AuditScope,
idl: &IDL,
) -> Result<Vec<Entry<EntrySealed, EntryCommitted>>, OperationError> {
get_identry!(self, au, idl)
get_identry!(self, au, idl, true)
}
fn get_identry_raw(
@ -416,7 +419,7 @@ impl<'a> IdlArcSqliteTransaction for IdlArcSqliteWriteTransaction<'a> {
au: &mut AuditScope,
idl: &IDL,
) -> Result<Vec<Entry<EntrySealed, EntryCommitted>>, OperationError> {
get_identry!(self, au, idl)
get_identry!(self, au, idl, false)
}
fn get_identry_raw(
@ -850,6 +853,7 @@ impl IdlArcSqlite {
pool_size as usize,
DEFAULT_CACHE_RMISS,
DEFAULT_CACHE_WMISS,
false,
);
// The idl cache should have smaller items, and is critical for fast searches
// so we allow it to have a higher ratio of items relative to the entries.
@ -858,6 +862,7 @@ impl IdlArcSqlite {
pool_size as usize,
DEFAULT_CACHE_RMISS,
DEFAULT_CACHE_WMISS,
false,
);
let name_cache = Arc::new(
@ -865,6 +870,7 @@ impl IdlArcSqlite {
pool_size as usize,
DEFAULT_CACHE_RMISS,
DEFAULT_CACHE_WMISS,
true,
);
let allids = CowCell::new(IDLBitRange::new());


@ -16,8 +16,6 @@ use uuid::Uuid;
const DBV_ID2ENTRY: &str = "id2entry";
const DBV_INDEXV: &str = "indexv";
// TODO: Needs to change over time as number of indexes grows?
const PREPARE_STMT_CACHE: usize = 256;
#[derive(Debug)]
pub struct IdSqliteEntry {
@ -100,7 +98,7 @@ pub trait IdlSqliteTransaction {
IDL::ALLIDS => {
let mut stmt = self
.get_conn()
.prepare_cached("SELECT id, data FROM id2entry")
.prepare("SELECT id, data FROM id2entry")
.map_err(|e| {
ladmin_error!(au, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -132,7 +130,7 @@ pub trait IdlSqliteTransaction {
IDL::Partial(idli) | IDL::PartialThreshold(idli) | IDL::Indexed(idli) => {
let mut stmt = self
.get_conn()
.prepare_cached("SELECT id, data FROM id2entry WHERE id = :idl")
.prepare("SELECT id, data FROM id2entry WHERE id = :idl")
.map_err(|e| {
ladmin_error!(au, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -192,7 +190,7 @@ pub trait IdlSqliteTransaction {
let tname = format!("idx_{}_{}", itype.as_idx_str(), attr);
let mut stmt = self
.get_conn()
.prepare_cached("SELECT COUNT(name) from sqlite_master where name = :tname")
.prepare("SELECT COUNT(name) from sqlite_master where name = :tname")
.map_err(|e| {
ladmin_error!(audit, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -230,10 +228,7 @@ pub trait IdlSqliteTransaction {
itype.as_idx_str(),
attr
);
let mut stmt = self
.get_conn()
.prepare_cached(query.as_str())
.map_err(|e| {
let mut stmt = self.get_conn().prepare(query.as_str()).map_err(|e| {
ladmin_error!(audit, "SQLite Error {:?}", e);
OperationError::SQLiteError
})?;
@ -268,7 +263,7 @@ pub trait IdlSqliteTransaction {
// The table exists - lets now get the actual index itself.
let mut stmt = self
.get_conn()
.prepare_cached("SELECT uuid FROM idx_name2uuid WHERE name = :name")
.prepare("SELECT uuid FROM idx_name2uuid WHERE name = :name")
.map_err(|e| {
ladmin_error!(audit, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -299,7 +294,7 @@ pub trait IdlSqliteTransaction {
// The table exists - lets now get the actual index itself.
let mut stmt = self
.get_conn()
.prepare_cached("SELECT spn FROM idx_uuid2spn WHERE uuid = :uuid")
.prepare("SELECT spn FROM idx_uuid2spn WHERE uuid = :uuid")
.map_err(|e| {
ladmin_error!(audit, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -340,7 +335,7 @@ pub trait IdlSqliteTransaction {
// The table exists - lets now get the actual index itself.
let mut stmt = self
.get_conn()
.prepare_cached("SELECT rdn FROM idx_uuid2rdn WHERE uuid = :uuid")
.prepare("SELECT rdn FROM idx_uuid2rdn WHERE uuid = :uuid")
.map_err(|e| {
ladmin_error!(audit, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -415,7 +410,7 @@ pub trait IdlSqliteTransaction {
// This allow is critical as it resolves a life time issue in stmt.
#[allow(clippy::let_and_return)]
fn verify(&self) -> Vec<Result<(), ConsistencyError>> {
let mut stmt = match self.get_conn().prepare_cached("PRAGMA integrity_check;") {
let mut stmt = match self.get_conn().prepare("PRAGMA integrity_check;") {
Ok(r) => r,
Err(_) => return vec![Err(ConsistencyError::SqliteIntegrityFailure)],
};
@ -529,7 +524,7 @@ impl IdlSqliteWriteTransaction {
pub fn get_id2entry_max_id(&self) -> Result<u64, OperationError> {
let mut stmt = self
.conn
.prepare_cached("SELECT MAX(id) as id_max FROM id2entry")
.prepare("SELECT MAX(id) as id_max FROM id2entry")
.map_err(|_| OperationError::SQLiteError)?;
// This exists checks for if any rows WERE returned
// that way we know to shortcut or not.
@ -604,7 +599,7 @@ impl IdlSqliteWriteTransaction {
{
let mut stmt = self
.conn
.prepare_cached("INSERT OR REPLACE INTO id2entry (id, data) VALUES(:id, :data)")
.prepare("INSERT OR REPLACE INTO id2entry (id, data) VALUES(:id, :data)")
.map_err(|e| {
ladmin_error!(au, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -631,7 +626,7 @@ impl IdlSqliteWriteTransaction {
lperf_trace_segment!(au, "be::idl_sqlite::delete_identries", || {
let mut stmt = self
.conn
.prepare_cached("DELETE FROM id2entry WHERE id = :id")
.prepare("DELETE FROM id2entry WHERE id = :id")
.map_err(|e| {
ladmin_error!(au, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -664,7 +659,7 @@ impl IdlSqliteWriteTransaction {
// lperf_trace_segment!(au, "be::idl_sqlite::delete_identry", || {
let mut stmt = self
.conn
.prepare_cached("DELETE FROM id2entry WHERE id = :id")
.prepare("DELETE FROM id2entry WHERE id = :id")
.map_err(|e| {
ladmin_error!(au, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -710,7 +705,7 @@ impl IdlSqliteWriteTransaction {
);
self.conn
.prepare_cached(query.as_str())
.prepare(query.as_str())
.and_then(|mut stmt| stmt.execute_named(&[(":key", &idx_key)]))
.map_err(|e| {
ladmin_error!(audit, "SQLite Error {:?}", e);
@ -732,7 +727,7 @@ impl IdlSqliteWriteTransaction {
);
self.conn
.prepare_cached(query.as_str())
.prepare(query.as_str())
.and_then(|mut stmt| {
stmt.execute_named(&[(":key", &idx_key), (":idl", &idl_raw)])
})
@ -768,9 +763,7 @@ impl IdlSqliteWriteTransaction {
let uuids = uuid.to_hyphenated_ref().to_string();
self.conn
.prepare_cached(
"INSERT OR REPLACE INTO idx_name2uuid (name, uuid) VALUES(:name, :uuid)",
)
.prepare("INSERT OR REPLACE INTO idx_name2uuid (name, uuid) VALUES(:name, :uuid)")
.and_then(|mut stmt| stmt.execute_named(&[(":name", &name), (":uuid", &uuids)]))
.map(|_| ())
.map_err(|e| {
@ -785,7 +778,7 @@ impl IdlSqliteWriteTransaction {
name: &str,
) -> Result<(), OperationError> {
self.conn
.prepare_cached("DELETE FROM idx_name2uuid WHERE name = :name")
.prepare("DELETE FROM idx_name2uuid WHERE name = :name")
.and_then(|mut stmt| stmt.execute_named(&[(":name", &name)]))
.map(|_| ())
.map_err(|e| {
@ -820,9 +813,7 @@ impl IdlSqliteWriteTransaction {
let data =
serde_cbor::to_vec(&dbv1).map_err(|_e| OperationError::SerdeCborError)?;
self.conn
.prepare_cached(
"INSERT OR REPLACE INTO idx_uuid2spn (uuid, spn) VALUES(:uuid, :spn)",
)
.prepare("INSERT OR REPLACE INTO idx_uuid2spn (uuid, spn) VALUES(:uuid, :spn)")
.and_then(|mut stmt| stmt.execute_named(&[(":uuid", &uuids), (":spn", &data)]))
.map(|_| ())
.map_err(|e| {
@ -832,7 +823,7 @@ impl IdlSqliteWriteTransaction {
}
None => self
.conn
.prepare_cached("DELETE FROM idx_uuid2spn WHERE uuid = :uuid")
.prepare("DELETE FROM idx_uuid2spn WHERE uuid = :uuid")
.and_then(|mut stmt| stmt.execute_named(&[(":uuid", &uuids)]))
.map(|_| ())
.map_err(|e| {
@ -865,9 +856,7 @@ impl IdlSqliteWriteTransaction {
match k {
Some(k) => self
.conn
.prepare_cached(
"INSERT OR REPLACE INTO idx_uuid2rdn (uuid, rdn) VALUES(:uuid, :rdn)",
)
.prepare("INSERT OR REPLACE INTO idx_uuid2rdn (uuid, rdn) VALUES(:uuid, :rdn)")
.and_then(|mut stmt| stmt.execute_named(&[(":uuid", &uuids), (":rdn", &k)]))
.map(|_| ())
.map_err(|e| {
@ -876,7 +865,7 @@ impl IdlSqliteWriteTransaction {
}),
None => self
.conn
.prepare_cached("DELETE FROM idx_uuid2rdn WHERE uuid = :uuid")
.prepare("DELETE FROM idx_uuid2rdn WHERE uuid = :uuid")
.and_then(|mut stmt| stmt.execute_named(&[(":uuid", &uuids)]))
.map(|_| ())
.map_err(|e| {
@ -915,9 +904,7 @@ impl IdlSqliteWriteTransaction {
pub fn list_idxs(&self, audit: &mut AuditScope) -> Result<Vec<String>, OperationError> {
let mut stmt = self
.get_conn()
.prepare_cached(
"SELECT name from sqlite_master where type='table' and name LIKE 'idx_%'",
)
.prepare("SELECT name from sqlite_master where type='table' and name LIKE 'idx_%'")
.map_err(|e| {
ladmin_error!(audit, "SQLite Error {:?}", e);
OperationError::SQLiteError
@ -945,7 +932,7 @@ impl IdlSqliteWriteTransaction {
idx_table_list.iter().try_for_each(|idx_table| {
ltrace!(audit, "removing idx_table -> {:?}", idx_table);
self.conn
.prepare_cached(format!("DROP TABLE {}", idx_table).as_str())
.prepare(format!("DROP TABLE {}", idx_table).as_str())
.and_then(|mut stmt| stmt.execute(NO_PARAMS).map(|_| ()))
.map_err(|e| {
ladmin_error!(audit, "sqlite error {:?}", e);
@ -1074,10 +1061,7 @@ impl IdlSqliteWriteTransaction {
pub(crate) fn get_allids(&self, au: &mut AuditScope) -> Result<IDLBitRange, OperationError> {
ltrace!(au, "Building allids...");
let mut stmt = self
.conn
.prepare_cached("SELECT id FROM id2entry")
.map_err(|e| {
let mut stmt = self.conn.prepare("SELECT id FROM id2entry").map_err(|e| {
ladmin_error!(au, "SQLite Error {:?}", e);
OperationError::SQLiteError
})?;
@ -1102,14 +1086,12 @@ impl IdlSqliteWriteTransaction {
}
pub fn setup(&self, audit: &mut AuditScope) -> Result<(), OperationError> {
self.conn
.set_prepared_statement_cache_capacity(PREPARE_STMT_CACHE);
// Enable WAL mode, which is just faster and better.
//
// We have to use stmt + prepare_cached because execute can't handle
// We have to use stmt + prepare because execute can't handle
// the "wal" row on result when this works!
self.conn
.prepare_cached("PRAGMA journal_mode=WAL;")
.prepare("PRAGMA journal_mode=WAL;")
.and_then(|mut wal_stmt| wal_stmt.query(NO_PARAMS).map(|_| ()))
.map_err(|e| {
ladmin_error!(audit, "sqlite error {:?}", e);


@ -40,6 +40,10 @@ impl fmt::Display for Configuration {
.and_then(|_| write!(f, "max request size: {}b, ", self.maximum_request))
.and_then(|_| write!(f, "secure cookies: {}, ", self.secure_cookies))
.and_then(|_| write!(f, "with TLS: {}, ", self.tls_config.is_some()))
.and_then(|_| match self.log_level {
Some(u) => write!(f, "with log_level: {:x}, ", u),
None => write!(f, "with log_level: default, "),
})
.and_then(|_| {
write!(
f,


@ -159,7 +159,10 @@ async fn main() {
config.update_ldapbind(&sconfig.ldapbindaddress);
// Apply any cli overrides, normally debug level.
config.update_log_level(opt.commonopt().debug.as_ref().map(|v| v.clone() as u32));
if let Some(dll) = opt.commonopt().debug.as_ref() {
config.update_log_level(Some(dll.clone() as u32));
}
::std::env::set_var("RUST_LOG", "actix_web=info,kanidm=info");
env_logger::builder()


@ -0,0 +1,75 @@
# Developer Principles
As a piece of software that stores the identities of people, the project becomes
bound to social and political matters. The decisions we make have consequences
for many people - many of whom never have the chance to choose what software is used
to store their identities (think employees in a business).
This means we have a responsibility to not only be aware of our impact on our
direct users (developers, system administrators, dev ops, security and more)
but also the impact on indirect consumers - many of whom are unlikely to be in
a position to contact us to ask for changes and help.
## Ethics / Rights
If you have not already, please see our documentation on [rights and ethics]
[rights and ethics]: https://github.com/kanidm/kanidm/blob/master/ethics/README.md
## Humans First
We must at all times make decisions that put humans first. We must respect
all cultures, languages, and identities and how they are represented.
This may mean we make technical choices that are difficult or more complex,
or different to "how things have always been done". But we do this to
ensure that all people can have their identities stored how they choose.
For example, any user may change their name, display name and legal name at
any time. Many applications will break when this occurs, because they use the
name as a primary key. But this is the fault of the application. Name changes must
be allowed. Our job as technical experts is to allow that to happen.
We will never put a burden on the user to correct for poor designs on
our part. For example, locking an account if it logs in from a different
country unless the user logs in beforehand to indicate where they are
going. This makes the user carry a burden (changing the allowed login
country) when the real problem is preventing bruteforce attacks - which
can be solved technically in better ways that don't put administrative
load on humans.
## Correct and Simple
As a piece of security sensitive software we must always put correctness
first. All code must have tests. All developers must be able to run all
tests on their machine and environment of choice.
This means that the following must always work:
git clone ...
cargo test
If a test or change would require extra requirements, dependencies, or
preconfiguration, then we can no longer provide the above. Testing must
be easy and accessible, or else we won't do it, and that leads to poor
software quality.
The project must be simple. Anyone should be able to understand how it
works and why those decisions were made.
## Languages
The core server will (for now) always be written in Rust. This is due to
the strong type guarantees it gives, and how that can help raise the
quality of our project.
## Over-Configuration
Configuration will be allowed, but only if it does not impact the statements
above. Having configuration is good, but allowing too much (i.e. a scripting
engine for security rules) can give deployments the ability to violate human
first principles, which reflects badly on us.
All configuration items must be constrained to fit within our principles,
so that every kanidm deployment will always provide a positive experience
to all people.