// kanidm/libs/crypto/src/lib.rs
#![deny(warnings)]
#![warn(unused_extern_crates)]
#![deny(clippy::todo)]
#![deny(clippy::unimplemented)]
#![deny(clippy::unwrap_used)]
#![deny(clippy::expect_used)]
#![deny(clippy::panic)]
#![deny(clippy::await_holding_lock)]
#![deny(clippy::needless_pass_by_value)]
#![deny(clippy::trivially_copy_pass_by_ref)]
#![deny(clippy::unreachable)]
use argon2::{Algorithm, Argon2, Params, PasswordHash, Version};
use base64::engine::GeneralPurpose;
use base64::{alphabet, Engine};
use tracing::{debug, error, trace, warn};
use base64::engine::general_purpose;
use base64urlsafedata::Base64UrlSafeData;
use rand::Rng;
use serde::{Deserialize, Serialize};
use std::fmt;
use std::time::{Duration, Instant};
use kanidm_proto::internal::OperationError;
use openssl::error::ErrorStack as OpenSSLErrorStack;
use openssl::hash::{self, MessageDigest};
use openssl::nid::Nid;
use openssl::pkcs5::pbkdf2_hmac;
use openssl::sha::{Sha1, Sha256, Sha512};
use kanidm_hsm_crypto::{HmacKey, Tpm};
pub mod mtls;
pub mod prelude;
pub mod serialise;
pub mod x509_cert;
pub use sha2;
pub type Sha256Digest =
sha2::digest::generic_array::GenericArray<u8, sha2::digest::typenum::consts::U32>;
// NIST 800-63B says a salt should be at least 112 bits -> 14 u8. We use 24 bytes (192 bits).
const PBKDF2_SALT_LEN: usize = 24;
pub const PBKDF2_MIN_NIST_SALT_LEN: usize = 14;
// Min number of rounds for a pbkdf2
pub const PBKDF2_MIN_NIST_COST: usize = 10000;
// 32 * u8 -> 256 bits of output.
const PBKDF2_KEY_LEN: usize = 32;
const PBKDF2_MIN_NIST_KEY_LEN: usize = 32;
const PBKDF2_SHA1_MIN_KEY_LEN: usize = 19;
const DS_SHA_SALT_LEN: usize = 8;
const DS_SHA1_HASH_LEN: usize = 20;
const DS_SHA256_HASH_LEN: usize = 32;
const DS_SHA512_HASH_LEN: usize = 64;
// Taken from the argon2 library and rfc 9106
const ARGON2_VERSION: u32 = 19;
const ARGON2_SALT_LEN: usize = 16;
// 32 * u8 -> 256 bits of output.
const ARGON2_KEY_LEN: usize = 32;
// Default amount of ram we sacrifice per thread
const ARGON2_MIN_RAM_KIB: u32 = 8 * 1024;
const ARGON2_MAX_RAM_KIB: u32 = 64 * 1024;
// Amount of ram to subtract when we do a t_cost iteration. This
// is because t=2 m=32MiB is roughly equal to t=3 m=20MiB, so we step the ram
// down a little to keep the overall cost about the same.
const ARGON2_TCOST_RAM_ITER_KIB: u32 = 12 * 1024;
const ARGON2_MIN_T_COST: u32 = 2;
const ARGON2_MAX_T_COST: u32 = 16;
const ARGON2_MAX_P_COST: u32 = 1;
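// Worked example (restating the sizing comment in time_target below): in the
// worst case every thread hashes at once, so peak hashing ram is
// threads * m_cost. At the 64MiB ceiling, 8 threads -> 8 * 64MiB = 512MiB of
// transient ram just for argon2.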
#[derive(Clone, Debug)]
pub enum CryptoError {
Hsm,
HsmContextMissing,
OpenSSL(u64),
Md4Disabled,
Argon2,
Argon2Version,
Argon2Parameters,
}
impl From<OpenSSLErrorStack> for CryptoError {
fn from(ossl_err: OpenSSLErrorStack) -> Self {
error!(?ossl_err);
let code = ossl_err.errors().first().map(|e| e.code()).unwrap_or(0);
#[cfg(not(target_family = "windows"))]
let result = CryptoError::OpenSSL(code);
// this is an .into() because on windows it's a u32 not a u64
#[cfg(target_family = "windows")]
let result = CryptoError::OpenSSL(code.into());
result
}
}
#[allow(clippy::from_over_into)]
impl Into<OperationError> for CryptoError {
fn into(self) -> OperationError {
OperationError::CryptographyError
}
}
#[derive(Serialize, Deserialize, PartialEq, Eq, Clone)]
#[allow(non_camel_case_types)]
pub enum DbPasswordV1 {
TPM_ARGON2ID {
m: u32,
t: u32,
p: u32,
v: u32,
s: Base64UrlSafeData,
k: Base64UrlSafeData,
},
ARGON2ID {
m: u32,
t: u32,
p: u32,
v: u32,
s: Base64UrlSafeData,
k: Base64UrlSafeData,
},
PBKDF2(usize, Vec<u8>, Vec<u8>),
PBKDF2_SHA1(usize, Vec<u8>, Vec<u8>),
PBKDF2_SHA512(usize, Vec<u8>, Vec<u8>),
SHA1(Vec<u8>),
SSHA1(Vec<u8>, Vec<u8>),
SHA256(Vec<u8>),
SSHA256(Vec<u8>, Vec<u8>),
SHA512(Vec<u8>),
SSHA512(Vec<u8>, Vec<u8>),
NT_MD4(Vec<u8>),
}
#[derive(Serialize, Deserialize, Debug, PartialEq, Eq)]
#[allow(non_camel_case_types)]
pub enum ReplPasswordV1 {
TPM_ARGON2ID {
m_cost: u32,
t_cost: u32,
p_cost: u32,
version: u32,
salt: Base64UrlSafeData,
key: Base64UrlSafeData,
},
ARGON2ID {
m_cost: u32,
t_cost: u32,
p_cost: u32,
version: u32,
salt: Base64UrlSafeData,
key: Base64UrlSafeData,
},
PBKDF2 {
cost: usize,
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
PBKDF2_SHA1 {
cost: usize,
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
PBKDF2_SHA512 {
cost: usize,
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
SHA1 {
hash: Base64UrlSafeData,
},
SSHA1 {
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
SHA256 {
hash: Base64UrlSafeData,
},
SSHA256 {
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
SHA512 {
hash: Base64UrlSafeData,
},
SSHA512 {
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
NT_MD4 {
hash: Base64UrlSafeData,
},
}
impl fmt::Debug for DbPasswordV1 {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
DbPasswordV1::TPM_ARGON2ID { .. } => write!(f, "TPM_ARGON2ID"),
DbPasswordV1::ARGON2ID { .. } => write!(f, "ARGON2ID"),
DbPasswordV1::PBKDF2(_, _, _) => write!(f, "PBKDF2"),
DbPasswordV1::PBKDF2_SHA1(_, _, _) => write!(f, "PBKDF2_SHA1"),
DbPasswordV1::PBKDF2_SHA512(_, _, _) => write!(f, "PBKDF2_SHA512"),
DbPasswordV1::SHA1(_) => write!(f, "SHA1"),
DbPasswordV1::SSHA1(_, _) => write!(f, "SSHA1"),
DbPasswordV1::SHA256(_) => write!(f, "SHA256"),
DbPasswordV1::SSHA256(_, _) => write!(f, "SSHA256"),
DbPasswordV1::SHA512(_) => write!(f, "SHA512"),
DbPasswordV1::SSHA512(_, _) => write!(f, "SSHA512"),
DbPasswordV1::NT_MD4(_) => write!(f, "NT_MD4"),
}
}
}
#[derive(Debug)]
pub struct CryptoPolicy {
pub(crate) pbkdf2_cost: usize,
// https://docs.rs/argon2/0.5.0/argon2/struct.Params.html
// defaults to 19MiB memory, 2 iterations and 1 thread, with a 32 byte output.
pub(crate) argon2id_params: Params,
}
impl CryptoPolicy {
pub fn minimum() -> Self {
CryptoPolicy {
pbkdf2_cost: PBKDF2_MIN_NIST_COST,
argon2id_params: Params::default(),
}
}
pub fn danger_test_minimum() -> Self {
CryptoPolicy {
pbkdf2_cost: 1000,
argon2id_params: Params::new(
Params::MIN_M_COST,
Params::MIN_T_COST,
Params::MIN_P_COST,
None,
)
.unwrap_or_default(),
}
}
pub fn time_target(target_time: Duration) -> Self {
const PBKDF2_BENCH_FACTOR: usize = 10;
let pbkdf2_cost = match Password::bench_pbkdf2(PBKDF2_MIN_NIST_COST * PBKDF2_BENCH_FACTOR) {
Some(bt) => {
let ubt = bt.as_nanos() as usize;
// Get the cost per thousand rounds
let per_thou = (PBKDF2_MIN_NIST_COST * PBKDF2_BENCH_FACTOR) / 1000;
let t_per_thou = ubt / per_thou;
trace!("{:010}µs / 1000 rounds", t_per_thou);
// Now we need the attacker work in nanos
let target = target_time.as_nanos() as usize;
let r = (target / t_per_thou) * 1000;
trace!("{}µs target time", target);
trace!("Maybe rounds -> {}", r);
if r < PBKDF2_MIN_NIST_COST {
PBKDF2_MIN_NIST_COST
} else {
r
}
}
None => PBKDF2_MIN_NIST_COST,
};
// Argon2id has multiple parameters. These are all about *trade-offs* that you can
// request in how the computation is performed.
//
// rfc9106 explains that there are two algorithms stacked here. Argon2i has defences
// against side-channel timing. Argon2d provides defences for time-memory tradeoffs.
//
// We can see how this impacts timings from sources like:
// https://www.twelve21.io/how-to-choose-the-right-parameters-for-argon2/
//
// M = 256 MB, T = 2, d = 8, Time = 0.732 s
// M = 128 MB, T = 6, d = 8, Time = 0.99 s
// M = 64 MB, T = 12, d = 8, Time = 0.968 s
// M = 32 MB, T = 24, d = 8, Time = 0.896 s
// M = 16 MB, T = 49, d = 8, Time = 0.973 s
// M = 8 MB, T = 96, d = 8, Time = 0.991 s
// M = 4 MB, T = 190, d = 8, Time = 0.977 s
// M = 2 MB, T = 271, d = 8, Time = 0.973 s
// M = 1 MB, T = 639, d = 8, Time = 0.991 s
//
// As we can see, the time taken stays roughly constant, but as ram decreases the
// amount of CPU work required goes up. In our case, the primary threat is GPU
// cracking with tools like hashcat. GPUs tend to have many fast cores but very
// small amounts of fast ram for those cores. So we want to use as much ram as
// *possible* up to a limit, and only then increase iterations.
//
// This way a GPU has to expend further GPU time to compensate for the less ram.
//
// We also need to balance this against the fact we are a database, and we do have
// caches. We also don't want to over-use RAM, especially because in the worst case
// every thread will be operating in argon2id at the same time. That means
// thread x ram will be used. If we had 8 threads at 64MiB of ram, that would require
// 512MiB of ram alone just for hashing. This becomes worse as core counts scale, with
// 24 core xeons easily reaching 1.5GiB in these cases.
let mut m_cost = ARGON2_MIN_RAM_KIB;
let mut t_cost = ARGON2_MIN_T_COST;
let p_cost = ARGON2_MAX_P_COST;
// Raise memory, then iterations, until the target hash time is reached.
loop {
let params = if let Ok(p) = Params::new(m_cost, t_cost, p_cost, None) {
p
} else {
// Unable to proceed.
error!(
?m_cost,
?t_cost,
?p_cost,
"Parameters were not valid for argon2"
);
break;
};
if let Some(ubt) = Password::bench_argon2id(params) {
debug!("{}µs - t_cost {} m_cost {}", ubt.as_nanos(), t_cost, m_cost);
// Parameter adjustment
if ubt < target_time {
if m_cost < ARGON2_MAX_RAM_KIB {
// Help narrow in quicker.
let m_adjust = if target_time
.as_nanos()
.checked_div(ubt.as_nanos())
.unwrap_or(1)
>= 2
{
// Very far from target, double m_cost.
m_cost * 2
} else {
// Close! Increase in a small step
m_cost + 1024
};
m_cost = if m_adjust > ARGON2_MAX_RAM_KIB {
ARGON2_MAX_RAM_KIB
} else {
m_adjust
};
continue;
} else if t_cost < ARGON2_MAX_T_COST {
// t=2 with m=32MiB is about the same cost as t=3 m=20MiB, so when we raise
// t_cost we want to start with the ram a little higher again. About 12MiB
// appears to be worth one iteration, so we step the ram back down by that
// amount, clamped to the allowed range below.
//
// Similarly, once we hit t=4 we just need to have max ram.
t_cost += 1;
// Step the ram cost back down to compensate for the extra iteration.
let m_adjust = m_cost
.checked_sub(ARGON2_TCOST_RAM_ITER_KIB)
.unwrap_or(ARGON2_MIN_RAM_KIB);
// Clamp the value
m_cost = m_adjust.clamp(ARGON2_MIN_RAM_KIB, ARGON2_MAX_RAM_KIB);
continue;
} else {
// Unable to proceed, parameters are maxed out.
warn!("Argon2 parameters have hit their maximums - this may be a bug!");
break;
}
} else {
// Found the target time.
break;
}
} else {
error!("Unable to perform bench of argon2id, stopping benchmark");
break;
}
}
let argon2id_params = Params::new(m_cost, t_cost, p_cost, None)
// fallback
.unwrap_or_default();
let p = CryptoPolicy {
pbkdf2_cost,
argon2id_params,
};
debug!(pbkdf2_cost = %p.pbkdf2_cost, argon2id_m = %p.argon2id_params.m_cost(), argon2id_p = %p.argon2id_params.p_cost(), argon2id_t = %p.argon2id_params.t_cost(), );
p
}
}
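// A usage sketch (not part of the original file): a CryptoPolicy is intended to
// be computed once at startup and then reused for every hash. The 250ms target
// below is an illustrative assumption, not a recommendation.
//
//     let policy = CryptoPolicy::time_target(Duration::from_millis(250));
//     let pw = Password::new(&policy, "cleartext")?;
//     assert!(pw.verify("cleartext")?);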
// Why PBKDF2? Rust's bcrypt has a number of hardcodings, like a max password
// length of 72. I don't really feel like adding in so many restrictions, so I'll
// use pbkdf2 from openssl because it doesn't have the same limits.
#[derive(Clone, Debug, PartialEq)]
#[allow(non_camel_case_types)]
enum Kdf {
TPM_ARGON2ID {
m_cost: u32,
t_cost: u32,
p_cost: u32,
version: u32,
salt: Vec<u8>,
key: Vec<u8>,
},
ARGON2ID {
m_cost: u32,
t_cost: u32,
p_cost: u32,
version: u32,
salt: Vec<u8>,
key: Vec<u8>,
},
// cost, salt, hash
PBKDF2(usize, Vec<u8>, Vec<u8>),
// Imported types, will upgrade to the above.
// cost, salt, hash
PBKDF2_SHA1(usize, Vec<u8>, Vec<u8>),
// cost, salt, hash
PBKDF2_SHA512(usize, Vec<u8>, Vec<u8>),
// salt hash
SHA1(Vec<u8>),
SSHA1(Vec<u8>, Vec<u8>),
SHA256(Vec<u8>),
SSHA256(Vec<u8>, Vec<u8>),
SHA512(Vec<u8>),
SSHA512(Vec<u8>, Vec<u8>),
// hash
NT_MD4(Vec<u8>),
}
#[derive(Clone, Debug, PartialEq)]
pub struct Password {
material: Kdf,
}
impl TryFrom<DbPasswordV1> for Password {
type Error = ();
fn try_from(value: DbPasswordV1) -> Result<Self, Self::Error> {
match value {
DbPasswordV1::TPM_ARGON2ID { m, t, p, v, s, k } => Ok(Password {
material: Kdf::TPM_ARGON2ID {
m_cost: m,
t_cost: t,
p_cost: p,
version: v,
salt: s.into(),
key: k.into(),
},
}),
DbPasswordV1::ARGON2ID { m, t, p, v, s, k } => Ok(Password {
material: Kdf::ARGON2ID {
m_cost: m,
t_cost: t,
p_cost: p,
version: v,
salt: s.into(),
key: k.into(),
},
}),
DbPasswordV1::PBKDF2(c, s, h) => Ok(Password {
material: Kdf::PBKDF2(c, s, h),
}),
DbPasswordV1::PBKDF2_SHA1(c, s, h) => Ok(Password {
material: Kdf::PBKDF2_SHA1(c, s, h),
}),
DbPasswordV1::PBKDF2_SHA512(c, s, h) => Ok(Password {
material: Kdf::PBKDF2_SHA512(c, s, h),
}),
DbPasswordV1::SHA1(h) => Ok(Password {
material: Kdf::SHA1(h),
}),
DbPasswordV1::SSHA1(s, h) => Ok(Password {
material: Kdf::SSHA1(s, h),
}),
DbPasswordV1::SHA256(h) => Ok(Password {
material: Kdf::SHA256(h),
}),
DbPasswordV1::SSHA256(s, h) => Ok(Password {
material: Kdf::SSHA256(s, h),
}),
DbPasswordV1::SHA512(h) => Ok(Password {
material: Kdf::SHA512(h),
}),
DbPasswordV1::SSHA512(s, h) => Ok(Password {
material: Kdf::SSHA512(s, h),
}),
DbPasswordV1::NT_MD4(h) => Ok(Password {
material: Kdf::NT_MD4(h),
}),
}
}
}
impl TryFrom<&ReplPasswordV1> for Password {
type Error = ();
fn try_from(value: &ReplPasswordV1) -> Result<Self, Self::Error> {
match value {
ReplPasswordV1::TPM_ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => Ok(Password {
material: Kdf::TPM_ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.to_vec(),
key: key.to_vec(),
},
}),
ReplPasswordV1::ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => Ok(Password {
material: Kdf::ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.to_vec(),
key: key.to_vec(),
},
}),
ReplPasswordV1::PBKDF2 { cost, salt, hash } => Ok(Password {
material: Kdf::PBKDF2(*cost, salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::PBKDF2_SHA1 { cost, salt, hash } => Ok(Password {
material: Kdf::PBKDF2_SHA1(*cost, salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::PBKDF2_SHA512 { cost, salt, hash } => Ok(Password {
material: Kdf::PBKDF2_SHA512(*cost, salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::SHA1 { hash } => Ok(Password {
material: Kdf::SHA1(hash.to_vec()),
}),
ReplPasswordV1::SSHA1 { salt, hash } => Ok(Password {
material: Kdf::SSHA1(salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::SHA256 { hash } => Ok(Password {
material: Kdf::SHA256(hash.to_vec()),
}),
ReplPasswordV1::SSHA256 { salt, hash } => Ok(Password {
material: Kdf::SSHA256(salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::SHA512 { hash } => Ok(Password {
material: Kdf::SHA512(hash.to_vec()),
}),
ReplPasswordV1::SSHA512 { salt, hash } => Ok(Password {
material: Kdf::SSHA512(salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::NT_MD4 { hash } => Ok(Password {
material: Kdf::NT_MD4(hash.to_vec()),
}),
}
}
}
// OpenLDAP based their PBKDF2 implementation on passlib from python, which uses a
// non-standard base64 altchar set and padding that is not supported by
// anything else in the world. To manage this, we only ever encode to base64 with
// no pad, but we have to remap ab64 to b64. This macro allows standard b64 with
// padding to pass through, and remaps ab64 to standard b64 with padding.
macro_rules! ab64_to_b64 {
($ab64:expr) => {{
let mut s = $ab64.replace(".", "+");
match s.len() & 3 {
0 => {
// Do nothing
}
1 => {
// One is invalid, do nothing, we'll error in base64
}
2 => s.push_str("=="),
3 => s.push_str("="),
_ => unreachable!(),
}
s
}};
}
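// For example (a sketch): passlib's ab64 "c2FsdA" becomes standard b64
// "c2FsdA==" with its padding restored, and "AB.D" becomes "AB+D" with the
// '.' altchar remapped.
//
// The TryFrom<&str> impl below accepts the import formats matched in its body:
// django "pbkdf2_sha256$cost$salt$hash", "ipaNTHash: ", "sambaNTPassword: ",
// 389ds "{SHA}"/"{SSHA}"/"{SHA256}"/"{SSHA256}"/"{SHA512}"/"{SSHA512}",
// OpenLDAP "{PBKDF2}"/"{PBKDF2-SHA1}"/"{PBKDF2-SHA256}"/"{PBKDF2-SHA512}",
// and "{ARGON2}" PHC strings.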
impl TryFrom<&str> for Password {
type Error = ();
// As we may add more algos, we keep the single-arm match so it's easy to extend later.
#[allow(clippy::single_match)]
fn try_from(value: &str) -> Result<Self, Self::Error> {
// There is probably a more efficient way to try this given different types?
// test django - algo$salt$hash
let django_pbkdf: Vec<&str> = value.split('$').collect();
if django_pbkdf.len() == 4 {
let algo = django_pbkdf[0];
let cost = django_pbkdf[1];
let salt = django_pbkdf[2];
let hash = django_pbkdf[3];
match algo {
"pbkdf2_sha256" => {
let c = cost.parse::<usize>().map_err(|_| ())?;
let s: Vec<_> = salt.as_bytes().to_vec();
let h = general_purpose::STANDARD.decode(hash).map_err(|_| ())?;
if h.len() < PBKDF2_MIN_NIST_KEY_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::PBKDF2(c, s, h),
});
}
_ => {}
}
}
if value.starts_with("ipaNTHash: ") {
let nt_md4 = match value.split_once(' ') {
Some((_, v)) => v,
None => {
return Err(());
}
};
// Great work.
let h = base64::engine::general_purpose::URL_SAFE_NO_PAD
.decode(nt_md4)
.or_else(|_| base64::engine::general_purpose::URL_SAFE.decode(nt_md4))
.map_err(|_| ())?;
return Ok(Password {
material: Kdf::NT_MD4(h),
});
}
if value.starts_with("sambaNTPassword: ") {
let nt_md4 = match value.split_once(' ') {
Some((_, v)) => v,
None => {
return Err(());
}
};
let h = hex::decode(nt_md4).map_err(|_| ())?;
return Ok(Password {
material: Kdf::NT_MD4(h),
});
}
// Test 389ds formats
if let Some(ds_ssha1) = value.strip_prefix("{SHA}") {
let h = general_purpose::STANDARD.decode(ds_ssha1).map_err(|_| ())?;
if h.len() != DS_SHA1_HASH_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::SHA1(h.to_vec()),
});
}
if let Some(ds_ssha1) = value.strip_prefix("{SSHA}") {
let sh = general_purpose::STANDARD.decode(ds_ssha1).map_err(|_| ())?;
let (h, s) = sh.split_at(DS_SHA1_HASH_LEN);
if s.len() != DS_SHA_SALT_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::SSHA1(s.to_vec(), h.to_vec()),
});
}
if let Some(ds_ssha256) = value.strip_prefix("{SHA256}") {
let h = general_purpose::STANDARD
.decode(ds_ssha256)
.map_err(|_| ())?;
if h.len() != DS_SHA256_HASH_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::SHA256(h.to_vec()),
});
}
if let Some(ds_ssha256) = value.strip_prefix("{SSHA256}") {
let sh = general_purpose::STANDARD
.decode(ds_ssha256)
.map_err(|_| ())?;
let (h, s) = sh.split_at(DS_SHA256_HASH_LEN);
if s.len() != DS_SHA_SALT_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::SSHA256(s.to_vec(), h.to_vec()),
});
}
if let Some(ds_ssha512) = value.strip_prefix("{SHA512}") {
let h = general_purpose::STANDARD
.decode(ds_ssha512)
.map_err(|_| ())?;
if h.len() != DS_SHA512_HASH_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::SHA512(h.to_vec()),
});
}
if let Some(ds_ssha512) = value.strip_prefix("{SSHA512}") {
let sh = general_purpose::STANDARD
.decode(ds_ssha512)
.map_err(|_| ())?;
let (h, s) = sh.split_at(DS_SHA512_HASH_LEN);
if s.len() != DS_SHA_SALT_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::SSHA512(s.to_vec(), h.to_vec()),
});
}
// Test for OpenLDAP formats
if value.starts_with("{PBKDF2}")
|| value.starts_with("{PBKDF2-SHA1}")
|| value.starts_with("{PBKDF2-SHA256}")
|| value.starts_with("{PBKDF2-SHA512}")
{
let ol_pbkdf2 = match value.split_once('}') {
Some((_, v)) => v,
None => {
return Err(());
}
};
let ol_pbkdf: Vec<&str> = ol_pbkdf2.split('$').collect();
if ol_pbkdf.len() == 3 {
let cost = ol_pbkdf[0];
let salt = ol_pbkdf[1];
let hash = ol_pbkdf[2];
let c = cost.parse::<usize>().map_err(|_| ())?;
let s = ab64_to_b64!(salt);
let base64_decoder_config = general_purpose::GeneralPurposeConfig::new()
.with_decode_allow_trailing_bits(true);
let base64_decoder =
GeneralPurpose::new(&alphabet::STANDARD, base64_decoder_config);
let s = base64_decoder.decode(s).map_err(|e| {
error!(?e, "Invalid base64 in oldap pbkdf2-sha1");
})?;
let h = ab64_to_b64!(hash);
let h = base64_decoder.decode(h).map_err(|e| {
error!(?e, "Invalid base64 in oldap pbkdf2-sha1");
})?;
// This is just sha1 in a trenchcoat.
if value.strip_prefix("{PBKDF2}").is_some()
|| value.strip_prefix("{PBKDF2-SHA1}").is_some()
{
if h.len() < PBKDF2_SHA1_MIN_KEY_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::PBKDF2_SHA1(c, s, h),
});
}
if value.strip_prefix("{PBKDF2-SHA256}").is_some() {
if h.len() < PBKDF2_MIN_NIST_KEY_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::PBKDF2(c, s, h),
});
}
if value.strip_prefix("{PBKDF2-SHA512}").is_some() {
if h.len() < PBKDF2_MIN_NIST_KEY_LEN {
return Err(());
}
return Ok(Password {
material: Kdf::PBKDF2_SHA512(c, s, h),
});
}
// Should be no way to get here!
return Err(());
} else {
warn!("oldap pbkdf2 found but invalid number of elements?");
}
}
if let Some(argon2_phc) = value.strip_prefix("{ARGON2}") {
match PasswordHash::try_from(argon2_phc) {
Ok(PasswordHash {
algorithm,
version,
params,
salt,
hash,
}) => {
if algorithm.as_str() != "argon2id" {
error!(alg = %algorithm.as_str(), "Only argon2id is supported");
return Err(());
}
let version = version.unwrap_or(ARGON2_VERSION);
let version: Version = version.try_into().map_err(|_| {
error!("Failed to convert {} to valid argon2id version", version);
})?;
let m_cost = params.get_decimal("m").ok_or_else(|| {
error!("Failed to access m_cost parameter");
})?;
let t_cost = params.get_decimal("t").ok_or_else(|| {
error!("Failed to access t_cost parameter");
})?;
let p_cost = params.get_decimal("p").ok_or_else(|| {
error!("Failed to access p_cost parameter");
})?;
let salt = salt
.and_then(|s| {
let mut salt_arr = [0u8; 64];
s.decode_b64(&mut salt_arr)
.ok()
.map(|salt_bytes| salt_bytes.to_owned())
})
.ok_or_else(|| {
error!("Failed to access salt");
})?;
trace!(?salt);
let key = hash.map(|h| h.as_bytes().into()).ok_or_else(|| {
error!("Failed to access key");
})?;
return Ok(Password {
material: Kdf::ARGON2ID {
m_cost,
t_cost,
p_cost,
version: version as u32,
salt,
key,
},
});
}
Err(e) => {
error!(?e, "Invalid argon2 phc string");
return Err(());
}
}
}
// Nothing matched to this point.
Err(())
}
}
impl Password {
fn bench_pbkdf2(pbkdf2_cost: usize) -> Option<Duration> {
let mut rng = rand::thread_rng();
let salt: Vec<u8> = (0..PBKDF2_SALT_LEN).map(|_| rng.gen()).collect();
let input: Vec<u8> = (0..PBKDF2_SALT_LEN).map(|_| rng.gen()).collect();
// This is 256 bits of output
let mut key: Vec<u8> = (0..PBKDF2_KEY_LEN).map(|_| 0).collect();
let start = Instant::now();
pbkdf2_hmac(
input.as_slice(),
salt.as_slice(),
pbkdf2_cost,
MessageDigest::sha256(),
key.as_mut_slice(),
)
.ok()?;
let end = Instant::now();
end.checked_duration_since(start)
}
fn bench_argon2id(params: Params) -> Option<Duration> {
let mut rng = rand::thread_rng();
let salt: Vec<u8> = (0..ARGON2_SALT_LEN).map(|_| rng.gen()).collect();
let input: Vec<u8> = (0..ARGON2_SALT_LEN).map(|_| rng.gen()).collect();
let mut key: Vec<u8> = (0..ARGON2_KEY_LEN).map(|_| 0).collect();
let argon = Argon2::new(Algorithm::Argon2id, Version::V0x13, params);
let start = Instant::now();
argon
.hash_password_into(input.as_slice(), salt.as_slice(), key.as_mut_slice())
.ok()?;
let end = Instant::now();
end.checked_duration_since(start)
}
pub fn new_pbkdf2(policy: &CryptoPolicy, cleartext: &str) -> Result<Self, CryptoError> {
let pbkdf2_cost = policy.pbkdf2_cost;
let mut rng = rand::thread_rng();
let salt: Vec<u8> = (0..PBKDF2_SALT_LEN).map(|_| rng.gen()).collect();
let mut key: Vec<u8> = (0..PBKDF2_KEY_LEN).map(|_| 0).collect();
pbkdf2_hmac(
cleartext.as_bytes(),
salt.as_slice(),
pbkdf2_cost,
MessageDigest::sha256(),
key.as_mut_slice(),
)
.map(|()| {
// Wrap the derived key in our kdf material.
Kdf::PBKDF2(pbkdf2_cost, salt, key)
})
.map(|material| Password { material })
.map_err(|e| e.into())
}
pub fn new_argon2id(policy: &CryptoPolicy, cleartext: &str) -> Result<Self, CryptoError> {
let version = Version::V0x13;
let argon = Argon2::new(Algorithm::Argon2id, version, policy.argon2id_params.clone());
let mut rng = rand::thread_rng();
let salt: Vec<u8> = (0..ARGON2_SALT_LEN).map(|_| rng.gen()).collect();
let mut key: Vec<u8> = (0..ARGON2_KEY_LEN).map(|_| 0).collect();
argon
.hash_password_into(cleartext.as_bytes(), salt.as_slice(), key.as_mut_slice())
.map(|()| Kdf::ARGON2ID {
m_cost: policy.argon2id_params.m_cost(),
t_cost: policy.argon2id_params.t_cost(),
p_cost: policy.argon2id_params.p_cost(),
version: version as u32,
salt,
key,
})
.map_err(|_| CryptoError::Argon2)
.map(|material| Password { material })
}
pub fn new_argon2id_hsm(
policy: &CryptoPolicy,
cleartext: &str,
hsm: &mut dyn Tpm,
hmac_key: &HmacKey,
) -> Result<Self, CryptoError> {
let version = Version::V0x13;
let argon = Argon2::new(Algorithm::Argon2id, version, policy.argon2id_params.clone());
let mut rng = rand::thread_rng();
let salt: Vec<u8> = (0..ARGON2_SALT_LEN).map(|_| rng.gen()).collect();
let mut check_key: Vec<u8> = (0..ARGON2_KEY_LEN).map(|_| 0).collect();
argon
.hash_password_into(
cleartext.as_bytes(),
salt.as_slice(),
check_key.as_mut_slice(),
)
.map_err(|_| CryptoError::Argon2)
.and_then(|()| {
hsm.hmac(hmac_key, &check_key).map_err(|err| {
error!(?err, "hsm error");
CryptoError::Hsm
})
})
.map(|key| Kdf::TPM_ARGON2ID {
m_cost: policy.argon2id_params.m_cost(),
t_cost: policy.argon2id_params.t_cost(),
p_cost: policy.argon2id_params.p_cost(),
version: version as u32,
salt,
key,
})
.map(|material| Password { material })
}
#[inline]
pub fn new(policy: &CryptoPolicy, cleartext: &str) -> Result<Self, CryptoError> {
Self::new_argon2id(policy, cleartext)
}
pub fn verify(&self, cleartext: &str) -> Result<bool, CryptoError> {
self.verify_ctx(cleartext, None)
}
pub fn verify_ctx(
&self,
cleartext: &str,
hsm: Option<(&mut dyn Tpm, &HmacKey)>,
) -> Result<bool, CryptoError> {
match (&self.material, hsm) {
(
Kdf::TPM_ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
},
Some((hsm, hmac_key)),
) => {
let version: Version = (*version).try_into().map_err(|_| {
error!("Failed to convert {} to valid argon2id version", version);
CryptoError::Argon2Version
})?;
let key_len = key.len();
let params =
Params::new(*m_cost, *t_cost, *p_cost, Some(key_len)).map_err(|e| {
error!(err = ?e, "invalid argon2id parameters");
CryptoError::Argon2Parameters
})?;
let argon = Argon2::new(Algorithm::Argon2id, version, params);
let mut check_key: Vec<u8> = (0..key_len).map(|_| 0).collect();
argon
.hash_password_into(
cleartext.as_bytes(),
salt.as_slice(),
check_key.as_mut_slice(),
)
.map_err(|e| {
error!(err = ?e, "unable to perform argon2id hash");
CryptoError::Argon2
})
.and_then(|()| {
hsm.hmac(hmac_key, &check_key).map_err(|err| {
error!(?err, "hsm error");
CryptoError::Hsm
})
})
.map(|hmac_key| {
// Actually compare the outputs.
&hmac_key == key
})
}
(Kdf::TPM_ARGON2ID { .. }, None) => {
error!("Unable to validate password - not hsm context available");
Err(CryptoError::HsmContextMissing)
}
(
Kdf::ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
},
_,
) => {
let version: Version = (*version).try_into().map_err(|_| {
error!("Failed to convert {} to valid argon2id version", version);
CryptoError::Argon2Version
})?;
let key_len = key.len();
let params =
Params::new(*m_cost, *t_cost, *p_cost, Some(key_len)).map_err(|e| {
error!(err = ?e, "invalid argon2id parameters");
CryptoError::Argon2Parameters
})?;
let argon = Argon2::new(Algorithm::Argon2id, version, params);
let mut check_key: Vec<u8> = (0..key_len).map(|_| 0).collect();
argon
.hash_password_into(
cleartext.as_bytes(),
salt.as_slice(),
check_key.as_mut_slice(),
)
.map_err(|e| {
error!(err = ?e, "unable to perform argon2id hash");
CryptoError::Argon2
})
.map(|()| {
// Actually compare the outputs.
&check_key == key
})
}
(Kdf::PBKDF2(cost, salt, key), _) => {
// We have to get the number of bits to derive from our stored hash
// as some imported hash types may have variable lengths
let key_len = key.len();
debug_assert!(key_len >= PBKDF2_MIN_NIST_KEY_LEN);
let mut chal_key: Vec<u8> = (0..key_len).map(|_| 0).collect();
pbkdf2_hmac(
cleartext.as_bytes(),
salt.as_slice(),
*cost,
MessageDigest::sha256(),
chal_key.as_mut_slice(),
)
.map(|()| {
// Actually compare the outputs.
&chal_key == key
})
.map_err(|e| e.into())
}
(Kdf::PBKDF2_SHA1(cost, salt, key), _) => {
let key_len = key.len();
debug_assert!(key_len >= PBKDF2_SHA1_MIN_KEY_LEN);
let mut chal_key: Vec<u8> = (0..key_len).map(|_| 0).collect();
pbkdf2_hmac(
cleartext.as_bytes(),
salt.as_slice(),
*cost,
MessageDigest::sha1(),
chal_key.as_mut_slice(),
)
.map(|()| {
// Actually compare the outputs.
&chal_key == key
})
.map_err(|e| e.into())
}
(Kdf::PBKDF2_SHA512(cost, salt, key), _) => {
let key_len = key.len();
debug_assert!(key_len >= PBKDF2_MIN_NIST_KEY_LEN);
let mut chal_key: Vec<u8> = (0..key_len).map(|_| 0).collect();
pbkdf2_hmac(
cleartext.as_bytes(),
salt.as_slice(),
*cost,
MessageDigest::sha512(),
chal_key.as_mut_slice(),
)
.map(|()| {
// Actually compare the outputs.
&chal_key == key
})
.map_err(|e| e.into())
}
(Kdf::SHA1(key), _) => {
let mut hasher = Sha1::new();
hasher.update(cleartext.as_bytes());
let r = hasher.finish();
Ok(key == &(r.to_vec()))
}
(Kdf::SSHA1(salt, key), _) => {
let mut hasher = Sha1::new();
hasher.update(cleartext.as_bytes());
hasher.update(salt);
let r = hasher.finish();
Ok(key == &(r.to_vec()))
}
(Kdf::SHA256(key), _) => {
let mut hasher = Sha256::new();
hasher.update(cleartext.as_bytes());
let r = hasher.finish();
Ok(key == &(r.to_vec()))
}
(Kdf::SSHA256(salt, key), _) => {
let mut hasher = Sha256::new();
hasher.update(cleartext.as_bytes());
hasher.update(salt);
let r = hasher.finish();
Ok(key == &(r.to_vec()))
}
(Kdf::SHA512(key), _) => {
let mut hasher = Sha512::new();
hasher.update(cleartext.as_bytes());
let r = hasher.finish();
Ok(key == &(r.to_vec()))
}
(Kdf::SSHA512(salt, key), _) => {
let mut hasher = Sha512::new();
hasher.update(cleartext.as_bytes());
hasher.update(salt);
let r = hasher.finish();
Ok(key == &(r.to_vec()))
}
(Kdf::NT_MD4(key), _) => {
// We need to get the cleartext to utf16le for reasons.
let clear_utf16le: Vec<u8> = cleartext
.encode_utf16()
.map(|c| c.to_le_bytes())
.flat_map(|i| i.into_iter())
.collect();
let dgst = MessageDigest::from_nid(Nid::MD4).ok_or_else(|| {
error!("Unable to access MD4 - fips mode may be enabled, or you may need to activate the legacy provider.");
error!("For more details, see https://wiki.openssl.org/index.php/OpenSSL_3.0#Providers");
CryptoError::Md4Disabled
})?;
hash::hash(dgst, &clear_utf16le)
.map_err(|e| {
debug!(?e);
error!("Unable to digest MD4 - fips mode may be enabled, or you may need to activate the legacy provider.");
error!("For more details, see https://wiki.openssl.org/index.php/OpenSSL_3.0#Providers");
CryptoError::Md4Disabled
})
.map(|chal_key| chal_key.as_ref() == key)
}
}
}
pub fn to_dbpasswordv1(&self) -> DbPasswordV1 {
match &self.material {
Kdf::TPM_ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => DbPasswordV1::TPM_ARGON2ID {
m: *m_cost,
t: *t_cost,
p: *p_cost,
v: *version,
s: salt.clone().into(),
k: key.clone().into(),
},
Kdf::ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => DbPasswordV1::ARGON2ID {
m: *m_cost,
t: *t_cost,
p: *p_cost,
v: *version,
s: salt.clone().into(),
k: key.clone().into(),
},
Kdf::PBKDF2(cost, salt, hash) => {
DbPasswordV1::PBKDF2(*cost, salt.clone(), hash.clone())
}
Kdf::PBKDF2_SHA1(cost, salt, hash) => {
DbPasswordV1::PBKDF2_SHA1(*cost, salt.clone(), hash.clone())
}
Kdf::PBKDF2_SHA512(cost, salt, hash) => {
DbPasswordV1::PBKDF2_SHA512(*cost, salt.clone(), hash.clone())
}
Kdf::SHA1(hash) => DbPasswordV1::SHA1(hash.clone()),
Kdf::SSHA1(salt, hash) => DbPasswordV1::SSHA1(salt.clone(), hash.clone()),
Kdf::SHA256(hash) => DbPasswordV1::SHA256(hash.clone()),
Kdf::SSHA256(salt, hash) => DbPasswordV1::SSHA256(salt.clone(), hash.clone()),
Kdf::SHA512(hash) => DbPasswordV1::SHA512(hash.clone()),
Kdf::SSHA512(salt, hash) => DbPasswordV1::SSHA512(salt.clone(), hash.clone()),
Kdf::NT_MD4(hash) => DbPasswordV1::NT_MD4(hash.clone()),
}
}
pub fn to_repl_v1(&self) -> ReplPasswordV1 {
match &self.material {
Kdf::TPM_ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => ReplPasswordV1::TPM_ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.clone().into(),
key: key.clone().into(),
},
Kdf::ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => ReplPasswordV1::ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.clone().into(),
key: key.clone().into(),
},
Kdf::PBKDF2(cost, salt, hash) => ReplPasswordV1::PBKDF2 {
cost: *cost,
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::PBKDF2_SHA1(cost, salt, hash) => ReplPasswordV1::PBKDF2_SHA1 {
cost: *cost,
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::PBKDF2_SHA512(cost, salt, hash) => ReplPasswordV1::PBKDF2_SHA512 {
cost: *cost,
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::SHA1(hash) => ReplPasswordV1::SHA1 {
hash: hash.clone().into(),
},
Kdf::SSHA1(salt, hash) => ReplPasswordV1::SSHA1 {
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::SHA256(hash) => ReplPasswordV1::SHA256 {
hash: hash.clone().into(),
},
Kdf::SSHA256(salt, hash) => ReplPasswordV1::SSHA256 {
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::SHA512(hash) => ReplPasswordV1::SHA512 {
hash: hash.clone().into(),
},
Kdf::SSHA512(salt, hash) => ReplPasswordV1::SSHA512 {
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::NT_MD4(hash) => ReplPasswordV1::NT_MD4 {
hash: hash.clone().into(),
},
}
}
pub fn requires_upgrade(&self) -> bool {
match &self.material {
Kdf::ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => {
*version < ARGON2_VERSION ||
salt.len() < ARGON2_SALT_LEN ||
key.len() < ARGON2_KEY_LEN ||
// Can't multi-thread
*p_cost > ARGON2_MAX_P_COST ||
// Likely too long on cpu time.
*t_cost > ARGON2_MAX_T_COST ||
// Too much ram
*m_cost > ARGON2_MAX_RAM_KIB
}
// Only used in unixd today
Kdf::TPM_ARGON2ID { .. } => false,
// All now upgraded to argon2id
Kdf::PBKDF2(_, _, _)
| Kdf::PBKDF2_SHA512(_, _, _)
| Kdf::PBKDF2_SHA1(_, _, _)
| Kdf::SHA1(_)
| Kdf::SSHA1(_, _)
| Kdf::SHA256(_)
| Kdf::SSHA256(_, _)
| Kdf::SHA512(_)
| Kdf::SSHA512(_, _)
| Kdf::NT_MD4(_) => true,
}
}
}
#[cfg(test)]
mod tests {
use kanidm_hsm_crypto::soft::SoftTpm;
use kanidm_hsm_crypto::AuthValue;
use std::convert::TryFrom;
use crate::*;
#[test]
fn test_credential_simple() {
let p = CryptoPolicy::minimum();
let c = Password::new(&p, "password").unwrap();
assert!(c.verify("password").unwrap());
assert!(!c.verify("password1").unwrap());
assert!(!c.verify("Password1").unwrap());
assert!(!c.verify("It Works!").unwrap());
assert!(!c.verify("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa").unwrap());
}
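// A round-trip sketch (uses only APIs defined in this file): serialising to
// DbPasswordV1 and back must preserve verification behaviour.
#[test]
fn test_password_db_roundtrip() {
    let p = CryptoPolicy::minimum();
    let c = Password::new(&p, "password").unwrap();
    let db = c.to_dbpasswordv1();
    let r = Password::try_from(db).expect("Failed to round-trip via DbPasswordV1");
    assert!(r.verify("password").unwrap());
    assert!(!r.verify("password1").unwrap());
}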
#[test]
fn test_password_pbkdf2() {
let p = CryptoPolicy::minimum();
let c = Password::new_pbkdf2(&p, "password").unwrap();
assert!(c.verify("password").unwrap());
assert!(!c.verify("password1").unwrap());
assert!(!c.verify("Password1").unwrap());
}
#[test]
fn test_password_argon2id() {
let p = CryptoPolicy::minimum();
let c = Password::new_argon2id(&p, "password").unwrap();
assert!(c.verify("password").unwrap());
assert!(!c.verify("password1").unwrap());
assert!(!c.verify("Password1").unwrap());
}
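// A sketch of the upgrade policy: a hash freshly generated by this file's own
// argon2id path should already satisfy the current policy bounds.
#[test]
fn test_password_argon2id_no_upgrade_required() {
    let p = CryptoPolicy::minimum();
    let c = Password::new_argon2id(&p, "password").unwrap();
    assert!(!c.requires_upgrade());
}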
#[test]
fn test_password_from_invalid() {
assert!(Password::try_from("password").is_err())
}
#[test]
fn test_password_from_django_pbkdf2_sha256() {
let im_pw = "pbkdf2_sha256$36000$xIEozuZVAoYm$uW1b35DUKyhvQAf1mBqMvoBDcqSD06juzyO/nmyV0+w=";
let password = "eicieY7ahchaoCh0eeTa";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_ds_sha1() {
let im_pw = "{SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_ds_ssha1() {
let im_pw = "{SSHA}EyzbBiP4u4zxOrLpKTORI/RX3HC6TCTJtnVOCQ==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_ds_sha256() {
let im_pw = "{SHA256}XohImNooBHFR0OVvjcYpJ3NgPQ1qq73WKhHvch0VQtg=";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_ds_ssha256() {
let im_pw = "{SSHA256}luYWfFJOZgxySTsJXHgIaCYww4yMpu6yest69j/wO5n5OycuHFV/GQ==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_ds_sha512() {
let im_pw = "{SHA512}sQnzu7wkTrgkQZF+0G1hi5AI3Qmzvv0bXgc5THBqi7mAsdd4Xll27ASbRt9fEyavWi6m0QP9B8lThf+rDKy8hg==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_ds_ssha512() {
let im_pw = "{SSHA512}JwrSUHkI7FTAfHRVR6KoFlSN0E3dmaQWARjZ+/UsShYlENOqDtFVU77HJLLrY2MuSp0jve52+pwtdVl2QUAHukQ0XUf5LDtM";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_openldap_pkbdf2() {
let im_pw = "{PBKDF2}10000$IlfapjA351LuDSwYC0IQ8Q$saHqQTuYnjJN/tmAndT.8mJt.6w";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_openldap_pkbdf2_sha1() {
let im_pw = "{PBKDF2-SHA1}10000$ZBEH6B07rgQpJSikyvMU2w$TAA03a5IYkz1QlPsbJKvUsTqNV";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_openldap_pkbdf2_sha256() {
let im_pw = "{PBKDF2-SHA256}10000$henZGfPWw79Cs8ORDeVNrQ$1dTJy73v6n3bnTmTZFghxHXHLsAzKaAy8SksDfZBPIw";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_openldap_pkbdf2_sha512() {
let im_pw = "{PBKDF2-SHA512}10000$Je1Uw19Bfv5lArzZ6V3EPw$g4T/1sqBUYWl9o93MVnyQ/8zKGSkPbKaXXsT8WmysXQJhWy8MRP2JFudSL.N9RklQYgDPxPjnfum/F2f/TrppA";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
// Not supported in openssl, may need an external crate.
// Can be generated with:
// slappasswd -s password -o module-load=/usr/lib64/openldap/pw-argon2.so -h {ARGON2}
#[test]
fn test_password_from_openldap_argon2() {
sketching::test_init();
let im_pw = "{ARGON2}$argon2id$v=19$m=65536,t=2,p=1$IyTQMsvzB2JHDiWx8fq7Ew$VhYOA7AL0kbRXI5g2kOyyp8St1epkNj7WZyUY4pAIQQ";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(!r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
/*
* wbrown - 20221104 - I tried to programmatically enable the legacy provider, but
* it consistently "did nothing at all", meaning we have to rely on users to enable
* this for this test.
*/
/*
#[cfg(openssl3)]
fn setup_openssl_legacy_provider() -> openssl::lib_ctx::LibCtx {
let ctx = openssl::lib_ctx::LibCtx::new()
.expect("Failed to create new library context");
openssl::provider::Provider::load(Some(&ctx), "legacy")
.expect("Failed to setup provider.");
eprintln!("setup legacy provider maybe??");
ctx
}
*/
#[test]
fn test_password_from_ipa_nt_hash() {
sketching::test_init();
// Base64 no pad
let im_pw = "ipaNTHash: iEb36u6PsRetBr3YMLdYbA";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
match r.verify(password) {
Ok(r) => assert!(r),
Err(_) =>
{
#[allow(clippy::panic)]
if cfg!(openssl3) {
warn!("To run this test, enable the legacy provider.");
} else {
panic!("openssl3 not enabled");
}
}
}
let im_pw = "ipaNTHash: pS43DjQLcUYhaNF_cd_Vhw==";
Password::try_from(im_pw).expect("Failed to parse");
}
#[test]
fn test_password_from_samba_nt_hash() {
sketching::test_init();
// Base64 no pad
let im_pw = "sambaNTPassword: 8846F7EAEE8FB117AD06BDD830B7586C";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
match r.verify(password) {
Ok(r) => assert!(r),
Err(_) =>
{
#[allow(clippy::panic)]
if cfg!(openssl3) {
warn!("To run this test, enable the legacy provider.");
} else {
panic!("OpenSSL3 feature not enabled")
}
}
}
}
#[test]
fn test_password_argon2id_hsm_bind() {
sketching::test_init();
let mut hsm: Box<dyn Tpm> = Box::new(SoftTpm::new());
let auth_value = AuthValue::ephemeral().unwrap();
let loadable_machine_key = hsm.machine_key_create(&auth_value).unwrap();
let machine_key = hsm
.machine_key_load(&auth_value, &loadable_machine_key)
.unwrap();
let loadable_hmac_key = hsm.hmac_key_create(&machine_key).unwrap();
let key = hsm.hmac_key_load(&machine_key, &loadable_hmac_key).unwrap();
let ctx: &mut dyn Tpm = &mut *hsm;
let p = CryptoPolicy::minimum();
let c = Password::new_argon2id_hsm(&p, "password", ctx, &key).unwrap();
assert!(matches!(
c.verify("password"),
Err(CryptoError::HsmContextMissing)
));
// Assert it fails without the hmac
let dup = match &c.material {
Kdf::TPM_ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => Password {
material: Kdf::ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.clone(),
key: key.clone(),
},
},
#[allow(clippy::unreachable)]
_ => unreachable!(),
};
assert!(!dup.verify("password").unwrap());
assert!(c.verify_ctx("password", Some((ctx, &key))).unwrap());
assert!(!c.verify_ctx("password1", Some((ctx, &key))).unwrap());
assert!(!c.verify_ctx("Password1", Some((ctx, &key))).unwrap());
}
}