Mirror of https://github.com/kanidm/kanidm.git, synced 2025-02-23 20:47:01 +01:00
20190405 refint precursors (#40)

* Improved validation errors
* Fix some audit issues
* Make verify RO
* Added verify and plugin verify hooks
* Update plugin testing framework
* Update designs and macros

parent b065b76130
commit af9ac8f662
@@ -5,6 +5,8 @@ name = "rsidm"
version = "0.1.0"
authors = ["William Brown <william@blackhats.net.au>"]
default-run = "rsidm_core"
#
# edition = "2018"

# We need three major binaries. The server itself, the unix client, and a cli
147	designs/auth.rst
@@ -1,6 +1,141 @@
Auth Summary
------------

Authentication Use Cases
------------------------

There are many planned integrations for authentication for a service like this. The use cases
for the kinds of auth are below. It's important to consider that today a lot of identification
is not just who you are, but what device you are using, so device security is paramount in the
design of this system. We strongly recommend patching and full disk encryption, as well as
high quality webauthn tokens such as yubikeys or macOS touchid.

As a result, the most important parts of this system become the auditing and co-operation
between admins on high security events and changes, rather than limiting the lifetime of
credentials. An important part of this is also limiting the scope of a credential, rather than
just its lifetime.

Kanidm account system
=====================

The login screen is presented to the user. They are challenged for a series of credentials.
When they request an action of a certain privilege, they must re-provide the strongest
credential (ie webauthn token, totp). Some actions may require another account to sign off on
the action for it to persist.

This applies to web or cli usage.
SSO to websites
===============

The login screen is presented to the user. They are challenged for a series of credentials.
They are then able to select any supplemental permissions (if any) they wish to request for
the session, which may require further credentials. They are then redirected to the target
site with an appropriate (oauth) token describing the requested rights.

Login to workstation (connected)
================================

The user is prompted for a password and/or token auth. These are verified by the kanidm
server, and the login proceeds.

Login to workstation (disconnected)
===================================

The user must have pre-configured their account, after a successful authentication as above,
to support local password and token authentication. They are then able to provide 2fa when
disconnected from the network.

Sudo on workstation
===================

This is a re-use of the above two scenarios.

Access to VPN or Wifi
=====================

The user provides their password, OR they provide a distinct network access password which
allows them access.

MFA could be provided here with TOTP?

SSH to machine (legacy, disconnected)
=====================================

The user pre-enrolls their SSH key to their account via the kanidm console. They are then able
to ssh to the machine as usual with their key. SUDO rights are granted via password only once
they are connected (see sudo on workstation).

Agent forwarding is a concern in this scenario, to limit credential scope and lateral
movement. Can it be limited correctly? IMO no, so don't allow it.
SSH to machine
==============

The user calls a special kanidm ssh command. This generates a once-off ssh key, and an
authentication request is lodged with the system. Based on policy, the user may need to allow
the request via a web console, or another user may need to sign off to allow the access. Once
granted, the module then allows the authentication to continue, the ephemeral key is allowed
access, and the login completes. The key may only be valid for a short time.

Agent forwarding is not a concern in this scenario, because the key is only allowed to be used
for this specific host.

SSH via a bastion host
======================

This would work like the ssh to machine scenario, but in this case the key is granted rights
to both the bastion and the target machine so that agent forwarding can work.

Is there a way to ensure that only this series of jumps is allowed?
Additionally:

* Support services must be able to assist in an account recovery situation
* Some sites may wish to allow self-sign-up for accounts
* Some sites may want self-supporting account recovery
* Accounts should support ephemeral or group-requests
Implementation ideas for use cases
----------------------------------

* For identification:

  * Issue "ID tokens" as an api where you look up a name/uuid and get the user entry + sshkeys
    + group entries. This allows one-shot caching of the relevant types, and groups would not
    store the member link on the client. It allows the client to "cache" any extra details
    into the stored record as required. This would be used for linux/mac to get uid/gid
    details and ssh keys for distribution.

    * Would inherit search permissions for the connection.
    * Some service accounts with permission would get the ntpassword field in this for radius.
    * Hosts can use anonymous or have a service account.
    * Allows cached/disconnected auth.
    * Needs to be checked periodically for validity (IE account revoke).

* For authentication:

  * Cookie/Auth proto - this is for checking pw's and mfa details as required from clients,
    both web, cli and pam. This is probably the most important and core proto, as everything
    else will derive from this session in some way.

    * Must have a max lifetime, or a refresh time up to the max life, to allow revoking.
    * If you want to "gain" higher privs, you need to auth-up to the shadow account's extra
      requirements.
    * You would then have two ID's associated, which may have different lifetimes?

  * SSH key distribution via the ID tokens (this is great for offline / disconnected auth ...).

    * Clients can add password hashes to the ID tokens on successful auth.

  * Request based auth proto - a service account creates an auth request, which then must be
    acknowledged by the correct kanidm api, and when acknowledged the authentication can
    proceed.

  * OAuth - this would issue a different token type as required, with the right details
    embedded as requested.

  * Another idea: a cli tool that says "I want to login", which generates an ephemeral key
    that only works on that host, for that identity, with those specific roles you have
    requested.

Authorisation is a client-specific issue; we just need to provide the correct metadata for
each client to be able to construct correct authorisations.

Cookie/Token Auth Summary
-------------------------

* auth is a stepped protocol (similar to SASL)
* we offer possible authentications
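The "ID token" idea above can be sketched as a plain client-side structure. This is a minimal sketch only; all field names (`ssh_keys`, `uid`, `gid`, etc.) are illustrative assumptions, not kanidm's actual token format:

```rust
// Hypothetical shape of a one-shot "ID token" a client could cache:
// identity details plus ssh keys and group names resolved server-side,
// with optional extras (eg unix uid/gid) the client fills in as needed.
#[derive(Debug, Clone, PartialEq)]
pub struct IdToken {
    pub name: String,
    pub uuid: String,
    pub ssh_keys: Vec<String>,
    pub groups: Vec<String>,
    // Extra details the client may cache locally for unix resolution.
    pub uid: Option<u32>,
    pub gid: Option<u32>,
}

impl IdToken {
    // Client-side check: can this cached token answer a unix login lookup?
    pub fn has_unix_extensions(&self) -> bool {
        self.uid.is_some() && self.gid.is_some()
    }
}
```

Because groups arrive embedded in the token rather than as member links, the client never needs a second query to resolve membership, which is what makes the one-shot caching work offline.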
@@ -16,16 +151,16 @@ decisions from a single datapoint
* each token can be unique based on the type of auth (ie 2fa needed to get access
  to admin groups)

Auth Considerations
-------------------
Cookie/Token Auth Considerations
--------------------------------

* Must prevent replay attacks from occurring at any point during the authentication process

* Minimise (but not eliminate) state on the server. This means that an auth process must
  remain on a single server, but the token granted should be valid on any server.

Auth Detail
-----------
Cookie/Token Auth Detail
------------------------

Clients begin with no cookie, and no session.
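The stepped, SASL-like exchange described above can be sketched as a small state machine: the server tracks which credential steps remain, and only issues a token when all are satisfied. This is a hedged sketch with hypothetical names, not the protocol kanidm implements:

```rust
// A SASL-like stepped authentication: the server offers the remaining
// credential steps after each client answer, and the session succeeds
// only once every required step has been satisfied.
#[derive(Debug, Clone, PartialEq)]
pub enum CredStep {
    Password,
    Totp,
    Webauthn,
}

#[derive(Debug)]
pub enum AuthState {
    // More steps are still required; the server re-offers them.
    Continue(Vec<CredStep>),
    // All steps satisfied; a token/cookie would be issued here.
    Success,
}

pub struct AuthSession {
    remaining: Vec<CredStep>,
}

impl AuthSession {
    pub fn new(required: Vec<CredStep>) -> Self {
        AuthSession { remaining: required }
    }

    // Accept one satisfied credential and report what is still owed.
    pub fn step(&mut self, provided: CredStep) -> AuthState {
        self.remaining.retain(|c| *c != provided);
        if self.remaining.is_empty() {
            AuthState::Success
        } else {
            AuthState::Continue(self.remaining.clone())
        }
    }
}
```

Because the `AuthSession` holds the remaining steps, the in-progress state lives on one server, while the token issued at `Success` can be validated anywhere, matching the "minimise but not eliminate state" consideration.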
@@ -9,7 +9,7 @@ use serde_json;

use audit::AuditScope;
use entry::{Entry, EntryCommitted, EntryNew, EntryValid};
use error::OperationError;
use error::{ConsistencyError, OperationError};
use filter::{Filter, FilterValid};

mod idl;

@@ -148,6 +148,10 @@ pub trait BackendReadTransaction {
            }
        }
    }

    fn verify(&self) -> Vec<Result<(), ConsistencyError>> {
        Vec::new()
    }
}

impl Drop for BackendTransaction {
@@ -423,6 +423,10 @@ impl Entry<EntryValid, EntryCommitted> {
            attrs: attrs_new,
        }
    }

    pub fn get_id(&self) -> i64 {
        self.id.expect("ID corrupted!?!?")
    }
}

impl<STATE> Entry<EntryValid, STATE> {
@@ -18,6 +18,7 @@ pub enum OperationError {
    EmptyRequest,
    Backend,
    NoMatchingEntries,
    ConsistencyError(Vec<Result<(), ConsistencyError>>),
    SchemaViolation(SchemaError),
    Plugin,
    FilterGeneration,

@@ -28,3 +29,14 @@ pub enum OperationError {
    BackendEngine,
    SQLiteError, //(RusqliteError)
}

#[derive(Serialize, Deserialize, Debug, PartialEq)]
pub enum ConsistencyError {
    Unknown,
    // Class, Attribute
    SchemaClassMissingAttribute(String, String),
    QueryServerSearchFailure,
    EntryUuidCorrupt(i64),
    UuidIndexCorrupt(String),
    UuidNotUnique(String),
}
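The `Vec<Result<(), ConsistencyError>>` shape used throughout this commit can be collapsed to a single result with `collect`, which is exactly what the `validate_schema!` test macro later in this diff does. A small standalone illustration (with a trimmed-down error enum for brevity):

```rust
// Collapsing a list of per-check results into one overall result:
// collecting Vec<Result<(), E>> into Result<Vec<()>, E> stops at the
// first Err, mirroring the validate_schema! macro in this commit.
#[derive(Debug, PartialEq)]
pub enum ConsistencyError {
    Unknown,
    UuidNotUnique(String),
}

pub fn collapse(results: Vec<Result<(), ConsistencyError>>) -> Result<(), ConsistencyError> {
    let r: Result<Vec<()>, ConsistencyError> = results.into_iter().collect();
    r.map(|_| ())
}
```

Keeping the per-check `Vec` as the primary type lets `verify` report every failed check at once, while callers that only need pass/fail can collapse it like this.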
@@ -41,6 +41,7 @@ mod event;
mod identity;
mod interval;
mod modify;
#[macro_use]
mod plugins;
mod schema;
mod server;
@@ -4,11 +4,11 @@ use uuid::Uuid;
use audit::AuditScope;
use be::{BackendReadTransaction, BackendWriteTransaction};
use entry::{Entry, EntryInvalid, EntryNew};
use error::OperationError;
use error::{ConsistencyError, OperationError};
use event::CreateEvent;
use filter::Filter;
use schema::SchemaWriteTransaction;
use server::{QueryServerReadTransaction, QueryServerWriteTransaction};
use server::{QueryServerReadTransaction, QueryServerTransaction, QueryServerWriteTransaction};

// TO FINISH
/*
@@ -124,12 +124,86 @@ impl Plugin for Base {

        Ok(())
    }

    fn verify(
        au: &mut AuditScope,
        qs: &QueryServerTransaction,
    ) -> Vec<Result<(), ConsistencyError>> {
        let name_uuid = String::from("uuid");
        // Verify all uuid's are unique?
        // Probably the literally worst thing ...

        // Search for class = *
        let entries = match qs.internal_search(au, Filter::Pres("class".to_string())) {
            Ok(r) => r,
            Err(e) => {
                // try_audit?
                // TODO: So here, should be returning the oper error? That makes the errors
                // recursive, so what is correct?
                return vec![Err(ConsistencyError::QueryServerSearchFailure)];
            }
        };

        let mut r_uniq = entries
            .iter()
            // do an exists checks on the uuid
            .map(|e| {
                // TODO: Could this be better?
                let uuid = match e.get_ava(&name_uuid) {
                    Some(u) => {
                        if u.len() == 1 {
                            Ok(u.first().expect("Ohh ffs, really?").clone())
                        } else {
                            Err(ConsistencyError::EntryUuidCorrupt(e.get_id()))
                        }
                    }
                    None => Err(ConsistencyError::EntryUuidCorrupt(e.get_id())),
                };

                match uuid {
                    Ok(u) => {
                        let filt = Filter::Eq(name_uuid.clone(), u.clone());
                        match qs.internal_search(au, filt) {
                            Ok(r) => {
                                if r.len() == 0 {
                                    Err(ConsistencyError::UuidIndexCorrupt(u))
                                } else if r.len() == 1 {
                                    Ok(())
                                } else {
                                    Err(ConsistencyError::UuidNotUnique(u))
                                }
                            }
                            Err(_) => Err(ConsistencyError::QueryServerSearchFailure),
                        }
                    }
                    Err(e) => Err(e),
                }
            })
            .filter(|v| v.is_err())
            .collect();

        /*
        let mut r_name = entries.iter()
            // do an eq internal search and validate == 1 (ignore ts + rc)
            .map(|e| {
            })
            .filter(|v| {
                v.is_err()
            })
            .collect();

        r_uniq.append(r_name);
        */

        r_uniq
    }
}

#[cfg(test)]
mod tests {
    use super::super::Plugin;
    use super::Base;
    #[macro_use]
    use plugins::Plugin;
    use plugins::base::Base;
    use std::sync::Arc;

    use audit::AuditScope;
@@ -139,54 +213,6 @@ mod tests {
    use schema::Schema;
    use server::{QueryServer, QueryServerWriteTransaction};

    macro_rules! run_pre_create_test {
        (
            $preload_entries:ident,
            $create_entries:ident,
            $ident:ident,
            $internal:ident,
            $test_fn:expr
        ) => {{
            let mut au = AuditScope::new("run_pre_create_test");
            audit_segment!(au, || {
                // Create an in memory BE
                let be = Backend::new(&mut au, "").unwrap();

                let schema_outer = Schema::new(&mut au).unwrap();
                {
                    let mut schema = schema_outer.write();
                    schema.bootstrap_core(&mut au).unwrap();
                    schema.commit().unwrap();
                }
                let qs = QueryServer::new(be, Arc::new(schema_outer));

                if !$preload_entries.is_empty() {
                    let qs_write = qs.write();
                    qs_write.internal_create(&mut au, $preload_entries);
                    assert!(qs_write.commit(&mut au).is_ok());
                }

                let ce = CreateEvent::from_vec($create_entries.clone());

                let mut au_test = AuditScope::new("pre_create_test");
                {
                    let qs_write = qs.write();
                    audit_segment!(au_test, || $test_fn(
                        &mut au_test,
                        &qs_write,
                        &mut $create_entries,
                        &ce,
                    ));
                    assert!(qs_write.commit(&mut au).is_ok());
                }

                au.append_scope(au_test);
            });
            // Dump the raw audit log.
            println!("{}", au);
        }};
    }

    // Check empty create
    #[test]
    fn test_pre_create_empty() {
6	src/lib/plugins/failure.rs (new file)
@@ -0,0 +1,6 @@
// Failure inducing plugin
//
// Designed for complex server tests, this plugin is able to look at Event
// metadata and induce failures in various stages of query server operation
// execution. The idea is that we should be able to test and assert that
// rollback events do not have negative effects on various server elements.
66	src/lib/plugins/macros.rs (new file)
@@ -0,0 +1,66 @@
#[macro_escape]
// Test helpers for all plugins.
#[macro_export]
macro_rules! run_pre_create_test {
    (
        $preload_entries:ident,
        $create_entries:ident,
        $ident:ident,
        $internal:ident,
        $test_fn:expr
    ) => {{
        let mut au = AuditScope::new("run_pre_create_test");
        audit_segment!(au, || {
            // Create an in memory BE
            let be = Backend::new(&mut au, "").unwrap();

            let schema_outer = Schema::new(&mut au).unwrap();
            {
                let mut schema = schema_outer.write();
                schema.bootstrap_core(&mut au).unwrap();
                schema.commit().unwrap();
            }
            let qs = QueryServer::new(be, Arc::new(schema_outer));

            if !$preload_entries.is_empty() {
                let qs_write = qs.write();
                qs_write.internal_create(&mut au, $preload_entries);
                assert!(qs_write.commit(&mut au).is_ok());
            }

            let ce = CreateEvent::from_vec($create_entries.clone());

            let mut au_test = AuditScope::new("pre_create_test");
            {
                let qs_write = qs.write();
                audit_segment!(au_test, || $test_fn(
                    &mut au_test,
                    &qs_write,
                    &mut $create_entries,
                    &ce,
                ));
                assert!(qs_write.commit(&mut au).is_ok());
            }
            // Make sure there are no errors.
            assert!(qs.verify(&mut au_test).len() == 0);

            au.append_scope(au_test);
        });
        // Dump the raw audit log.
        println!("{}", au);
    }};
}

/*
#[macro_export]
macro_rules! run_post_create_test {
}

#[macro_export]
macro_rules! run_post_modify_test {
}

#[macro_export]
macro_rules! run_post_delete_test {
}
*/
@@ -1,8 +1,13 @@
use audit::AuditScope;
use be::BackendReadTransaction;
use entry::{Entry, EntryCommitted, EntryInvalid, EntryNew, EntryValid};
use error::OperationError;
use error::{ConsistencyError, OperationError};
use event::{CreateEvent, DeleteEvent, ModifyEvent, SearchEvent};
use server::QueryServerWriteTransaction;
use schema::SchemaReadTransaction;
use server::{QueryServerReadTransaction, QueryServerTransaction, QueryServerWriteTransaction};

#[macro_use]
mod macros;

mod base;
mod failure;
@@ -60,12 +65,11 @@ trait Plugin {
        Ok(())
    }

    fn pre_search() -> Result<(), OperationError> {
        Ok(())
    }

    fn post_search() -> Result<(), OperationError> {
        Ok(())
    fn verify(
        _au: &mut AuditScope,
        _qs: &QueryServerTransaction,
    ) -> Vec<Result<(), ConsistencyError>> {
        Vec::new()
    }
}
@@ -82,7 +86,7 @@ macro_rules! run_pre_create_plugin {
        $target_plugin:ty
    ) => {{
        let mut audit_scope = AuditScope::new(<($target_plugin)>::id());
        let r = audit_segment!(audit_scope, || <($target_plugin)>::pre_create(
        let mut r = audit_segment!(audit_scope, || <($target_plugin)>::pre_create(
            &mut audit_scope,
            $qs,
            $cand,
@@ -93,6 +97,23 @@ macro_rules! run_pre_create_plugin {
    }};
}

macro_rules! run_verify_plugin {
    (
        $au:ident,
        $qs:ident,
        $results:expr,
        $target_plugin:ty
    ) => {{
        let mut audit_scope = AuditScope::new(<($target_plugin)>::id());
        let mut r = audit_segment!(audit_scope, || <($target_plugin)>::verify(
            &mut audit_scope,
            $qs,
        ));
        $results.append(&mut r);
        $au.append_scope(audit_scope);
    }};
}

impl Plugins {
    pub fn run_pre_create(
        au: &mut AuditScope,

@@ -156,16 +177,13 @@ impl Plugins {
        Ok(())
    }

    pub fn run_pre_search(au: &mut AuditScope) -> Result<(), OperationError> {
        Ok(())
    }

    pub fn run_post_search(au: &mut AuditScope) -> Result<(), OperationError> {
        Ok(())
    pub fn run_verify(
        au: &mut AuditScope,
        qs: &QueryServerTransaction,
    ) -> Vec<Result<(), ConsistencyError>> {
        let mut results = Vec::new();
        run_verify_plugin!(au, qs, &mut results, base::Base);
        run_verify_plugin!(au, qs, &mut results, refint::ReferentialIntegrity);
        results
    }
}

// We should define the order that plugins should run

// How do we deal with plugin activation? Config?
// What do plugins default to?
55	src/lib/plugins/refint.rs (new file)
@@ -0,0 +1,55 @@
// Referential Integrity
//
// Given an entry, modification or change, ensure that all referential links
// in the database are maintained. IE there are no dangling references that
// are unable to be resolved, as this may cause errors in Item -> ProtoItem
// translation.
//
// It will be important to understand the interaction of this plugin with memberof
// when that is written, as they *both* manipulate and alter entry reference
// data, so we should be careful not to step on each other.

use audit::AuditScope;
use entry::{Entry, EntryCommitted, EntryNew, EntryValid};
use error::OperationError;
use event::{CreateEvent, DeleteEvent, ModifyEvent, SearchEvent};
use plugins::Plugin;
use server::QueryServerWriteTransaction;

pub struct ReferentialIntegrity;

impl Plugin for ReferentialIntegrity {
    fn id() -> &'static str {
        "referential_integrity"
    }

    fn post_create(
        _au: &mut AuditScope,
        _qs: &QueryServerWriteTransaction,
        _cand: &Vec<Entry<EntryValid, EntryNew>>,
        _ce: &CreateEvent,
    ) -> Result<(), OperationError> {
        Ok(())
    }

    fn post_modify(
        _au: &mut AuditScope,
        _qs: &QueryServerWriteTransaction,
        _cand: &Vec<Entry<EntryValid, EntryCommitted>>,
        _ce: &ModifyEvent,
    ) -> Result<(), OperationError> {
        Ok(())
    }

    fn post_delete(
        _au: &mut AuditScope,
        _qs: &QueryServerWriteTransaction,
        _cand: &Vec<Entry<EntryValid, EntryCommitted>>,
        _ce: &DeleteEvent,
    ) -> Result<(), OperationError> {
        Ok(())
    }
}

#[cfg(test)]
mod tests {}
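The dangling-reference check that the refint header comment describes can be sketched independently of the plugin API. A minimal sketch over plain maps, with all names and the "member uuid list" representation being illustrative assumptions, not kanidm's real entry model:

```rust
use std::collections::{HashMap, HashSet};

// A toy referential-integrity check: every "member" reference on an
// entry must resolve to an existing entry uuid; anything else is a
// dangling reference that would break Item -> ProtoItem translation.
pub fn find_dangling_refs(
    entries: &HashMap<String, Vec<String>>, // entry uuid -> referenced uuids
) -> Vec<(String, String)> {
    let known: HashSet<&String> = entries.keys().collect();
    let mut dangling = Vec::new();
    for (uuid, members) in entries {
        for m in members {
            if !known.contains(m) {
                // (referring entry, unresolvable reference)
                dangling.push((uuid.clone(), m.clone()));
            }
        }
    }
    dangling
}
```

In the plugin itself this check would run inside `verify`, returning one `ConsistencyError` per dangling pair rather than tuples.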
@@ -87,7 +87,14 @@ impl QueryServerV1 {
            // TODO: Trigger an index? This could be costly ...
            // Perhaps a config option to say if we index on startup or not.
            // TODO: Check the results!
            .and_then(|_| schema_write.validate(&mut audit))
            .and_then(|_| {
                let r = schema_write.validate(&mut audit);
                if r.len() == 0 {
                    Ok(())
                } else {
                    Err(OperationError::ConsistencyError(r))
                }
            })
            .and_then(|_| be_txn.commit())
            .and_then(|_| schema_write.commit())
        {
@@ -1,11 +1,8 @@
use super::audit::AuditScope;
use super::constants::*;
// use super::entry::Entry;
use super::error::{OperationError, SchemaError};
// use super::filter::Filter;
use std::collections::HashMap;
// use modify::ModifyList;
use audit::AuditScope;
use constants::*;
use error::{ConsistencyError, OperationError, SchemaError};
use regex::Regex;
use std::collections::HashMap;
use std::convert::TryFrom;
use std::str::FromStr;
use uuid::Uuid;
@@ -306,7 +303,7 @@ pub struct SchemaInner {
pub trait SchemaReadTransaction {
    fn get_inner(&self) -> &SchemaInner;

    fn validate(&self, audit: &mut AuditScope) -> Result<(), OperationError> {
    fn validate(&self, audit: &mut AuditScope) -> Vec<Result<(), ConsistencyError>> {
        self.get_inner().validate(audit)
    }
@@ -644,14 +641,16 @@ impl SchemaInner {
            },
        );

        match s.validate(&mut au) {
            Ok(_) => Ok(s),
            Err(e) => Err(e),
        // TODO: Probably needs to do a collect.
        let r = s.validate(&mut au);
        if r.len() == 0 {
            Ok(s)
        } else {
            Err(OperationError::ConsistencyError(r))
        }
    });

    audit.append_scope(au);

    r
}
@@ -855,10 +854,15 @@ impl SchemaInner {

        audit.append_scope(au);

        r
        if r.len() == 0 {
            Ok(())
        } else {
            Err(OperationError::ConsistencyError(r))
        }
    }

    pub fn validate(&self, audit: &mut AuditScope) -> Result<(), OperationError> {
    pub fn validate(&self, audit: &mut AuditScope) -> Vec<Result<(), ConsistencyError>> {
        let mut res = Vec::new();
        // TODO: Does this need to validate anything further at all? The UUID
        // will be checked as part of the schema migration on startup, so I think
        // just that all the content is sane is fine.
@@ -873,14 +877,20 @@ impl SchemaInner {
                a
            );
            if !self.attributes.contains_key(a) {
                return Err(OperationError::SchemaViolation(SchemaError::Corrupted));
                res.push(Err(ConsistencyError::SchemaClassMissingAttribute(
                    class.name.clone(),
                    a.clone(),
                )))
            }
        }
        for a in &class.may {
            // report the attribute.
            audit_log!(audit, "validate may class:attr -> {}:{}", class.name, a);
            if !self.attributes.contains_key(a) {
                return Err(OperationError::SchemaViolation(SchemaError::Corrupted));
                res.push(Err(ConsistencyError::SchemaClassMissingAttribute(
                    class.name.clone(),
                    a.clone(),
                )))
            }
        }
        for a in &class.systemmust {
@@ -892,19 +902,25 @@ impl SchemaInner {
                a
            );
            if !self.attributes.contains_key(a) {
                return Err(OperationError::SchemaViolation(SchemaError::Corrupted));
                res.push(Err(ConsistencyError::SchemaClassMissingAttribute(
                    class.name.clone(),
                    a.clone(),
                )))
            }
        }
        for a in &class.must {
            // report the attribute.
            audit_log!(audit, "validate must class:attr -> {}:{}", class.name, a);
            if !self.attributes.contains_key(a) {
                return Err(OperationError::SchemaViolation(SchemaError::Corrupted));
                res.push(Err(ConsistencyError::SchemaClassMissingAttribute(
                    class.name.clone(),
                    a.clone(),
                )))
            }
        }
    }

    Ok(())
    res
}

// Normalise *does not* validate.
@@ -990,17 +1006,25 @@ impl Schema {

#[cfg(test)]
mod tests {
    use super::super::audit::AuditScope;
    use super::super::constants::*;
    use super::super::entry::{Entry, EntryInvalid, EntryNew, EntryValid};
    use super::super::error::SchemaError;
    use super::super::filter::{Filter, FilterValid};
    use super::{IndexType, Schema, SchemaAttribute, SyntaxType};
    use audit::AuditScope;
    use constants::*;
    use entry::{Entry, EntryInvalid, EntryNew, EntryValid};
    use error::{ConsistencyError, SchemaError};
    use filter::{Filter, FilterValid};
    use schema::SchemaReadTransaction;
    use schema::{IndexType, Schema, SchemaAttribute, SyntaxType};
    use serde_json;
    use std::convert::TryFrom;
    use uuid::Uuid;

    macro_rules! validate_schema {
        ($sch:ident, $au:expr) => {{
            // Turns into a result type
            let r: Result<Vec<()>, ConsistencyError> = $sch.validate($au).into_iter().collect();
            assert!(r.is_ok());
        }};
    }

    #[test]
    fn test_schema_index_tryfrom() {
        let r1 = IndexType::try_from("EQUALITY");
@@ -1195,7 +1219,7 @@ mod tests {
        let mut audit = AuditScope::new("test_schema_simple");
        let schema = Schema::new(&mut audit).unwrap();
        let schema_ro = schema.read();
        assert!(schema_ro.validate(&mut audit).is_ok());
        validate_schema!(schema_ro, &mut audit);
        println!("{}", audit);
    }
@@ -10,7 +10,7 @@ use be::{

use constants::{JSON_ANONYMOUS_V1, JSON_SYSTEM_INFO_V1};
use entry::{Entry, EntryCommitted, EntryInvalid, EntryNew, EntryValid};
use error::{OperationError, SchemaError};
use error::{ConsistencyError, OperationError, SchemaError};
use event::{CreateEvent, DeleteEvent, ExistsEvent, ModifyEvent, ReviveRecycledEvent, SearchEvent};
use filter::{Filter, FilterInvalid};
use modify::{Modify, ModifyInvalid, ModifyList};
@@ -52,6 +52,7 @@ pub trait QueryServerReadTransaction {

    // TODO: Assert access control allows the filter and requested attrs.

    /*
    let mut audit_plugin_pre = AuditScope::new("plugin_pre_search");
    let plug_pre_res = Plugins::run_pre_search(&mut audit_plugin_pre);
    au.append_scope(audit_plugin_pre);

@@ -63,6 +64,7 @@ pub trait QueryServerReadTransaction {
            return Err(e);
        }
    }
    */

    let mut audit_be = AuditScope::new("backend_search");
    let res = self

@@ -76,6 +78,7 @@ pub trait QueryServerReadTransaction {
        return res;
    }

    /*
    let mut audit_plugin_post = AuditScope::new("plugin_post_search");
    let plug_post_res = Plugins::run_post_search(&mut audit_plugin_post);
    au.append_scope(audit_plugin_post);

@@ -87,6 +90,7 @@ pub trait QueryServerReadTransaction {
            return Err(e);
        }
    }
    */

    // TODO: We'll add ACI here. I think ACI should transform from
    // internal -> proto entries since we have to anyway ...
@@ -297,6 +301,22 @@ pub trait QueryServerReadTransaction {
            // to a concrete uuid.
            Ok(_) => {
                // TODO: Should this check existence?
                // Could this be a security risk for disclosure?
                // It would only reveal if a name/uuid did/did not exist,
                // because this is a pre-acp check; but inversely, this means if we
                // fail fast here, we would not have a situation where we would create
                // and then ref-int would invalidate the structure immediately.
                //
                // I can see a situation where you would modify, and then immediately
                // have the mod removed because it would fail the refint (IE add
                // raw uuid X, then immediately it's removed)
                //
                // This would never be the case with resolved uuid's though, because
                // they are inside the txn. So do we just ignore this as an edge case?
                //
                // For now, refint will fight the raw uuid's, and will be tested to
                // assume they don't exist on create/mod/etc.. If this check was added
                // then refint may not need post_create handlers.
                Ok(value.clone())
            }
            Err(_) => {
@@ -353,6 +373,55 @@ impl QueryServerReadTransaction for QueryServerTransaction {
     }
 }
 
+impl QueryServerTransaction {
+    // Verify the data content of the server is as expected. This will probably
+    // call various functions for validation, including possibly plugin
+    // verifications.
+    fn verify(&self, au: &mut AuditScope) -> Vec<Result<(), ConsistencyError>> {
+        let mut audit = AuditScope::new("verify");
+
+        // If we fail after backend, we need to return NOW because we can't
+        // assert any other faith in the DB states.
+        // * backend
+        let be_errs = self.get_be_txn().verify();
+
+        if be_errs.len() != 0 {
+            au.append_scope(audit);
+            return be_errs;
+        }
+
+        // * in memory schema consistency.
+        let sc_errs = self.get_schema().validate(&mut audit);
+
+        if sc_errs.len() != 0 {
+            au.append_scope(audit);
+            return sc_errs;
+        }
+
+        // * Indexing (req be + sch )
+        /*
+        idx_errs = self.get_be_txn()
+            .verify_indexes();
+
+        if idx_errs.len() != 0 {
+            au.append_scope(audit);
+            return idx_errs;
+        }
+        */
+
+        // Ok BE passed, lets move on to the content.
+        // Most of our checks are in the plugins, so we let them
+        // do their job.
+
+        // Now, call the plugins verification system.
+        let pl_errs = Plugins::run_verify(&mut audit, self);
+
+        // Finish up ...
+        au.append_scope(audit);
+        pl_errs
+    }
+}
+
 pub struct QueryServerWriteTransaction<'a> {
     committed: bool,
     // be_write_txn: BackendWriteTransaction,
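The `verify` function above runs its checks in layers — backend, then schema, then plugins — and bails out as soon as one layer reports errors, since nothing after a broken layer can be trusted. The pattern can be sketched in isolation; this is a standalone illustration, not kanidm code, and the `String` error type and layer closures are stand-ins for the real `ConsistencyError` and verification calls:

```rust
// Standalone sketch of the staged-verification pattern used by `verify`:
// run each layer in order, and stop at the first layer that reports errors,
// because later layers cannot be trusted once an earlier one is inconsistent.
fn verify_layers(
    layers: Vec<Box<dyn Fn() -> Vec<Result<(), String>>>>,
) -> Vec<Result<(), String>> {
    for layer in layers {
        let errs = layer();
        if !errs.is_empty() {
            // An earlier layer failed: return its errors immediately.
            return errs;
        }
    }
    // All layers passed.
    Vec::new()
}

fn main() {
    // Backend passes, schema fails: schema's errors come back, plugins never run.
    let result = verify_layers(vec![
        Box::new(|| Vec::new()), // backend: clean
        Box::new(|| vec![Err("schema inconsistent".to_string())]), // schema: broken
        Box::new(|| -> Vec<Result<(), String>> { panic!("plugins must not run") }),
    ]);
    assert_eq!(result.len(), 1);
    println!("errors: {}", result.len());
}
```

The same shape also explains why the backend check returns early rather than accumulating: a schema validation over a corrupt backend would only produce misleading extra errors.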
@@ -411,6 +480,11 @@ impl QueryServer {
             schema: self.schema.write(),
         }
     }
+
+    pub fn verify(&self, au: &mut AuditScope) -> Vec<Result<(), ConsistencyError>> {
+        let r_txn = self.read();
+        r_txn.verify(au)
+    }
 }
 
 impl<'a> QueryServerWriteTransaction<'a> {
@@ -1007,20 +1081,21 @@ impl<'a> QueryServerWriteTransaction<'a> {
         assert!(!committed);
         // Begin an audit.
         // Validate the schema,
-        schema
-            .validate(audit)
+        let r = schema.validate(audit);
+        if r.len() == 0 {
             // TODO: At this point, if validate passes, we probably actually want
             // to perform a reload BEFORE we commit.
             // Alternate, we attempt to reload during batch ops, but this seems
             // costly.
-            .and_then(|_| {
-                // Backend Commit
-                be_txn.commit().and_then(|_| {
-                    // Schema commit: Since validate passed and be is good, this
-                    // must now also be good.
-                    schema.commit()
-                })
+            be_txn.commit().and_then(|_| {
+                // Schema commit: Since validate passed and be is good, this
+                // must now also be good.
+                schema.commit()
             })
+        } else {
+            Err(OperationError::ConsistencyError(r))
+        }
         // Audit done
     }
 }
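The commit change above switches from a chained `and_then` (where `validate` had to return a single `Result`) to collecting *all* consistency errors first and committing only when the list is empty. A minimal standalone sketch of that flow, with `String` standing in for the real error type:

```rust
// Sketch of the validate-then-commit flow: validation collects a list of
// consistency errors, and the commit proceeds only when that list is empty;
// otherwise every collected error is handed back to the caller at once.
#[derive(Debug, PartialEq)]
enum OperationError {
    ConsistencyError(Vec<String>),
}

fn commit(validation_errors: Vec<String>) -> Result<(), OperationError> {
    if validation_errors.is_empty() {
        // Validation passed: the backend commit, then the schema commit,
        // would run here.
        Ok(())
    } else {
        // Refuse to commit and report every error that was found.
        Err(OperationError::ConsistencyError(validation_errors))
    }
}

fn main() {
    assert_eq!(commit(Vec::new()), Ok(()));
    assert!(commit(vec!["missing class".to_string()]).is_err());
    println!("commit flow ok");
}
```

The advantage over the old chained style is that a failed validation surfaces the complete error list instead of only the first failure.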
@@ -1072,14 +1147,16 @@ mod tests {
         }
         let test_server = QueryServer::new(be, Arc::new(schema_outer));
 
-        $test_fn(test_server, &mut audit);
+        $test_fn(&test_server, &mut audit);
         // Any needed teardown?
+        // Make sure there are no errors.
+        assert!(test_server.verify(&mut audit).len() == 0);
     }};
 }
 
     #[test]
     fn test_qs_create_user() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             let server_txn = server.write();
             let filt = Filter::Pres(String::from("name"));
 
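The macro change above is the reason every test closure now takes `&QueryServer` instead of `QueryServer`: the harness only lends the server to the test body, so it can still call `verify` on it afterwards. A minimal stand-in for that pattern (hypothetical names, not the real `run_test!` macro):

```rust
// Minimal stand-in for the run_test! pattern: build a server value, lend it
// to the test body by reference, then check the server is still consistent
// after the body has run. Borrowing (rather than moving) the server is what
// makes the trailing verify call possible.
struct TestServer {
    entries: Vec<String>,
}

impl TestServer {
    // Pretend consistency check: no entry may be empty.
    fn verify(&self) -> Vec<Result<(), String>> {
        self.entries
            .iter()
            .filter(|e| e.is_empty())
            .map(|_| Err("empty entry".to_string()))
            .collect()
    }
}

fn run_test(test_fn: impl Fn(&TestServer)) {
    let server = TestServer {
        entries: vec!["admin".to_string()],
    };
    // The body only borrows the server...
    test_fn(&server);
    // ...so the harness can still assert consistency afterwards.
    assert!(server.verify().is_empty());
}

fn main() {
    run_test(|server| assert_eq!(server.entries.len(), 1));
    println!("ok");
}
```

Moving the server into the closure (the old signature) would have dropped it before the harness could run its post-test verification.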
@@ -1123,7 +1200,7 @@ mod tests {
 
     #[test]
     fn test_qs_init_idempotent_1() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             {
                 // Setup and abort.
                 let server_txn = server.write();
@@ -1151,7 +1228,7 @@ mod tests {
 
     #[test]
     fn test_qs_modify() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             // Create an object
             let server_txn = server.write();
 
@@ -1268,7 +1345,7 @@ mod tests {
     fn test_modify_invalid_class() {
         // Test modifying an entry and adding an extra class, that would cause the entry
         // to no longer conform to schema.
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             let server_txn = server.write();
 
             let e1: Entry<EntryInvalid, EntryNew> = serde_json::from_str(
@@ -1336,7 +1413,7 @@ mod tests {
 
     #[test]
     fn test_qs_delete() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             // Create
             let server_txn = server.write();
 
@@ -1421,7 +1498,7 @@ mod tests {
 
     #[test]
     fn test_qs_tombstone() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             let server_txn = server.write();
 
             let filt_ts = ProtoFilter::Eq(String::from("class"), String::from("tombstone"));
@@ -1497,7 +1574,7 @@ mod tests {
 
     #[test]
     fn test_qs_recycle_simple() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             let server_txn = server.write();
 
             let filt_rc = ProtoFilter::Eq(String::from("class"), String::from("recycled"));
@@ -1628,7 +1705,7 @@ mod tests {
     // The delete test above should be unaffected by recycle anyway
     #[test]
     fn test_qs_recycle_advanced() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             // Create items
             let server_txn = server.write();
 
@@ -1673,7 +1750,7 @@ mod tests {
 
     #[test]
     fn test_qs_name_to_uuid() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             let server_txn = server.write();
 
             let e1: Entry<EntryInvalid, EntryNew> = serde_json::from_str(
@@ -1711,7 +1788,7 @@ mod tests {
 
     #[test]
     fn test_qs_uuid_to_name() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             let server_txn = server.write();
 
             let e1: Entry<EntryInvalid, EntryNew> = serde_json::from_str(
@@ -1752,7 +1829,7 @@ mod tests {
 
     #[test]
     fn test_qs_clone_value() {
-        run_test!(|server: QueryServer, audit: &mut AuditScope| {
+        run_test!(|server: &QueryServer, audit: &mut AuditScope| {
             let server_txn = server.write();
             let e1: Entry<EntryInvalid, EntryNew> = serde_json::from_str(
                 r#"{