Add docs for backup, restore, reindex and verify (#148)

Implements #136 document backup and restore. This adds documentation to the getting started guide for these actions, as well as for reindex and verify.
This commit is contained in:
Firstyear 2019-11-17 12:36:32 +10:00 committed by GitHub
parent 4de0d03eab
commit 44693be17a
4 changed files with 166 additions and 24 deletions


@@ -221,6 +221,92 @@ of the problem.
Note the radius container *is* configured to provide Tunnel-Private-Group-ID so if you wish to use
wifi assigned vlans on your infrastructure, you can assign these by groups in the config.ini.
# Backup and Restore
With any IDM software, it's important that you have the capability to restore in case of a disaster - be
that physical damage or a mistake. Kanidm supports backup and restore of the database with two methods.
## Method 1
Method 1 involves taking a backup of the database entry content, which is then re-indexed on restore.
This is the preferred method.
To take the backup (assuming our docker environment), you first need to stop the instance:

    docker stop <container name>
    docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
        firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd backup \
        /backup/kanidm.backup.json -D /data/kanidm.db
    docker start <container name>
You can then restart your instance. It's advised that you DO NOT modify the backup.json, as doing so
may introduce data errors into your instance.
To restore from the backup:

    docker stop <container name>
    docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
        firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd restore \
        /backup/kanidm.backup.json -D /data/kanidm.db
    docker start <container name>
That's it!
## Method 2
This is a simple backup of the data volume.

    docker stop <container name>
    # Backup your docker's volume folder
    docker start <container name>
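One way to back up the volume folder (a sketch - it assumes the volume is named kanidmd and that an alpine image is available as a helper) is to archive the volume contents into a tarball from a throwaway container:

```shell
# Sketch: archive the raw contents of the kanidmd volume into the current
# directory while the server container is stopped. Volume name is an assumption.
docker run --rm -v kanidmd:/data -v "$(pwd)":/backup alpine \
    tar -czf /backup/kanidmd-volume.tar.gz -C /data .
```

Restoring is the reverse: extract the tarball into a fresh volume before starting the server.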
# Reindexing after schema extension
In some (rare) cases you may need to reindex.
Please note the server will sometimes reindex on startup as a result of the project
changing its internal schema definitions. This is normal and expected - you may never need
to start a reindex yourself as a result!
You'll likely notice a need to reindex if you add indexes to the schema and see a message in your logs such as:

    Index EQUALITY name not found
    Index {type} {attribute} not found
This indicates that an index of type equality has been added for name, but the indexing process
has not been run. The server will continue to operate and the query execution code will correctly
process the query; however, it will not deliver the results optimally, as it must
disregard this part of the query and act as though it's un-indexed.
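A quick way to check a running instance for these messages (a sketch - the container name is an assumption) is to search the container logs:

```shell
# Sketch: look for "Index ... not found" warnings in the kanidmd container logs.
docker logs kanidmd 2>&1 | grep 'not found' | grep 'Index'
```

Any matching lines indicate indexes that exist in the schema but have not yet been built.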
Reindexing will resolve this by forcing all indexes to be recreated based on their schema
definitions (this works even though the schema is in the same database!):

    docker stop <container name>
    docker run --rm -i -t -v kanidmd:/data \
        firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd reindex \
        -D /data/kanidm.db
    docker start <container name>
Generally reindexing is a rare action and should not normally be required.
# Verification
The server ships with a number of verification utilities to ensure that data is consistent, such
as referential integrity or memberof.
Note that verification really is a last resort - the server does *a lot* to prevent and self-heal
from errors at run time, so you should rarely, if ever, require this utility. It was
developed to guarantee consistency during development!
You can run a verification with:

    docker stop <container name>
    docker run --rm -i -t -v kanidmd:/data \
        firstyear/kanidmd:latest /home/kanidm/target/release/kanidmd verify \
        -D /data/kanidm.db
    docker start <container name>
If you have errors, please contact the project so we can help you resolve them.
# Raw actions
The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers


@@ -392,6 +392,7 @@ pub trait BackendTransaction {
    }

    fn verify(&self) -> Vec<Result<(), ConsistencyError>> {
        // TODO: Implement this!!!
        Vec::new()
    }


@@ -1086,26 +1086,6 @@ fn idm_account_set_password(
    )
}
/*
fn test_resource(
(class, _req, _state): (Path<String>, HttpRequest<AppState> ,State<AppState>),
) -> String {
format!("Hello {:?}!", class)
}
// https://actix.rs/docs/extractors/
#[derive(Deserialize)]
struct RestResource {
class: String,
id: String,
}
fn test_resource_id(
(r, _req, _state): (Path<RestResource>, HttpRequest<AppState> ,State<AppState>),
) -> String {
format!("Hello {:?}/{:?}!", r.class, r.id)
}
*/
// === internal setup helpers

fn setup_backend(config: &Configuration) -> Result<Backend, OperationError> {
@@ -1243,6 +1223,72 @@ pub fn restore_server_core(config: Configuration, dst_path: &str) {
    };
}
pub fn reindex_server_core(config: Configuration) {
    let be = match setup_backend(&config) {
        Ok(be) => be,
        Err(e) => {
            error!("Failed to setup BE: {:?}", e);
            return;
        }
    };
    let mut audit = AuditScope::new("server_reindex");

    // First, we provide the in-memory schema so that core attrs are indexed correctly.
    let schema = match Schema::new(&mut audit) {
        Ok(s) => s,
        Err(e) => {
            error!("Failed to setup in memory schema: {:?}", e);
            std::process::exit(1);
        }
    };

    info!("Start Index Phase 1 ...");
    // Limit the scope of the schema txn.
    let idxmeta = { schema.write().get_idxmeta() };

    // Reindex only the core schema attributes to bootstrap the process.
    let be_wr_txn = be.write(idxmeta);
    let r = be_wr_txn
        .reindex(&mut audit)
        .and_then(|_| be_wr_txn.commit(&mut audit));

    // Now that's done, setup a minimal qs and reindex from that.
    if r.is_err() {
        debug!("{}", audit);
        error!("Failed to reindex database: {:?}", r);
        std::process::exit(1);
    }
    info!("Index Phase 1 Success!");

    info!("Attempting to init query server ...");
    let server_id = be.get_db_sid();

    let (qs, _idms) = match setup_qs_idms(&mut audit, be, server_id) {
        Ok(t) => t,
        Err(e) => {
            debug!("{}", audit);
            error!("Unable to setup query server or idm server -> {:?}", e);
            return;
        }
    };
    info!("Init Query Server Success!");

    info!("Start Index Phase 2 ...");
    let qs_write = qs.write();
    let r = qs_write
        .reindex(&mut audit)
        .and_then(|_| qs_write.commit(&mut audit));

    match r {
        Ok(_) => info!("Index Phase 2 Success!"),
        Err(e) => {
            error!("Reindex failed: {:?}", e);
            std::process::exit(1);
        }
    };
}
pub fn reset_sid_core(config: Configuration) {
    let mut audit = AuditScope::new("reset_sid_core");
    // Setup the be
@@ -1284,6 +1330,7 @@ pub fn verify_server_core(config: Configuration) {
    debug!("{}", audit);
    if r.len() == 0 {
        info!("Verification passed!");
        std::process::exit(0);
    } else {
        for er in r {


@@ -11,8 +11,8 @@ extern crate log;
use kanidm::config::Configuration;
use kanidm::core::{
    backup_server_core, create_server_core, recover_account_core, reindex_server_core,
    reset_sid_core, restore_server_core, verify_server_core,
};

use std::path::PathBuf;
@@ -80,13 +80,15 @@ enum Opt {
    RecoverAccount(RecoverAccountOpt),
    #[structopt(name = "reset_server_id")]
    ResetServerId(CommonOpt),
    #[structopt(name = "reindex")]
    Reindex(CommonOpt),
}
impl Opt {
    fn debug(&self) -> bool {
        match self {
            Opt::Server(sopt) => sopt.commonopts.debug,
            Opt::Verify(sopt) | Opt::ResetServerId(sopt) | Opt::Reindex(sopt) => sopt.debug,
            Opt::Backup(bopt) => bopt.commonopts.debug,
            Opt::Restore(ropt) => ropt.commonopts.debug,
            Opt::RecoverAccount(ropt) => ropt.commonopts.debug,
@@ -153,7 +155,7 @@ fn main() {
            restore_server_core(config, p);
        }
        Opt::Verify(vopt) => {
            info!("Running in db verification mode ...");
            config.update_db_path(&vopt.db_path);
            verify_server_core(config);
@@ -172,5 +174,11 @@ fn main() {
            config.update_db_path(&vopt.db_path);
            reset_sid_core(config);
        }
        Opt::Reindex(copt) => {
            info!("Running in reindex mode ...");
            config.update_db_path(&copt.db_path);
            reindex_server_core(config);
        }
    }
}