129 nsswitch stage 2 groups (#185)

Implements #129, adding the libnss_kanidm.so/dylib, and the related caching parts for properly handling these types.
This commit is contained in:
Firstyear 2020-02-15 10:57:25 +10:30 committed by GitHub
parent d063d358ad
commit 9de7d33293
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
35 changed files with 5662 additions and 155 deletions

Cargo.lock generated (new file, 3673 additions)
File diff suppressed because it is too large


@ -6,5 +6,7 @@ members = [
    "kanidm_client",
    "kanidm_tools",
    "kanidm_unix_int",
    "kanidm_unix_int/nss_kanidm",
    "kanidm_unix_int/pam_kanidm"
]


@ -1,5 +1,7 @@
* bump all cargo.toml versions
find kani* -name Cargo.toml -exec cat '{}' \; | grep -e '^version ='
* bump index version in constants
* check for breaking db entry changes.
@ -13,4 +15,7 @@
* vendor and release to build.opensuse.org
make vendor-prep
git archive --format=tar --prefix=kanidm-1.0.0rc4/ HEAD | gzip >kanidm-1.0.0rc4.tar.gz


@ -5,6 +5,8 @@
- [Administrative Tasks](./administrivia.md)
- [Interacting with the Server](./client_tools.md)
- [Accounts and Groups](./accounts_and_groups.md)
- [Posix Accounts and Groups](./posix_accounts.md)
- [Pam and nsswitch](./pam_and_nsswitch.md)
- [SSH Key Distribution](./ssh_key_dist.md)
- [RADIUS](./radius.md)
- [Password Quality and Badlisting](./password_quality.md)


@ -0,0 +1,102 @@
# Pam and nsswitch
Pam and nsswitch are the core mechanisms used by Linux and BSD clients
to resolve identities from an IDM service like kanidm into accounts that
can be used on the machine for various interactive tasks.
## The unix daemon
Kanidm provides a unix daemon that runs on any client that wants to use pam
and nsswitch integration. The daemon is provided so that accounts can be cached
for users who have unreliable networks, or who leave the site where kanidm is hosted.
Additionally, the daemon means that the pam and nsswitch integration libraries
can be small, helping to reduce the attack surface of the machine.
We recommend you install the client daemon from your system package manager.
You can check the daemon is running on your Linux system with
# systemctl status kanidm_unixd
This daemon uses configuration from /etc/kanidm/config. This is covered in
client_tools.
You can then check the communication status of the daemon as any user account.
$ kanidm_unixd_status
If the daemon is working, you should see:
[2020-02-14T05:58:37Z INFO kanidm_unixd_status] working!
If it is not working, you will see an error message:
[2020-02-14T05:58:10Z ERROR kanidm_unixd_status] Error -> Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }
For more, see troubleshooting.
## nsswitch
When the daemon is running you can add the nsswitch libraries to /etc/nsswitch.conf
passwd: kanidm compat
group: kanidm compat
You can then test that a posix extended user can be resolved with:
$ getent passwd <account name>
$ getent passwd testunix
testunix:x:3524161420:3524161420:testunix:/home/testunix:/bin/bash
You can also do the same for groups.
$ getent group <group name>
$ getent group testgroup
testgroup:x:2439676479:testunix
## PAM
> **WARNING:** Modifications to pam configuration *may* leave your system in a state
> where you are unable to login or authenticate. You should always have a recovery
> shell open while making changes (ie root), or have access to single-user mode
> at the machine's console.
TBD
## Troubleshooting
### Check the socket permissions
Check that /var/run/kanidm.sock has mode 777, and that non-root users can see it with
ls or other tools.
### Check you can access the kanidm server
You can check this with the client tools:
kanidm self whoami --name anonymous
### Ensure the libraries are correct.
You should have:
/usr/lib64/libnss_kanidm.so.2
### Invalidate the cache
You can invalidate the kanidm_unixd cache with:
$ kanidm_cache_invalidate
You can clear (wipe) the cache with:
$ kanidm_cache_clear
There is an important distinction between these two - invalidated cache items may still
be served to a client request if communication with the main kanidm server is not
possible. For example, you may have your laptop in a park without wifi.
Clearing the cache, however, completely wipes all local data about all accounts and groups.
If you are relying on this cached (but invalid) data, you may lose access to your accounts until
other communication issues have been resolved.
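As a rough illustration of the distinction, here is a standalone model of the two operations - this is not kanidm's actual implementation, and the names are hypothetical:

```rust
// Illustrative model of invalidate vs clear. Invalidate marks every entry
// expired but keeps the data so it can still be served while the server is
// unreachable; clear deletes the local data outright.
use std::collections::HashMap;

struct Cache {
    // account name -> (cached token, expired flag)
    entries: HashMap<String, (String, bool)>,
}

impl Cache {
    // Mark everything expired, but keep it for offline fallback.
    fn invalidate(&mut self) {
        for (_token, expired) in self.entries.values_mut() {
            *expired = true;
        }
    }

    // Wipe all local data about all accounts and groups.
    fn clear(&mut self) {
        self.entries.clear();
    }

    // When offline, even an expired entry can still be returned.
    fn lookup_offline(&self, name: &str) -> Option<&String> {
        self.entries.get(name).map(|(token, _expired)| token)
    }
}
```

After `invalidate`, an offline lookup still succeeds from the stale copy; after `clear`, nothing is left to serve.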


@ -0,0 +1,104 @@
# Posix Accounts and Groups
Kanidm has features that enable its accounts and groups to be consumed on
posix-like machines, such as Linux, FreeBSD, or others.
## Notes on Posix Features
There are a number of design decisions that have been made in the posix features
of kanidm that are intended to make distributed systems easier to manage, and
client systems more secure.
### Uid and Gid numbers
In Kanidm there is no difference between a uid and a gid number. On most unix systems
a user will create all files with a primary user and group. The primary group is
effectively equivalent to the permissions of the user. It is very easy to see scenarios
where someone may change an account to have a shared primary group (ie allusers),
but without changing the umask on all client systems. This can cause users' data to be
compromised by any member of the same shared group.
To prevent this, many systems create a user private group, or UPG. This group has a
gid number matching the uid number of the user, and the user sets its primary
group id to the gid number of the UPG.
As there is now an equivalence between the uid and gid number of the user and the UPG,
there is no benefit to separating these values. As a result kanidm accounts *only*
have a gidnumber, which is also considered to be their uidnumber. This has the benefit
of preventing accidental creation of a separate group that has an overlapping gidnumber
(the uniqueness attribute of the schema will block the creation).
### UPG generation
Due to the requirement that a user have a UPG for security, many systems create these as
two independent items. For example in /etc/passwd and /etc/group
# passwd
william:x:654401105:654401105::/home/william:/bin/zsh
# group
william:x:654401105:
Other systems like FreeIPA use a plugin that generates a UPG as a database record on
creation of the account.
Kanidm does neither of these. As the gidnumber of the user must be unique, and a user
implies the UPG must exist, we are able to generate UPGs on demand from the account.
This has a single side effect, which is that you are unable to add any members to a
UPG - however, given the nature of a user private group, this is somewhat the point.
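What "generated on demand" means can be sketched as follows - the UPG is a pure function of the account, so no separate record needs to be stored. The types and names here are illustrative, not kanidm's real API:

```rust
// Hypothetical sketch of on-demand UPG derivation.
struct Account {
    name: String,
    gidnumber: u32,
}

struct Group {
    name: String,
    gidnumber: u32,
    members: Vec<String>,
}

// The UPG shares the account's name and id, and the user is its only
// member - which is also why members can never be added to it.
fn upg_for(acct: &Account) -> Group {
    Group {
        name: acct.name.clone(),
        gidnumber: acct.gidnumber,
        members: vec![acct.name.clone()],
    }
}
```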
### Gid number generation
In the future Kanidm plans to have async replication as a feature between writable
database servers. In this case we need to be able to allocate stable and reliable
gidnumbers to accounts on replicas that may not be in continual communication.
To do this, we use the last 32 bits of the account or group's UUID to generate the
gidnumber.
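As a minimal sketch of this derivation (the exact byte order kanidm uses is an assumption here; the point is only that the id is a pure function of the UUID, so every replica computes the same value):

```rust
// Derive a gidnumber from the last 32 bits (final 4 bytes) of a 128-bit
// UUID. Big-endian interpretation is assumed for illustration.
fn gid_from_uuid(uuid_bytes: &[u8; 16]) -> u32 {
    u32::from_be_bytes([
        uuid_bytes[12],
        uuid_bytes[13],
        uuid_bytes[14],
        uuid_bytes[15],
    ])
}
```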
A valid concern is the possibility of duplication in the lower 32 bits. Given the
birthday problem, if you have 77,000 groups and accounts, you have a 50% chance
of duplication. With 50,000 you have a 20% chance, with 9,300 a 1% chance, and
with 2,900 a 0.1% chance.
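These figures follow the standard birthday-problem approximation p ≈ 1 - exp(-n²/2N), with N = 2³² possible gidnumbers. A quick check:

```rust
// Approximate probability that n randomly allocated 32-bit gidnumbers
// contain at least one collision (birthday-problem approximation).
fn collision_probability(n: f64) -> f64 {
    let space = 2f64.powi(32); // N = 2^32 possible gidnumbers
    1.0 - (-(n * n) / (2.0 * space)).exp()
}
```

For example, 77,000 ids gives roughly a 50% collision chance under this approximation, matching the figure above.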
We advise that if you have a site with >10,000 users you should use an external system
to allocate gidnumbers serially or in a consistent manner to avoid potential duplication
events.
This design decision is made as most small sites will benefit greatly from the
autoallocation policy and the simplicity of its design, while larger enterprises
will already have IDM or business process applications for HR/People that are
capable of supplying this kind of data in batch jobs.
## Enabling Posix Attributes on Accounts
To enable posix account features and ids on an account, you require the permission `idm_account_unix_extend_priv`.
This is provided to `idm_admins` in the default database.
You can then use the following command to enable posix extensions.
kanidm account posix set --name idm_admin <account_id> [--shell SHELL --gidnumber GID]
kanidm account posix set --name idm_admin demo_user
kanidm account posix set --name idm_admin demo_user --shell /bin/zsh
kanidm account posix set --name idm_admin demo_user --gidnumber 2001
You can view the account's posix token details with:
kanidm account posix show --name anonymous demo_user
## Enabling Posix Attributes on Groups
To enable posix group features and ids on a group, you require the permission `idm_group_unix_extend_priv`.
This is provided to `idm_admins` in the default database.
You can then use the following command to enable posix extensions.
kanidm group posix set --name idm_admin <group_id> [--gidnumber GID]
kanidm group posix set --name idm_admin demo_group
kanidm group posix set --name idm_admin demo_group --gidnumber 2001
You can view the group's posix token details with:
kanidm group posix show --name anonymous demo_group
Posix enabled groups will supply their members as posix members to clients. There is no
special or separate type of membership required for posix members.


@ -37,10 +37,13 @@ Uploading a private key or other data will be rejected. For example:
## Server Configuration
### Public key caching configuration
If you have kanidm_unixd running, you can use it to locally cache ssh public keys.
The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so should be installed
on the servers. It communicates to kanidm_unixd, so you should have a configured pam/nsswitch
setup as well.
You can test this is configured correctly by running:
@ -58,3 +61,27 @@ Restart sshd, and then attempt to authenticate with the keys.
It's highly recommended you keep your client configuration and sshd_config in a configuration
management tool such as salt or ansible.
### Direct configuration
The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so should be installed
on the servers.
To configure the tool, you should edit /etc/kanidm/config, as documented in [clients](./client_tools.md)
You can test this is configured correctly by running:
kanidm_ssh_authorizedkeys_direct -D anonymous <account name>
If the account has ssh public keys you should see them listed, one per line.
To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to
contain the lines:
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u
AuthorizedKeysCommandUser nobody
Restart sshd, and then attempt to authenticate with the keys.
It's highly recommended you keep your client configuration and sshd_config in a configuration
management tool such as salt or ansible.


@ -1,6 +1,6 @@
[package]
name = "kanidm_client"
version = "0.1.0" version = "0.1.1"
authors = ["William Brown <william@blackhats.net.au>"]
edition = "2018"
license = "MPL-2.0"


@ -66,6 +66,23 @@ impl KanidmAsyncClient {
        Ok(r)
    }
async fn perform_delete_request(&self, dest: &str) -> Result<(), ClientError> {
let dest = format!("{}{}", self.addr, dest);
let response = self
.client
.delete(dest.as_str())
.send()
.await
.map_err(ClientError::Transport)?;
match response.status() {
reqwest::StatusCode::OK => {}
unexpect => return Err(ClientError::Http(unexpect, response.json().await.ok())),
}
Ok(())
}
    pub async fn auth_step_init(
        &self,
        ident: &str,
@ -79,6 +96,32 @@ impl KanidmAsyncClient {
        r.map(|v| v.state)
    }
pub async fn auth_simple_password(
&self,
ident: &str,
password: &str,
) -> Result<UserAuthToken, ClientError> {
let _state = match self.auth_step_init(ident, None).await {
Ok(s) => s,
Err(e) => return Err(e),
};
let auth_req = AuthRequest {
step: AuthStep::Creds(vec![AuthCredential::Password(password.to_string())]),
};
let r: Result<AuthResponse, _> = self.perform_post_request("/v1/auth", auth_req).await;
let r = r?;
match r.state {
AuthState::Success(uat) => {
debug!("==> Authed as uat; {:?}", uat);
Ok(uat)
}
_ => Err(ClientError::AuthenticationFailed),
}
}
    pub async fn auth_anonymous(&self) -> Result<UserAuthToken, ClientError> {
        // TODO: Check state for auth continue contains anonymous.
        let _state = match self.auth_step_init("anonymous", None).await {
@ -127,4 +170,21 @@ impl KanidmAsyncClient {
        self.perform_get_request(["/v1/account/", id, "/_unix/_token"].concat().as_str())
            .await
    }
pub async fn idm_group_unix_token_get(&self, id: &str) -> Result<UnixGroupToken, ClientError> {
// Format doesn't work in async
// format!("/v1/account/{}/_unix/_token", id).as_str()
self.perform_get_request(["/v1/group/", id, "/_unix/_token"].concat().as_str())
.await
}
pub async fn idm_account_delete(&self, id: &str) -> Result<(), ClientError> {
self.perform_delete_request(["/v1/account/", id].concat().as_str())
.await
}
pub async fn idm_group_delete(&self, id: &str) -> Result<(), ClientError> {
self.perform_delete_request(["/v1/group/", id].concat().as_str())
.await
}
}


@ -1,6 +1,6 @@
[package]
name = "kanidm_proto"
version = "0.1.0" version = "0.1.1"
authors = ["William Brown <william@blackhats.net.au>"]
edition = "2018"
license = "MPL-2.0"


@ -1,6 +1,6 @@
[package]
name = "kanidm_tools"
version = "0.1.0" version = "0.1.1"
authors = ["William Brown <william@blackhats.net.au>"]
edition = "2018"
default-run = "kanidm"


@ -251,6 +251,24 @@ struct GroupNamedMembers {
    copt: CommonOpt,
}
#[derive(Debug, StructOpt)]
struct GroupPosixOpt {
#[structopt()]
name: String,
#[structopt(long = "gidnumber")]
gidnumber: Option<u32>,
#[structopt(flatten)]
copt: CommonOpt,
}
#[derive(Debug, StructOpt)]
enum GroupPosix {
#[structopt(name = "show")]
Show(GroupNamed),
#[structopt(name = "set")]
Set(GroupPosixOpt),
}
#[derive(Debug, StructOpt)]
enum GroupOpt {
    #[structopt(name = "list")]
@ -267,6 +285,8 @@ enum GroupOpt {
    PurgeMembers(GroupNamed),
    #[structopt(name = "add_members")]
    AddMembers(GroupNamedMembers),
#[structopt(name = "posix")]
Posix(GroupPosix),
}

#[derive(Debug, StructOpt)]
@ -334,6 +354,10 @@ impl ClientOpt {
            GroupOpt::AddMembers(gcopt) => gcopt.copt.debug,
            GroupOpt::SetMembers(gcopt) => gcopt.copt.debug,
            GroupOpt::PurgeMembers(gcopt) => gcopt.copt.debug,
GroupOpt::Posix(gpopt) => match gpopt {
GroupPosix::Show(gcopt) => gcopt.copt.debug,
GroupPosix::Set(gcopt) => gcopt.copt.debug,
},
        },
    }
}
@ -611,6 +635,21 @@ fn main() {
                    .idm_group_set_members(gcopt.name.as_str(), new_members)
                    .unwrap();
            }
GroupOpt::Posix(gpopt) => match gpopt {
GroupPosix::Show(gcopt) => {
let client = gcopt.copt.to_client();
let token = client
.idm_group_unix_token_get(gcopt.name.as_str())
.unwrap();
println!("{:?}", token);
}
GroupPosix::Set(gcopt) => {
let client = gcopt.copt.to_client();
client
.idm_group_unix_extend(gcopt.name.as_str(), gcopt.gidnumber)
.unwrap();
}
},
        }, // end Group
    }
}


@ -1,6 +1,6 @@
[package]
name = "kanidm_unix_int"
version = "0.1.0" version = "0.1.1"
authors = ["William Brown <william@blackhats.net.au>"]
edition = "2018"
license = "MPL-2.0"
@ -23,6 +23,18 @@ path = "src/daemon.rs"
name = "kanidm_ssh_authorizedkeys"
path = "src/ssh_authorizedkeys.rs"
[[bin]]
name = "kanidm_cache_invalidate"
path = "src/cache_invalidate.rs"
[[bin]]
name = "kanidm_cache_clear"
path = "src/cache_clear.rs"
[[bin]]
name = "kanidm_unixd_status"
path = "src/daemon_status.rs"
[dependencies]
kanidm_client = { path = "../kanidm_client", version = "0.1" }
kanidm_proto = { path = "../kanidm_proto", version = "0.1" }
@ -34,6 +46,7 @@ tokio-util = { version = "0.2", features = ["codec"] }
futures = "0.3"
bytes = "0.5"
libc = "0.2"
log = "0.4"
env_logger = "0.6"
serde = "1.0"
@ -45,5 +58,7 @@ rusqlite = { version = "0.20", features = ["backup"] }
r2d2 = "0.8"
r2d2_sqlite = "0.12"
reqwest = { version = "0.10" }
[dev-dependencies]
kanidm = { path = "../kanidmd", version = "0.1" }


@ -0,0 +1,20 @@
[package]
name = "nss_kanidm"
version = "0.1.1"
authors = ["William Brown <william@blackhats.net.au>"]
edition = "2018"
[lib]
name = "nss_kanidm"
crate-type = [ "cdylib" ]
path = "src/lib.rs"
[dependencies]
kanidm_unix_int = { path = "../", version = "0.1" }
# libnss = "0.2"
libnss = { git = "https://github.com/csnewman/libnss-rs.git", rev = "eab2d93d2438652773699b0807d558ce75b1e748" }
libc = "0.2.0"
paste = "0.1"
lazy_static = "1.3"


@ -0,0 +1,119 @@
#[macro_use]
extern crate libnss;
#[macro_use]
extern crate lazy_static;
use kanidm_unix_common::client::call_daemon_blocking;
use kanidm_unix_common::constants::DEFAULT_SOCK_PATH;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse, NssGroup, NssUser};
use libnss::group::{Group, GroupHooks};
use libnss::interop::Response;
use libnss::passwd::{Passwd, PasswdHooks};
use libc;
struct KanidmPasswd;
libnss_passwd_hooks!(kanidm, KanidmPasswd);
impl PasswdHooks for KanidmPasswd {
fn get_all_entries() -> Response<Vec<Passwd>> {
let req = ClientRequest::NssAccounts;
call_daemon_blocking(DEFAULT_SOCK_PATH, req)
.map(|r| match r {
ClientResponse::NssAccounts(l) => l.into_iter().map(passwd_from_nssuser).collect(),
_ => Vec::new(),
})
.map(|v| Response::Success(v))
.unwrap_or_else(|_| Response::Success(vec![]))
}
fn get_entry_by_uid(uid: libc::uid_t) -> Response<Passwd> {
let req = ClientRequest::NssAccountByUid(uid);
call_daemon_blocking(DEFAULT_SOCK_PATH, req)
.map(|r| match r {
ClientResponse::NssAccount(opt) => opt
.map(passwd_from_nssuser)
.map(|p| Response::Success(p))
.unwrap_or_else(|| Response::NotFound),
_ => Response::NotFound,
})
.unwrap_or_else(|_| Response::NotFound)
}
fn get_entry_by_name(name: String) -> Response<Passwd> {
let req = ClientRequest::NssAccountByName(name);
call_daemon_blocking(DEFAULT_SOCK_PATH, req)
.map(|r| match r {
ClientResponse::NssAccount(opt) => opt
.map(passwd_from_nssuser)
.map(|p| Response::Success(p))
.unwrap_or_else(|| Response::NotFound),
_ => Response::NotFound,
})
.unwrap_or_else(|_| Response::NotFound)
}
}
struct KanidmGroup;
libnss_group_hooks!(kanidm, KanidmGroup);
impl GroupHooks for KanidmGroup {
fn get_all_entries() -> Response<Vec<Group>> {
let req = ClientRequest::NssGroups;
call_daemon_blocking(DEFAULT_SOCK_PATH, req)
.map(|r| match r {
ClientResponse::NssGroups(l) => l.into_iter().map(group_from_nssgroup).collect(),
_ => Vec::new(),
})
.map(|v| Response::Success(v))
.unwrap_or_else(|_| Response::Success(vec![]))
}
fn get_entry_by_gid(gid: libc::gid_t) -> Response<Group> {
let req = ClientRequest::NssGroupByGid(gid);
call_daemon_blocking(DEFAULT_SOCK_PATH, req)
.map(|r| match r {
ClientResponse::NssGroup(opt) => opt
.map(group_from_nssgroup)
.map(|p| Response::Success(p))
.unwrap_or_else(|| Response::NotFound),
_ => Response::NotFound,
})
.unwrap_or_else(|_| Response::NotFound)
}
fn get_entry_by_name(name: String) -> Response<Group> {
let req = ClientRequest::NssGroupByName(name);
call_daemon_blocking(DEFAULT_SOCK_PATH, req)
.map(|r| match r {
ClientResponse::NssGroup(opt) => opt
.map(group_from_nssgroup)
.map(|p| Response::Success(p))
.unwrap_or_else(|| Response::NotFound),
_ => Response::NotFound,
})
.unwrap_or_else(|_| Response::NotFound)
}
}
fn passwd_from_nssuser(nu: NssUser) -> Passwd {
Passwd {
name: nu.name,
gecos: nu.gecos,
passwd: "x".to_string(),
uid: nu.gid,
gid: nu.gid,
dir: nu.homedir,
shell: nu.shell,
}
}
fn group_from_nssgroup(ng: NssGroup) -> Group {
Group {
name: ng.name,
passwd: "x".to_string(),
gid: ng.gid,
members: ng.members,
}
}


@ -0,0 +1,13 @@
[package]
name = "pam_kanidm"
version = "0.1.1"
authors = ["William Brown <william@blackhats.net.au>"]
edition = "2018"
[lib]
name = "pam_kanidm"
crate-type = [ "cdylib" ]
path = "src/lib.rs"
[dependencies]
kanidm_unix_int = { path = "../", version = "0.1" }


@ -0,0 +1,7 @@
#[cfg(test)]
mod tests {
#[test]
fn it_works() {
assert_eq!(2 + 2, 4);
}
}


@ -1,11 +1,19 @@
use crate::db::Db;
use crate::unix_proto::{NssGroup, NssUser};
use kanidm_client::asynchronous::KanidmAsyncClient;
use kanidm_client::ClientError;
use kanidm_proto::v1::{OperationError, UnixGroupToken, UnixUserToken};
use reqwest::StatusCode;
use std::ops::Add;
use std::string::ToString;
use std::time::{Duration, SystemTime};
use tokio::sync::Mutex;
pub enum Id {
Name(String),
Gid(u32),
}
#[derive(Debug, Clone)]
enum CacheState {
    Online,
@ -21,6 +29,15 @@ pub struct CacheLayer {
    timeout_seconds: u64,
}
impl ToString for Id {
fn to_string(&self) -> String {
match self {
Id::Name(s) => s.clone(),
Id::Gid(g) => g.to_string(),
}
}
}
impl CacheLayer {
    pub fn new(
        // need db path
@ -69,14 +86,27 @@ impl CacheLayer {
        self.set_cachestate(CacheState::Offline).await;
    }
    pub fn clear_cache(&self) -> Result<(), ()> {
        let dbtxn = self.db.write();
        dbtxn.clear_cache().and_then(|_| dbtxn.commit())
    }

    pub fn invalidate(&self) -> Result<(), ()> {
        let dbtxn = self.db.write();
        dbtxn.invalidate().and_then(|_| dbtxn.commit())
    }

    fn get_cached_usertokens(&self) -> Result<Vec<UnixUserToken>, ()> {
        let dbtxn = self.db.write();
        dbtxn.get_accounts()
    }

    fn get_cached_grouptokens(&self) -> Result<Vec<UnixGroupToken>, ()> {
        let dbtxn = self.db.write();
        dbtxn.get_groups()
    }
fn get_cached_usertoken(&self, account_id: &Id) -> Result<(bool, Option<UnixUserToken>), ()> {
        // Account_id could be:
        // * gidnumber
        // * name
@ -84,7 +114,34 @@
        // * uuid
        // Attempt to search these in the db.
        let dbtxn = self.db.write();
        let r = dbtxn.get_account(&account_id)?;
match r {
Some((ut, ex)) => {
// Are we expired?
let offset = Duration::from_secs(ex);
let ex_time = SystemTime::UNIX_EPOCH + offset;
let now = SystemTime::now();
if now >= ex_time {
Ok((true, Some(ut)))
} else {
Ok((false, Some(ut)))
}
}
None => Ok((true, None)),
}
}
fn get_cached_grouptoken(&self, grp_id: &Id) -> Result<(bool, Option<UnixGroupToken>), ()> {
// grp_id could be:
// * gidnumber
// * name
// * spn
// * uuid
// Attempt to search these in the db.
let dbtxn = self.db.write();
let r = dbtxn.get_group(&grp_id)?;
        match r {
            Some((ut, ex)) => {
@ -114,17 +171,54 @@ impl CacheLayer {
            })?;
        let dbtxn = self.db.write();
        // We need to add the groups first
        token
            .groups
            .iter()
            .try_for_each(|g| dbtxn.update_group(g, offset.as_secs()))
            .and_then(|_|
                // So that when we add the account it can make the relationships.
                dbtxn
                    .update_account(token, offset.as_secs()))
            .and_then(|_| dbtxn.commit())
    }
fn set_cache_grouptoken(&self, token: &UnixGroupToken) -> Result<(), ()> {
// Set an expiry
let ex_time = SystemTime::now() + Duration::from_secs(self.timeout_seconds);
let offset = ex_time
.duration_since(SystemTime::UNIX_EPOCH)
.map_err(|e| {
error!("time conversion error - ex_time less than epoch? {:?}", e);
()
})?;
let dbtxn = self.db.write();
dbtxn
.update_group(token, offset.as_secs())
.and_then(|_| dbtxn.commit())
}
fn delete_cache_usertoken(&self, a_uuid: &str) -> Result<(), ()> {
let dbtxn = self.db.write();
dbtxn.delete_account(a_uuid).and_then(|_| dbtxn.commit())
}
fn delete_cache_grouptoken(&self, g_uuid: &str) -> Result<(), ()> {
let dbtxn = self.db.write();
dbtxn.delete_group(g_uuid).and_then(|_| dbtxn.commit())
}
    async fn refresh_usertoken(
        &self,
        account_id: &Id,
        token: Option<UnixUserToken>,
    ) -> Result<Option<UnixUserToken>, ()> {
        match self
            .client
            .idm_account_unix_token_get(account_id.to_string().as_str())
            .await
        {
            Ok(n_tok) => {
                // We have the token!
                self.set_cache_usertoken(&n_tok)?;
@ -140,6 +234,19 @@ impl CacheLayer {
                    .await;
                Ok(token)
            }
ClientError::Http(
StatusCode::BAD_REQUEST,
Some(OperationError::NoMatchingEntries),
) => {
                // We were able to contact the server but the entry has been removed.
debug!("entry has been removed, clearing from cache ...");
token
.map(|tok| self.delete_cache_usertoken(&tok.uuid))
// Now an option<result<t, _>>
.transpose()
// now result<option<t>, _>
.map(|_| None)
}
            er => {
                error!("client error -> {:?}", er);
                // Some other transient error, continue with the token.
@ -150,10 +257,57 @@
        }
    }

    async fn refresh_grouptoken(
&self,
grp_id: &Id,
token: Option<UnixGroupToken>,
) -> Result<Option<UnixGroupToken>, ()> {
match self
.client
.idm_group_unix_token_get(grp_id.to_string().as_str())
.await
{
Ok(n_tok) => {
// We have the token!
self.set_cache_grouptoken(&n_tok)?;
Ok(Some(n_tok))
}
Err(e) => {
match e {
ClientError::Transport(er) => {
error!("transport error, moving to offline -> {:?}", er);
// Something went wrong, mark offline.
let time = SystemTime::now().add(Duration::from_secs(15));
self.set_cachestate(CacheState::OfflineNextCheck(time))
.await;
Ok(token)
}
ClientError::Http(
StatusCode::BAD_REQUEST,
Some(OperationError::NoMatchingEntries),
) => {
debug!("entry has been removed, clearing from cache ...");
token
.map(|tok| self.delete_cache_grouptoken(&tok.uuid))
// Now an option<result<t, _>>
.transpose()
// now result<option<t>, _>
.map(|_| None)
}
er => {
error!("client error -> {:?}", er);
// Some other transient error, continue with the token.
Err(())
}
}
}
}
}
async fn get_usertoken(&self, account_id: Id) -> Result<Option<UnixUserToken>, ()> {
        debug!("get_usertoken");
        // get the item from the cache
        let (expired, item) = self.get_cached_usertoken(&account_id).map_err(|e| {
            debug!("get_usertoken error -> {:?}", e);
            ()
        })?;
@ -184,7 +338,7 @@ impl CacheLayer {
            // Return it.
            if SystemTime::now() >= time && self.test_connection().await {
                // We brought ourselves online, lets go
                self.refresh_usertoken(&account_id, item).await
            } else {
                // Unable to bring up connection, return cache.
                Ok(item)
@ -194,17 +348,155 @@ impl CacheLayer {
                debug!("online expired, refresh cache");
                // Attempt to refresh the item
                // Return it.
                self.refresh_usertoken(&account_id, item).await
            }
        }
    }
async fn get_grouptoken(&self, grp_id: Id) -> Result<Option<UnixGroupToken>, ()> {
debug!("get_grouptoken");
let (expired, item) = self.get_cached_grouptoken(&grp_id).map_err(|e| {
debug!("get_grouptoken error -> {:?}", e);
()
})?;
let state = self.get_cachestate().await;
match (expired, state) {
(_, CacheState::Offline) => {
debug!("offline, returning cached item");
Ok(item)
}
(false, CacheState::OfflineNextCheck(time)) => {
debug!(
"offline valid, next check {:?}, returning cached item",
time
);
// Still valid within lifetime, return.
Ok(item)
}
(false, CacheState::Online) => {
debug!("online valid, returning cached item");
// Still valid within lifetime, return.
Ok(item)
}
(true, CacheState::OfflineNextCheck(time)) => {
debug!("offline expired, next check {:?}, refresh cache", time);
// Attempt to refresh the item
// Return it.
if SystemTime::now() >= time && self.test_connection().await {
// We brought ourselves online, lets go
self.refresh_grouptoken(&grp_id, item).await
} else {
// Unable to bring up connection, return cache.
Ok(item)
}
}
(true, CacheState::Online) => {
debug!("online expired, refresh cache");
// Attempt to refresh the item
// Return it.
self.refresh_grouptoken(&grp_id, item).await
}
}
}
fn get_groupmembers(&self, uuid: &str) -> Vec<String> {
let dbtxn = self.db.write();
dbtxn
.get_group_members(uuid)
.unwrap_or_else(|_| Vec::new())
.into_iter()
.map(|ut| {
// TODO: We'll have a switch to convert this to spn in some configs
// in the future.
ut.name
})
.collect()
}
    // Get ssh keys for an account id
    pub async fn get_sshkeys(&self, account_id: &str) -> Result<Vec<String>, ()> {
        let token = self.get_usertoken(Id::Name(account_id.to_string())).await?;
        Ok(token.map(|t| t.sshkeys).unwrap_or_else(|| Vec::new()))
    }
pub fn get_nssaccounts(&self) -> Result<Vec<NssUser>, ()> {
self.get_cached_usertokens().map(|l| {
l.into_iter()
.map(|tok| {
NssUser {
homedir: format!("/home/{}", tok.name),
name: tok.name,
gid: tok.gidnumber,
gecos: tok.displayname,
// TODO: default shell override.
shell: tok.shell.unwrap_or_else(|| "/bin/bash".to_string()),
}
})
.collect()
})
}
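Both `get_nssaccounts` and `get_nssaccount` build the `NssUser` the same way, deriving the home directory from the name and substituting a default for the optional shell. A sketch of that mapping with plain std types; `Token` mimics the relevant `UnixUserToken` fields, and `/bin/bash` is the hard-coded fallback flagged by the TODO above:

```rust
// Minimal stand-in for the cached account token.
struct Token {
    name: String,
    gidnumber: u32,
    displayname: String,
    shell: Option<String>,
}

// The nsswitch-facing shape, as in unix_proto::NssUser.
#[derive(Debug, PartialEq)]
struct NssUser {
    name: String,
    gid: u32,
    gecos: String,
    homedir: String,
    shell: String,
}

// Same shape as the closure in get_nssaccount(s): derive the home
// directory from the name, and fall back to /bin/bash when the account
// has no shell set.
fn to_nss_user(tok: Token) -> NssUser {
    NssUser {
        homedir: format!("/home/{}", tok.name),
        gecos: tok.displayname,
        gid: tok.gidnumber,
        shell: tok.shell.unwrap_or_else(|| "/bin/bash".to_string()),
        name: tok.name,
    }
}

fn main() {
    let u = to_nss_user(Token {
        name: "alice".to_string(),
        gidnumber: 2000,
        displayname: "Alice".to_string(),
        shell: None,
    });
    assert_eq!(u.homedir, "/home/alice");
    assert_eq!(u.shell, "/bin/bash");
}
```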
async fn get_nssaccount(&self, account_id: Id) -> Result<Option<NssUser>, ()> {
let token = self.get_usertoken(account_id).await?;
Ok(token.map(|tok| {
NssUser {
homedir: format!("/home/{}", tok.name),
name: tok.name,
gid: tok.gidnumber,
gecos: tok.displayname,
// TODO: default shell override.
shell: tok.shell.unwrap_or_else(|| "/bin/bash".to_string()),
}
}))
}
pub async fn get_nssaccount_name(&self, account_id: &str) -> Result<Option<NssUser>, ()> {
self.get_nssaccount(Id::Name(account_id.to_string())).await
}
pub async fn get_nssaccount_gid(&self, gid: u32) -> Result<Option<NssUser>, ()> {
self.get_nssaccount(Id::Gid(gid)).await
}
pub fn get_nssgroups(&self) -> Result<Vec<NssGroup>, ()> {
self.get_cached_grouptokens().map(|l| {
l.into_iter()
.map(|tok| {
let members = self.get_groupmembers(&tok.uuid);
NssGroup {
name: tok.name,
gid: tok.gidnumber,
members: members,
}
})
.collect()
})
}
async fn get_nssgroup(&self, grp_id: Id) -> Result<Option<NssGroup>, ()> {
let token = self.get_grouptoken(grp_id).await?;
// Get members set.
Ok(token.map(|tok| {
let members = self.get_groupmembers(&tok.uuid);
NssGroup {
name: tok.name,
gid: tok.gidnumber,
members: members,
}
}))
}
pub async fn get_nssgroup_name(&self, grp_id: &str) -> Result<Option<NssGroup>, ()> {
self.get_nssgroup(Id::Name(grp_id.to_string())).await
}
pub async fn get_nssgroup_gid(&self, gid: u32) -> Result<Option<NssGroup>, ()> {
self.get_nssgroup(Id::Gid(gid)).await
}
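The `Id` enum lets one lookup path serve both name-keyed and gid-keyed NSS queries, so `get_nssgroup_name` and `get_nssgroup_gid` are thin wrappers that only tag the key. A std-only sketch of the pattern; the `HashMap` store is illustrative, since the real code dispatches to separate SQL queries per key kind:

```rust
use std::collections::HashMap;

// Same shape as cache::Id: one key type covering both lookup styles.
#[derive(Debug, Clone, PartialEq)]
enum Id {
    Name(String),
    Gid(u32),
}

struct Store {
    by_name: HashMap<String, u32>, // name -> gid, illustrative only
}

impl Store {
    // Single entry point; the wrappers below just tag the key,
    // mirroring get_nssgroup_name / get_nssgroup_gid.
    fn get(&self, id: &Id) -> Option<u32> {
        match id {
            Id::Name(n) => self.by_name.get(n).copied(),
            Id::Gid(g) => self.by_name.values().find(|v| **v == *g).copied(),
        }
    }

    fn get_by_name(&self, name: &str) -> Option<u32> {
        self.get(&Id::Name(name.to_string()))
    }

    fn get_by_gid(&self, gid: u32) -> Option<u32> {
        self.get(&Id::Gid(gid))
    }
}

fn main() {
    let mut by_name = HashMap::new();
    by_name.insert("testgroup".to_string(), 2000);
    let s = Store { by_name };
    assert_eq!(s.get_by_name("testgroup"), Some(2000));
    assert_eq!(s.get_by_gid(2000), Some(2000));
    assert_eq!(s.get_by_gid(9999), None);
}
```

Keeping the dispatch in one place means the expiry and refresh logic never has to care which kind of key the caller held.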
    pub async fn test_connection(&self) -> bool {
        let state = self.get_cachestate().await;
        match state {


@ -0,0 +1,51 @@
#[macro_use]
extern crate log;
use log::debug;
use structopt::StructOpt;
use futures::executor::block_on;
use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_SOCK_PATH;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
#[derive(Debug, StructOpt)]
struct ClientOpt {
#[structopt(short = "d", long = "debug")]
debug: bool,
#[structopt(long = "really")]
really: bool,
}
#[tokio::main]
async fn main() {
let opt = ClientOpt::from_args();
if opt.debug {
::std::env::set_var("RUST_LOG", "kanidm=debug,kanidm_client=debug");
} else {
::std::env::set_var("RUST_LOG", "kanidm=info,kanidm_client=info");
}
env_logger::init();
debug!("Starting cache invalidate tool ...");
if !opt.really {
error!("Are you sure you want to proceed? If so use --really");
return;
}
let req = ClientRequest::InvalidateCache;
match block_on(call_daemon(DEFAULT_SOCK_PATH, req)) {
Ok(r) => match r {
ClientResponse::Ok => info!("success"),
_ => {
error!("Error: unexpected response -> {:?}", r);
}
},
Err(e) => {
error!("Error -> {:?}", e);
}
}
}


@ -0,0 +1,43 @@
#[macro_use]
extern crate log;
use log::debug;
use structopt::StructOpt;
use futures::executor::block_on;
use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_SOCK_PATH;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
#[derive(Debug, StructOpt)]
struct ClientOpt {
#[structopt(short = "d", long = "debug")]
debug: bool,
}
#[tokio::main]
async fn main() {
let opt = ClientOpt::from_args();
if opt.debug {
::std::env::set_var("RUST_LOG", "kanidm=debug,kanidm_client=debug");
} else {
::std::env::set_var("RUST_LOG", "kanidm=info,kanidm_client=info");
}
env_logger::init();
debug!("Starting cache invalidate tool ...");
let req = ClientRequest::InvalidateCache;
match block_on(call_daemon(DEFAULT_SOCK_PATH, req)) {
Ok(r) => match r {
ClientResponse::Ok => info!("success"),
_ => {
error!("Error: unexpected response -> {:?}", r);
}
},
Err(e) => {
error!("Error -> {:?}", e);
}
}
}


@ -0,0 +1,79 @@
use bytes::{BufMut, BytesMut};
use futures::SinkExt;
use futures::StreamExt;
use std::error::Error;
use std::io::Error as IoError;
use std::io::ErrorKind;
use tokio::net::UnixStream;
use tokio::runtime::Runtime;
use tokio_util::codec::Framed;
use tokio_util::codec::{Decoder, Encoder};
use crate::unix_proto::{ClientRequest, ClientResponse};
struct ClientCodec;
impl Decoder for ClientCodec {
type Item = ClientResponse;
type Error = IoError;
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
match serde_cbor::from_slice::<ClientResponse>(&src) {
Ok(msg) => {
// Clear the buffer for the next message.
src.clear();
Ok(Some(msg))
}
_ => Ok(None),
}
}
}
impl Encoder for ClientCodec {
type Item = ClientRequest;
type Error = IoError;
fn encode(&mut self, msg: ClientRequest, dst: &mut BytesMut) -> Result<(), Self::Error> {
let data = serde_cbor::to_vec(&msg).map_err(|e| {
error!("socket encoding error -> {:?}", e);
IoError::new(ErrorKind::Other, "CBOR encode error")
})?;
debug!("Attempting to send request -> {:?} ...", data);
dst.put(data.as_slice());
Ok(())
}
}
impl ClientCodec {
fn new() -> Self {
ClientCodec
}
}
pub async fn call_daemon(path: &str, req: ClientRequest) -> Result<ClientResponse, Box<dyn Error>> {
let stream = UnixStream::connect(path).await?;
let mut reqs = Framed::new(stream, ClientCodec::new());
reqs.send(req).await?;
reqs.flush().await?;
match reqs.next().await {
Some(Ok(res)) => {
debug!("Response -> {:?}", res);
Ok(res)
}
_ => {
error!("Error");
Err(Box::new(IoError::new(ErrorKind::Other, "oh no!")))
}
}
}
pub fn call_daemon_blocking(
path: &str,
req: ClientRequest,
) -> Result<ClientResponse, Box<dyn Error>> {
let mut rt = Runtime::new()?;
rt.block_on(call_daemon(path, req))
}


@ -1,4 +1,4 @@
-pub const DEFAULT_SOCK_PATH: &'static str = "/tmp/kanidm.sock";
+pub const DEFAULT_SOCK_PATH: &'static str = "/var/run/kanidm_unixd.sock";
-pub const DEFAULT_DB_PATH: &'static str = "/tmp/kanidm.cache.db";
+pub const DEFAULT_DB_PATH: &'static str = "/var/lib/kanidm_unixd/kanidm.cache.db";
pub const DEFAULT_CONN_TIMEOUT: u64 = 2;
pub const DEFAULT_CACHE_TIMEOUT: u64 = 15;


@ -4,6 +4,7 @@ extern crate log;
use bytes::{BufMut, BytesMut};
use futures::SinkExt;
use futures::StreamExt;
+use libc::umask;
use std::error::Error;
use std::io;
use std::sync::Arc;
@ -76,21 +77,108 @@ async fn handle_client(
    let mut reqs = Framed::new(sock, ClientCodec::new());
    while let Some(Ok(req)) = reqs.next().await {
-        match req {
+        let resp = match req {
            ClientRequest::SshKey(account_id) => {
-                let resp = match cachelayer.get_sshkeys(account_id.as_str()).await {
-                    Ok(r) => ClientResponse::SshKeys(r),
-                    Err(_) => {
+                debug!("sshkey req");
+                cachelayer
+                    .get_sshkeys(account_id.as_str())
+                    .await
+                    .map(|r| ClientResponse::SshKeys(r))
+                    .unwrap_or_else(|_| {
                        error!("unable to load keys, returning empty set.");
                        ClientResponse::SshKeys(vec![])
-                    }
-                };
-                reqs.send(resp).await?;
-                reqs.flush().await?;
-                debug!("flushed response!");
+                    })
            }
-        }
+            ClientRequest::NssAccounts => {
debug!("nssaccounts req");
cachelayer
.get_nssaccounts()
.map(|r| ClientResponse::NssAccounts(r))
.unwrap_or_else(|_| {
error!("unable to enum accounts");
ClientResponse::NssAccounts(Vec::new())
})
}
ClientRequest::NssAccountByUid(gid) => {
debug!("nssaccountbyuid req");
cachelayer
.get_nssaccount_gid(gid)
.await
.map(|acc| ClientResponse::NssAccount(acc))
.unwrap_or_else(|_| {
error!("unable to load account, returning empty.");
ClientResponse::NssAccount(None)
})
}
ClientRequest::NssAccountByName(account_id) => {
debug!("nssaccountbyname req");
cachelayer
.get_nssaccount_name(account_id.as_str())
.await
.map(|acc| ClientResponse::NssAccount(acc))
.unwrap_or_else(|_| {
error!("unable to load account, returning empty.");
ClientResponse::NssAccount(None)
})
}
ClientRequest::NssGroups => {
debug!("nssgroups req");
cachelayer
.get_nssgroups()
.map(|r| ClientResponse::NssGroups(r))
.unwrap_or_else(|_| {
error!("unable to enum groups");
ClientResponse::NssGroups(Vec::new())
})
}
ClientRequest::NssGroupByGid(gid) => {
debug!("nssgroupbygid req");
cachelayer
.get_nssgroup_gid(gid)
.await
.map(|grp| ClientResponse::NssGroup(grp))
.unwrap_or_else(|_| {
error!("unable to load group, returning empty.");
ClientResponse::NssGroup(None)
})
}
ClientRequest::NssGroupByName(grp_id) => {
debug!("nssgroupbyname req");
cachelayer
.get_nssgroup_name(grp_id.as_str())
.await
.map(|grp| ClientResponse::NssGroup(grp))
.unwrap_or_else(|_| {
error!("unable to load group, returning empty.");
ClientResponse::NssGroup(None)
})
}
ClientRequest::InvalidateCache => {
debug!("invalidate cache");
cachelayer
.invalidate()
.map(|_| ClientResponse::Ok)
.unwrap_or(ClientResponse::Error)
}
ClientRequest::ClearCache => {
debug!("clear cache");
cachelayer
.clear_cache()
.map(|_| ClientResponse::Ok)
.unwrap_or(ClientResponse::Error)
}
ClientRequest::Status => {
debug!("status check");
if cachelayer.test_connection().await {
ClientResponse::Ok
} else {
ClientResponse::Error
}
}
};
reqs.send(resp).await?;
reqs.flush().await?;
debug!("flushed response!");
    }
    // Disconnect them
@ -122,7 +210,11 @@ async fn main() {
            .expect("Failed to build cache layer."),
    );
+    // Set the umask while we open the path
+    let before = unsafe { umask(0) };
    let mut listener = UnixListener::bind(DEFAULT_SOCK_PATH).unwrap();
+    // Undo it.
+    let _ = unsafe { umask(before) };
    let server = async move {
        let mut incoming = listener.incoming();


@ -0,0 +1,43 @@
#[macro_use]
extern crate log;
use log::debug;
use structopt::StructOpt;
use futures::executor::block_on;
use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_SOCK_PATH;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
#[derive(Debug, StructOpt)]
struct ClientOpt {
#[structopt(short = "d", long = "debug")]
debug: bool,
}
#[tokio::main]
async fn main() {
let opt = ClientOpt::from_args();
if opt.debug {
::std::env::set_var("RUST_LOG", "kanidm=debug,kanidm_client=debug");
} else {
::std::env::set_var("RUST_LOG", "kanidm=info,kanidm_client=info");
}
env_logger::init();
debug!("Starting cache status tool ...");
let req = ClientRequest::Status;
match block_on(call_daemon(DEFAULT_SOCK_PATH, req)) {
Ok(r) => match r {
ClientResponse::Ok => info!("working!"),
_ => {
error!("Error: unexpected response -> {:?}", r);
}
},
Err(e) => {
error!("Error -> {:?}", e);
}
}
}


@ -5,6 +5,7 @@ use rusqlite::NO_PARAMS;
use std::convert::TryFrom;
use std::fmt;
+use crate::cache::Id;
use std::sync::{Mutex, MutexGuard};

pub struct Db {
@ -108,6 +109,22 @@ impl<'a> DbTxn<'a> {
                ()
            })?;
self.conn
.execute(
"CREATE TABLE IF NOT EXISTS memberof_t (
g_uuid TEXT,
a_uuid TEXT,
FOREIGN KEY(g_uuid) REFERENCES group_t(uuid) ON DELETE CASCADE,
FOREIGN KEY(a_uuid) REFERENCES account_t(uuid) ON DELETE CASCADE
)
",
NO_PARAMS,
)
.map_err(|e| {
error!("sqlite memberof_t create error -> {:?}", e);
()
})?;
        Ok(())
    }
@ -125,6 +142,24 @@ impl<'a> DbTxn<'a> {
            })
    }
pub fn invalidate(&self) -> Result<(), ()> {
self.conn
.execute("UPDATE group_t SET expiry = 0", NO_PARAMS)
.map_err(|e| {
debug!("sqlite update group_t failure -> {:?}", e);
()
})?;
self.conn
.execute("UPDATE account_t SET expiry = 0", NO_PARAMS)
.map_err(|e| {
debug!("sqlite update account_t failure -> {:?}", e);
()
})?;
Ok(())
}
    pub fn clear_cache(&self) -> Result<(), ()> {
        self.conn
            .execute("DELETE FROM group_t", NO_PARAMS)
@ -143,7 +178,7 @@ impl<'a> DbTxn<'a> {
        Ok(())
    }

-    pub fn get_account(&self, account_id: &str) -> Result<Option<(UnixUserToken, u64)>, ()> {
+    fn get_account_data_name(&self, account_id: &str) -> Result<Vec<(Vec<u8>, i64)>, ()> {
        let mut stmt = self.conn
            .prepare(
                "SELECT token, expiry FROM account_t WHERE uuid = :account_id OR name = :account_id OR spn = :account_id"
@ -168,8 +203,41 @@ impl<'a> DbTxn<'a> {
                })
            })
            .collect();
+        data
+    }

-        let data = data?;
+    fn get_account_data_gid(&self, gid: &u32) -> Result<Vec<(Vec<u8>, i64)>, ()> {
let mut stmt = self
.conn
.prepare("SELECT token, expiry FROM account_t WHERE gidnumber = :gid")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
()
})?;
// Makes tuple (token, expiry)
let data_iter = stmt
.query_map(&[gid], |row| Ok((row.get(0)?, row.get(1)?)))
.map_err(|e| {
error!("sqlite query_map failure -> {:?}", e);
()
})?;
let data: Result<Vec<(Vec<u8>, i64)>, _> = data_iter
.map(|v| {
v.map_err(|e| {
error!("sqlite map failure -> {:?}", e);
()
})
})
.collect();
data
}
pub fn get_account(&self, account_id: &Id) -> Result<Option<(UnixUserToken, u64)>, ()> {
let data = match account_id {
Id::Name(n) => self.get_account_data_name(n.as_str()),
Id::Gid(g) => self.get_account_data_gid(g),
}?;
        // Assert only one result?
        if data.len() >= 2 {
@ -192,10 +260,47 @@ impl<'a> DbTxn<'a> {
                Ok((t, e))
            })
            .transpose();
        r
    }
pub fn get_accounts(&self) -> Result<Vec<UnixUserToken>, ()> {
let mut stmt = self
.conn
.prepare("SELECT token FROM account_t")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
()
})?;
let data_iter = stmt
.query_map(NO_PARAMS, |row| Ok(row.get(0)?))
.map_err(|e| {
error!("sqlite query_map failure -> {:?}", e);
()
})?;
let data: Result<Vec<Vec<u8>>, _> = data_iter
.map(|v| {
v.map_err(|e| {
error!("sqlite map failure -> {:?}", e);
()
})
})
.collect();
let data = data?;
data.iter()
.map(|token| {
// token convert with cbor.
debug!("{:?}", token);
serde_cbor::from_slice(token.as_slice()).map_err(|e| {
error!("cbor error -> {:?}", e);
()
})
})
.collect()
}
    pub fn update_account(&self, account: &UnixUserToken, expire: u64) -> Result<(), ()> {
        let data = serde_cbor::to_vec(account).map_err(|e| {
            error!("cbor error -> {:?}", e);
@ -228,8 +333,264 @@ impl<'a> DbTxn<'a> {
            .map_err(|e| {
                error!("sqlite execute_named error -> {:?}", e);
                ()
})?;
// Now, we have to update the group memberships.
// First remove everything that already exists:
let mut stmt = self
.conn
.prepare("DELETE FROM memberof_t WHERE a_uuid = :a_uuid")
.map_err(|e| {
error!("sqlite prepare error -> {:?}", e);
()
})?;
stmt.execute(&[&account.uuid])
.map(|r| {
debug!("delete memberships -> {:?}", r);
()
})
.map_err(|e| {
error!("sqlite execute error -> {:?}", e);
()
})?;
let mut stmt = self
.conn
.prepare("INSERT INTO memberof_t (a_uuid, g_uuid) VALUES (:a_uuid, :g_uuid)")
.map_err(|e| {
error!("sqlite prepare error -> {:?}", e);
()
})?;
// Now for each group, add the relation.
account.groups.iter().try_for_each(|g| {
stmt.execute_named(&[(":a_uuid", &account.uuid), (":g_uuid", &g.uuid)])
.map(|r| {
debug!("insert membership -> {:?}", r);
()
})
.map_err(|e| {
error!("sqlite execute_named error -> {:?}", e);
()
})
            })
    }
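`update_account` replaces an account's `memberof_t` rows wholesale: delete every row for the `a_uuid`, then insert one row per current group. A std-only sketch of that delete-then-insert pattern over an in-memory relation; the real code performs the same two steps with prepared SQL statements:

```rust
use std::collections::HashSet;

// (account_uuid, group_uuid) pairs, mirroring the memberof_t relation.
type MemberOf = HashSet<(String, String)>;

// Same strategy as update_account: wipe the account's rows, then re-add
// the current set. Idempotent, and stale memberships vanish automatically.
fn replace_memberships(rel: &mut MemberOf, a_uuid: &str, groups: &[&str]) {
    // DELETE FROM memberof_t WHERE a_uuid = :a_uuid
    rel.retain(|(a, _)| a != a_uuid);
    // INSERT INTO memberof_t (a_uuid, g_uuid) VALUES (...)
    for g in groups {
        rel.insert((a_uuid.to_string(), g.to_string()));
    }
}

fn main() {
    let mut rel = MemberOf::new();
    replace_memberships(&mut rel, "u1", &["g1", "g2"]);
    assert_eq!(rel.len(), 2);

    // The user leaves g2: one call brings the relation back in sync.
    replace_memberships(&mut rel, "u1", &["g1"]);
    assert!(rel.contains(&("u1".to_string(), "g1".to_string())));
    assert!(!rel.contains(&("u1".to_string(), "g2".to_string())));
}
```

Full replacement avoids diffing the old and new group sets, at the cost of rewriting unchanged rows; for per-account membership counts this is a reasonable trade.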
pub fn delete_account(&self, a_uuid: &str) -> Result<(), ()> {
self.conn
.execute("DELETE FROM account_t WHERE uuid = :a_uuid", &[a_uuid])
.map(|_| ())
.map_err(|e| {
error!("sqlite memberof_t create error -> {:?}", e);
()
})
}
fn get_group_data_name(&self, grp_id: &str) -> Result<Vec<(Vec<u8>, i64)>, ()> {
let mut stmt = self.conn
.prepare(
"SELECT token, expiry FROM group_t WHERE uuid = :grp_id OR name = :grp_id OR spn = :grp_id"
)
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
()
})?;
// Makes tuple (token, expiry)
let data_iter = stmt
.query_map(&[grp_id], |row| Ok((row.get(0)?, row.get(1)?)))
.map_err(|e| {
error!("sqlite query_map failure -> {:?}", e);
()
})?;
let data: Result<Vec<(Vec<u8>, i64)>, _> = data_iter
.map(|v| {
v.map_err(|e| {
error!("sqlite map failure -> {:?}", e);
()
})
})
.collect();
data
}
fn get_group_data_gid(&self, gid: &u32) -> Result<Vec<(Vec<u8>, i64)>, ()> {
let mut stmt = self
.conn
.prepare("SELECT token, expiry FROM group_t WHERE gidnumber = :gid")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
()
})?;
// Makes tuple (token, expiry)
let data_iter = stmt
.query_map(&[gid], |row| Ok((row.get(0)?, row.get(1)?)))
.map_err(|e| {
error!("sqlite query_map failure -> {:?}", e);
()
})?;
let data: Result<Vec<(Vec<u8>, i64)>, _> = data_iter
.map(|v| {
v.map_err(|e| {
error!("sqlite map failure -> {:?}", e);
()
})
})
.collect();
data
}
pub fn get_group(&self, grp_id: &Id) -> Result<Option<(UnixGroupToken, u64)>, ()> {
let data = match grp_id {
Id::Name(n) => self.get_group_data_name(n.as_str()),
Id::Gid(g) => self.get_group_data_gid(g),
}?;
// Assert only one result?
if data.len() >= 2 {
error!("invalid db state, multiple entries matched query?");
return Err(());
}
let r: Result<Option<(_, _)>, ()> = data
.first()
.map(|(token, expiry)| {
// token convert with cbor.
let t = serde_cbor::from_slice(token.as_slice()).map_err(|e| {
error!("cbor error -> {:?}", e);
()
})?;
let e = u64::try_from(*expiry).map_err(|e| {
error!("u64 convert error -> {:?}", e);
()
})?;
Ok((t, e))
})
.transpose();
r
}
pub fn get_group_members(&self, g_uuid: &str) -> Result<Vec<UnixUserToken>, ()> {
let mut stmt = self
.conn
.prepare("SELECT account_t.token FROM (account_t, memberof_t) WHERE account_t.uuid = memberof_t.a_uuid AND memberof_t.g_uuid = :g_uuid")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
()
})?;
let data_iter = stmt
.query_map(&[g_uuid], |row| Ok(row.get(0)?))
.map_err(|e| {
error!("sqlite query_map failure -> {:?}", e);
()
})?;
let data: Result<Vec<Vec<u8>>, _> = data_iter
.map(|v| {
v.map_err(|e| {
error!("sqlite map failure -> {:?}", e);
()
})
})
.collect();
let data = data?;
data.iter()
.map(|token| {
// token convert with cbor.
debug!("{:?}", token);
serde_cbor::from_slice(token.as_slice()).map_err(|e| {
error!("cbor error -> {:?}", e);
()
})
})
.collect()
}
pub fn get_groups(&self) -> Result<Vec<UnixGroupToken>, ()> {
let mut stmt = self
.conn
.prepare("SELECT token FROM group_t")
.map_err(|e| {
error!("sqlite select prepare failure -> {:?}", e);
()
})?;
let data_iter = stmt
.query_map(NO_PARAMS, |row| Ok(row.get(0)?))
.map_err(|e| {
error!("sqlite query_map failure -> {:?}", e);
()
})?;
let data: Result<Vec<Vec<u8>>, _> = data_iter
.map(|v| {
v.map_err(|e| {
error!("sqlite map failure -> {:?}", e);
()
})
})
.collect();
let data = data?;
data.iter()
.map(|token| {
// token convert with cbor.
debug!("{:?}", token);
serde_cbor::from_slice(token.as_slice()).map_err(|e| {
error!("cbor error -> {:?}", e);
()
})
})
.collect()
}
pub fn update_group(&self, grp: &UnixGroupToken, expire: u64) -> Result<(), ()> {
let data = serde_cbor::to_vec(grp).map_err(|e| {
error!("cbor error -> {:?}", e);
()
})?;
let expire = i64::try_from(expire).map_err(|e| {
error!("i64 convert error -> {:?}", e);
()
})?;
let mut stmt = self.conn
.prepare("INSERT OR REPLACE INTO group_t (uuid, name, spn, gidnumber, token, expiry) VALUES (:uuid, :name, :spn, :gidnumber, :token, :expiry)")
.map_err(|e| {
error!("sqlite prepare error -> {:?}", e);
()
})?;
stmt.execute_named(&[
(":uuid", &grp.uuid),
(":name", &grp.name),
(":spn", &grp.spn),
(":gidnumber", &grp.gidnumber),
(":token", &data),
(":expiry", &expire),
])
.map(|r| {
debug!("insert -> {:?}", r);
()
})
.map_err(|e| {
error!("sqlite execute_named error -> {:?}", e);
()
})
}
pub fn delete_group(&self, g_uuid: &str) -> Result<(), ()> {
self.conn
.execute("DELETE FROM group_t WHERE uuid = :g_uuid", &[g_uuid])
.map(|_| ())
.map_err(|e| {
error!("sqlite memberof_t create error -> {:?}", e);
()
})
}
}

impl<'a> fmt::Debug for DbTxn<'a> {
@ -253,6 +614,7 @@ impl<'a> Drop for DbTxn<'a> {
#[cfg(test)]
mod tests {
    use super::Db;
+    use crate::cache::Id;
    use kanidm_proto::v1::{UnixGroupToken, UnixUserToken};

    #[test]
@ -273,32 +635,35 @@
        sshkeys: vec!["key-a".to_string()],
    };

+   let id_name = Id::Name("testuser".to_string());
+   let id_name2 = Id::Name("testuser2".to_string());
+   let id_spn = Id::Name("testuser@example.com".to_string());
+   let id_spn2 = Id::Name("testuser2@example.com".to_string());
+   let id_uuid = Id::Name("0302b99c-f0f6-41ab-9492-852692b0fd16".to_string());
+   let id_gid = Id::Gid(2000);

    // test finding no account
-   let r1 = dbtxn.get_account("testuser").unwrap();
+   let r1 = dbtxn.get_account(&id_name).unwrap();
    assert!(r1.is_none());
-   let r2 = dbtxn.get_account("testuser@example.com").unwrap();
+   let r2 = dbtxn.get_account(&id_spn).unwrap();
    assert!(r2.is_none());
-   let r3 = dbtxn
-       .get_account("0302b99c-f0f6-41ab-9492-852692b0fd16")
-       .unwrap();
+   let r3 = dbtxn.get_account(&id_uuid).unwrap();
    assert!(r3.is_none());
-   /*
-   let r4 = dbtxn.get_account("2000").unwrap();
+   let r4 = dbtxn.get_account(&id_gid).unwrap();
    assert!(r4.is_none());
-   */

    // test adding an account
    dbtxn.update_account(&ut1, 0).unwrap();

    // test we can get it.
-   let r1 = dbtxn.get_account("testuser").unwrap();
+   let r1 = dbtxn.get_account(&id_name).unwrap();
    assert!(r1.is_some());
-   let r2 = dbtxn.get_account("testuser@example.com").unwrap();
+   let r2 = dbtxn.get_account(&id_spn).unwrap();
    assert!(r2.is_some());
-   let r3 = dbtxn
-       .get_account("0302b99c-f0f6-41ab-9492-852692b0fd16")
-       .unwrap();
+   let r3 = dbtxn.get_account(&id_uuid).unwrap();
    assert!(r3.is_some());
+   let r4 = dbtxn.get_account(&id_gid).unwrap();
+   assert!(r4.is_some());

    // test adding an account that was renamed
    ut1.name = "testuser2".to_string();
@ -306,31 +671,31 @@
    dbtxn.update_account(&ut1, 0).unwrap();

    // get the account
-   let r1 = dbtxn.get_account("testuser").unwrap();
+   let r1 = dbtxn.get_account(&id_name).unwrap();
    assert!(r1.is_none());
-   let r2 = dbtxn.get_account("testuser@example.com").unwrap();
+   let r2 = dbtxn.get_account(&id_spn).unwrap();
    assert!(r2.is_none());
-   let r1 = dbtxn.get_account("testuser2").unwrap();
+   let r1 = dbtxn.get_account(&id_name2).unwrap();
    assert!(r1.is_some());
-   let r2 = dbtxn.get_account("testuser2@example.com").unwrap();
+   let r2 = dbtxn.get_account(&id_spn2).unwrap();
    assert!(r2.is_some());
-   let r3 = dbtxn
-       .get_account("0302b99c-f0f6-41ab-9492-852692b0fd16")
-       .unwrap();
+   let r3 = dbtxn.get_account(&id_uuid).unwrap();
    assert!(r3.is_some());
+   let r4 = dbtxn.get_account(&id_gid).unwrap();
+   assert!(r4.is_some());

    // Clear cache
    assert!(dbtxn.clear_cache().is_ok());

    // should be nothing
-   let r1 = dbtxn.get_account("testuser").unwrap();
+   let r1 = dbtxn.get_account(&id_name2).unwrap();
    assert!(r1.is_none());
-   let r2 = dbtxn.get_account("testuser@example.com").unwrap();
+   let r2 = dbtxn.get_account(&id_spn2).unwrap();
    assert!(r2.is_none());
-   let r3 = dbtxn
-       .get_account("0302b99c-f0f6-41ab-9492-852692b0fd16")
-       .unwrap();
+   let r3 = dbtxn.get_account(&id_uuid).unwrap();
    assert!(r3.is_none());
+   let r4 = dbtxn.get_account(&id_gid).unwrap();
+   assert!(r4.is_none());

    assert!(dbtxn.commit().is_ok());
}
@ -342,9 +707,138 @@ mod tests {
        let dbtxn = db.write();
        assert!(dbtxn.migrate().is_ok());

-       // test finding no account
+       let mut gt1 = UnixGroupToken {
name: "testgroup".to_string(),
spn: "testgroup@example.com".to_string(),
gidnumber: 2000,
uuid: "0302b99c-f0f6-41ab-9492-852692b0fd16".to_string(),
};
let id_name = Id::Name("testgroup".to_string());
let id_name2 = Id::Name("testgroup2".to_string());
let id_spn = Id::Name("testgroup@example.com".to_string());
let id_spn2 = Id::Name("testgroup2@example.com".to_string());
let id_uuid = Id::Name("0302b99c-f0f6-41ab-9492-852692b0fd16".to_string());
let id_gid = Id::Gid(2000);
// test finding no group
let r1 = dbtxn.get_group(&id_name).unwrap();
assert!(r1.is_none());
let r2 = dbtxn.get_group(&id_spn).unwrap();
assert!(r2.is_none());
let r3 = dbtxn.get_group(&id_uuid).unwrap();
assert!(r3.is_none());
let r4 = dbtxn.get_group(&id_gid).unwrap();
assert!(r4.is_none());
// test adding a group
dbtxn.update_group(&gt1, 0).unwrap();
let r1 = dbtxn.get_group(&id_name).unwrap();
assert!(r1.is_some());
let r2 = dbtxn.get_group(&id_spn).unwrap();
assert!(r2.is_some());
let r3 = dbtxn.get_group(&id_uuid).unwrap();
assert!(r3.is_some());
let r4 = dbtxn.get_group(&id_gid).unwrap();
assert!(r4.is_some());
// add a group via update
gt1.name = "testgroup2".to_string();
gt1.spn = "testgroup2@example.com".to_string();
dbtxn.update_group(&gt1, 0).unwrap();
let r1 = dbtxn.get_group(&id_name).unwrap();
assert!(r1.is_none());
let r2 = dbtxn.get_group(&id_spn).unwrap();
assert!(r2.is_none());
let r1 = dbtxn.get_group(&id_name2).unwrap();
assert!(r1.is_some());
let r2 = dbtxn.get_group(&id_spn2).unwrap();
assert!(r2.is_some());
let r3 = dbtxn.get_group(&id_uuid).unwrap();
assert!(r3.is_some());
let r4 = dbtxn.get_group(&id_gid).unwrap();
assert!(r4.is_some());
// clear cache
assert!(dbtxn.clear_cache().is_ok());
// should be nothing.
let r1 = dbtxn.get_group(&id_name2).unwrap();
assert!(r1.is_none());
let r2 = dbtxn.get_group(&id_spn2).unwrap();
assert!(r2.is_none());
let r3 = dbtxn.get_group(&id_uuid).unwrap();
assert!(r3.is_none());
let r4 = dbtxn.get_group(&id_gid).unwrap();
assert!(r4.is_none());
assert!(dbtxn.commit().is_ok());
}
#[test]
fn test_cache_db_account_group_update() {
let _ = env_logger::builder().is_test(true).try_init();
let db = Db::new("").expect("failed to create.");
let dbtxn = db.write();
assert!(dbtxn.migrate().is_ok());
let gt1 = UnixGroupToken {
name: "testuser".to_string(),
spn: "testuser@example.com".to_string(),
gidnumber: 2000,
uuid: "0302b99c-f0f6-41ab-9492-852692b0fd16".to_string(),
};
let gt2 = UnixGroupToken {
name: "testgroup".to_string(),
spn: "testgroup@example.com".to_string(),
gidnumber: 2001,
uuid: "b500be97-8552-42a5-aca0-668bc5625705".to_string(),
};
let mut ut1 = UnixUserToken {
name: "testuser".to_string(),
spn: "testuser@example.com".to_string(),
displayname: "Test User".to_string(),
gidnumber: 2000,
uuid: "0302b99c-f0f6-41ab-9492-852692b0fd16".to_string(),
shell: None,
groups: vec![gt1.clone(), gt2],
sshkeys: vec!["key-a".to_string()],
};
// First, add the groups.
ut1.groups.iter().for_each(|g| {
dbtxn.update_group(&g, 0).unwrap();
});
// Then add the account
dbtxn.update_account(&ut1, 0).unwrap();
// Now, get the memberships of the two groups.
let m1 = dbtxn
.get_group_members("0302b99c-f0f6-41ab-9492-852692b0fd16")
.unwrap();
let m2 = dbtxn
.get_group_members("b500be97-8552-42a5-aca0-668bc5625705")
.unwrap();
assert!(m1[0].name == "testuser");
assert!(m2[0].name == "testuser");
// Now alter testuser, remove gt2, update.
ut1.groups = vec![gt1];
dbtxn.update_account(&ut1, 0).unwrap();
// Check that the memberships have updated correctly.
let m1 = dbtxn
.get_group_members("0302b99c-f0f6-41ab-9492-852692b0fd16")
.unwrap();
let m2 = dbtxn
.get_group_members("b500be97-8552-42a5-aca0-668bc5625705")
.unwrap();
assert!(m1[0].name == "testuser");
assert!(m2.len() == 0);
        assert!(dbtxn.commit().is_ok());
-       // unimplemented!();
    }
}


@ -7,6 +7,7 @@ extern crate serde_derive;
extern crate log;

pub mod cache;
+pub mod client;
pub mod constants;
pub(crate) mod db;
pub mod unix_proto;


@ -4,59 +4,12 @@ extern crate log;
use log::debug;
use structopt::StructOpt;

-use bytes::{BufMut, BytesMut};
use futures::executor::block_on;
-use futures::SinkExt;
-use futures::StreamExt;
-use std::error::Error;
-use std::io::Error as IoError;
-use std::io::ErrorKind;
-use tokio::net::UnixStream;
-use tokio_util::codec::Framed;
-use tokio_util::codec::{Decoder, Encoder};

+use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_SOCK_PATH;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
struct ClientCodec;
impl Decoder for ClientCodec {
type Item = ClientResponse;
type Error = IoError;
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
match serde_cbor::from_slice::<ClientResponse>(&src) {
Ok(msg) => {
// Clear the buffer for the next message.
src.clear();
Ok(Some(msg))
}
_ => Ok(None),
}
}
}
impl Encoder for ClientCodec {
type Item = ClientRequest;
type Error = IoError;
fn encode(&mut self, msg: ClientRequest, dst: &mut BytesMut) -> Result<(), Self::Error> {
let data = serde_cbor::to_vec(&msg).map_err(|e| {
error!("socket encoding error -> {:?}", e);
IoError::new(ErrorKind::Other, "CBOR encode error")
})?;
debug!("Attempting to send request -> {:?} ...", data);
dst.put(data.as_slice());
Ok(())
}
}
impl ClientCodec {
fn new() -> Self {
ClientCodec
}
}
#[derive(Debug, StructOpt)]
struct ClientOpt {
    #[structopt(short = "d", long = "debug")]
@ -65,26 +18,6 @@ struct ClientOpt {
    account_id: String,
}
async fn call_daemon(path: &str, req: ClientRequest) -> Result<ClientResponse, Box<dyn Error>> {
let stream = UnixStream::connect(path).await?;
let mut reqs = Framed::new(stream, ClientCodec::new());
reqs.send(req).await?;
reqs.flush().await?;
match reqs.next().await {
Some(Ok(res)) => {
debug!("Response -> {:?}", res);
Ok(res)
}
_ => {
error!("Error");
Err(Box::new(IoError::new(ErrorKind::Other, "oh no!")))
}
}
}
#[tokio::main]
async fn main() {
    let opt = ClientOpt::from_args();


@ -1,9 +1,40 @@
#[derive(Serialize, Deserialize, Debug)]
pub struct NssUser {
pub name: String,
pub gid: u32,
pub gecos: String,
pub homedir: String,
pub shell: String,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct NssGroup {
pub name: String,
pub gid: u32,
pub members: Vec<String>,
}
#[derive(Serialize, Deserialize, Debug)]
pub enum ClientRequest {
    SshKey(String),
NssAccounts,
NssAccountByUid(u32),
NssAccountByName(String),
NssGroups,
NssGroupByGid(u32),
NssGroupByName(String),
InvalidateCache,
ClearCache,
Status,
} }
#[derive(Serialize, Deserialize, Debug)] #[derive(Serialize, Deserialize, Debug)]
pub enum ClientResponse { pub enum ClientResponse {
SshKeys(Vec<String>), SshKeys(Vec<String>),
NssAccounts(Vec<NssUser>),
NssAccount(Option<NssUser>),
NssGroups(Vec<NssGroup>),
NssGroup(Option<NssGroup>),
Ok,
Error,
} }

View file

@@ -9,12 +9,13 @@ use kanidm::core::create_server_core;
 use kanidm_unix_common::cache::CacheLayer;
 use tokio::runtime::Runtime;
+use kanidm_client::asynchronous::KanidmAsyncClient;
 use kanidm_client::{KanidmClient, KanidmClientBuilder};
 static PORT_ALLOC: AtomicUsize = AtomicUsize::new(18080);
 static ADMIN_TEST_PASSWORD: &str = "integration test admin password";
-fn run_test(fix_fn: fn(KanidmClient) -> (), test_fn: fn(CacheLayer) -> ()) {
+fn run_test(fix_fn: fn(&KanidmClient) -> (), test_fn: fn(CacheLayer, KanidmAsyncClient) -> ()) {
     // ::std::env::set_var("RUST_LOG", "actix_web=debug,kanidm=debug");
     let _ = env_logger::builder().is_test(true).try_init();
     let (tx, rx) = mpsc::channel();
@@ -45,11 +46,16 @@ fn run_test(fix_fn: fn(KanidmClient) -> (), test_fn: fn(CacheLayer) -> ()) {
     let addr = format!("http://127.0.0.1:{}", port);
     // Run fixtures
-    let rsclient = KanidmClientBuilder::new()
+    let adminclient = KanidmClientBuilder::new()
         .address(addr.clone())
         .build()
         .expect("Failed to build sync client");
-    fix_fn(rsclient);
+    fix_fn(&adminclient);
+    let client = KanidmClientBuilder::new()
+        .address(addr.clone())
+        .build_async()
+        .expect("Failed to build async admin client");
     let rsclient = KanidmClientBuilder::new()
         .address(addr)
@@ -62,14 +68,14 @@ fn run_test(fix_fn: fn(KanidmClient) -> (), test_fn: fn(CacheLayer) -> ()) {
     )
     .expect("Failed to build cache layer.");
-    test_fn(cachelayer);
+    test_fn(cachelayer, client);
     // We DO NOT need teardown, as sqlite is in mem
     // let the tables hit the floor
     sys.stop();
 }
-fn test_fixture(rsclient: KanidmClient) -> () {
+fn test_fixture(rsclient: &KanidmClient) -> () {
     let res = rsclient.auth_simple_password("admin", ADMIN_TEST_PASSWORD);
     assert!(res.is_ok());
     // Not recommended in production!
@@ -104,7 +110,7 @@ fn test_fixture(rsclient: KanidmClient) -> () {
 #[test]
 fn test_cache_sshkey() {
-    run_test(test_fixture, |cachelayer| {
+    run_test(test_fixture, |cachelayer, _adminclient| {
         let mut rt = Runtime::new().expect("Failed to start tokio");
         let fut = async move {
             // Force offline. Show we have no keys.
@@ -137,3 +143,201 @@ fn test_cache_sshkey() {
         rt.block_on(fut);
     })
 }
+
+#[test]
+fn test_cache_account() {
+    run_test(test_fixture, |cachelayer, _adminclient| {
+        let mut rt = Runtime::new().expect("Failed to start tokio");
+        let fut = async move {
+            // Force offline. Show we have no account
+            cachelayer.mark_offline().await;
+            let ut = cachelayer
+                .get_nssaccount_name("testaccount1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(ut.is_none());
+            // go online
+            cachelayer.attempt_online().await;
+            assert!(cachelayer.test_connection().await);
+            // get the account
+            let ut = cachelayer
+                .get_nssaccount_name("testaccount1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(ut.is_some());
+            // go offline
+            cachelayer.mark_offline().await;
+            // can still get account
+            let ut = cachelayer
+                .get_nssaccount_name("testaccount1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(ut.is_some());
+            // Finally, check we have "all accounts" in the list.
+            let us = cachelayer
+                .get_nssaccounts()
+                .expect("failed to list all accounts");
+            assert!(us.len() == 1);
+        };
+        rt.block_on(fut);
+    })
+}
+
+#[test]
+fn test_cache_group() {
+    run_test(test_fixture, |cachelayer, _adminclient| {
+        let mut rt = Runtime::new().expect("Failed to start tokio");
+        let fut = async move {
+            // Force offline. Show we have no groups.
+            cachelayer.mark_offline().await;
+            let gt = cachelayer
+                .get_nssgroup_name("testgroup1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(gt.is_none());
+            // go online. Get the group
+            cachelayer.attempt_online().await;
+            assert!(cachelayer.test_connection().await);
+            let gt = cachelayer
+                .get_nssgroup_name("testgroup1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(gt.is_some());
+            // go offline. still works
+            cachelayer.mark_offline().await;
+            let gt = cachelayer
+                .get_nssgroup_name("testgroup1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(gt.is_some());
+            // And check we have no members in the group. Members are an artifact of
+            // user lookups!
+            assert!(gt.unwrap().members.len() == 0);
+            // clear cache, go online
+            assert!(cachelayer.invalidate().is_ok());
+            cachelayer.attempt_online().await;
+            assert!(cachelayer.test_connection().await);
+            // get an account with the group
+            // DO NOT get the group yet.
+            let ut = cachelayer
+                .get_nssaccount_name("testaccount1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(ut.is_some());
+            // go offline.
+            cachelayer.mark_offline().await;
+            // show we have the group despite no direct calls
+            let gt = cachelayer
+                .get_nssgroup_name("testgroup1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(gt.is_some());
+            // And check we have members in the group, since we came from a userlook up
+            assert!(gt.unwrap().members.len() == 1);
+            // Finally, check we have "all groups" in the list.
+            let gs = cachelayer
+                .get_nssgroups()
+                .expect("failed to list all groups");
+            assert!(gs.len() == 2);
+        };
+        rt.block_on(fut);
+    })
+}
+
+#[test]
+fn test_cache_group_delete() {
+    run_test(test_fixture, |cachelayer, adminclient| {
+        let mut rt = Runtime::new().expect("Failed to start tokio");
+        let fut = async move {
+            // get the group
+            cachelayer.attempt_online().await;
+            assert!(cachelayer.test_connection().await);
+            let gt = cachelayer
+                .get_nssgroup_name("testgroup1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(gt.is_some());
+            // delete it.
+            adminclient
+                .auth_simple_password("admin", ADMIN_TEST_PASSWORD)
+                .await
+                .expect("failed to auth as admin");
+            adminclient
+                .idm_group_delete("testgroup1")
+                .await
+                .expect("failed to delete");
+            // invalidate cache
+            assert!(cachelayer.invalidate().is_ok());
+            // "get it"
+            // should be empty.
+            let gt = cachelayer
+                .get_nssgroup_name("testgroup1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(gt.is_none());
+        };
+        rt.block_on(fut);
+    })
+}
+
+#[test]
+fn test_cache_account_delete() {
+    run_test(test_fixture, |cachelayer, adminclient| {
+        let mut rt = Runtime::new().expect("Failed to start tokio");
+        let fut = async move {
+            // get the account
+            cachelayer.attempt_online().await;
+            assert!(cachelayer.test_connection().await);
+            let ut = cachelayer
+                .get_nssaccount_name("testaccount1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(ut.is_some());
+            // delete it.
+            adminclient
+                .auth_simple_password("admin", ADMIN_TEST_PASSWORD)
+                .await
+                .expect("failed to auth as admin");
+            adminclient
+                .idm_account_delete("testaccount1")
+                .await
+                .expect("failed to delete");
+            // invalidate cache
+            assert!(cachelayer.invalidate().is_ok());
+            // "get it"
+            let ut = cachelayer
+                .get_nssaccount_name("testaccount1")
+                .await
+                .expect("Failed to get from cache");
+            // should be empty.
+            assert!(ut.is_none());
+            // The group should be removed too.
+            let gt = cachelayer
+                .get_nssgroup_name("testaccount1")
+                .await
+                .expect("Failed to get from cache");
+            assert!(gt.is_none());
+        };
+        rt.block_on(fut);
+    })
+}

View file

@@ -2,7 +2,7 @@
 [package]
 name = "kanidm"
-version = "0.1.0"
+version = "0.1.1"
 authors = ["William Brown <william@blackhats.net.au>"]
 # default-run = "kanidm_core"
 edition = "2018"
View file

@@ -5,7 +5,7 @@ pub mod system_config;
 pub use crate::constants::system_config::JSON_SYSTEM_CONFIG_V1;
 // Increment this as we add new schema types and values!!!
-pub static SYSTEM_INDEX_VERSION: i64 = 3;
+pub static SYSTEM_INDEX_VERSION: i64 = 4;
 // On test builds, define to 60 seconds
 #[cfg(test)]
 pub static PURGE_TIMEOUT: u64 = 60;

View file

@@ -265,8 +265,8 @@ impl IdmServerProxyReadTransaction {
                 .impersonate_search_ext_uuid(au, &uute.target, &uute.event)
         );
-        let account = try_audit!(au, UnixGroup::try_from_entry_reduced(account_entry));
-        account.to_unixgrouptoken()
+        let group = try_audit!(au, UnixGroup::try_from_entry_reduced(account_entry));
+        group.to_unixgrouptoken()
     }
 }
@@ -917,8 +917,21 @@ mod tests {
             assert!(tok_r.name == "admin");
             assert!(tok_r.spn == "admin@example.com");
-            assert!(tok_r.groups.len() == 1);
-            assert!(tok_r.groups[0].name == "testgroup");
+            assert!(tok_r.groups.len() == 2);
+            assert!(tok_r.groups[0].name == "admin");
+            assert!(tok_r.groups[1].name == "testgroup");
+
+            // Show we can get the admin as a unix group token too
+            let ugte = UnixGroupTokenEvent::new_internal(
+                Uuid::parse_str("00000000-0000-0000-0000-000000000000")
+                    .expect("failed to parse uuid"),
+            );
+            let tok_g = idms_prox_read
+                .get_unixgrouptoken(au, &ugte)
+                .expect("Failed to generate unix group token");
+            assert!(tok_g.name == "admin");
+            assert!(tok_g.spn == "admin@example.com");
         })
     }
 }

View file

@@ -7,6 +7,8 @@ use crate::value::PartialValue;
 use kanidm_proto::v1::OperationError;
 use kanidm_proto::v1::{UnixGroupToken, UnixUserToken};
+use std::iter;
 #[derive(Debug, Clone)]
 pub(crate) struct UnixUserAccount {
     pub name: String,
@@ -110,15 +112,15 @@ pub(crate) struct UnixGroup {
 macro_rules! try_from_group_e {
     ($value:expr) => {{
-        if !$value.attribute_value_pres("class", &PVCLASS_GROUP) {
-            return Err(OperationError::InvalidAccountState(
-                "Missing class: group".to_string(),
-            ));
-        }
-        if !$value.attribute_value_pres("class", &PVCLASS_POSIXGROUP) {
+        // We could be looking at a user for their UPG, OR a true group.
+        if !(($value.attribute_value_pres("class", &PVCLASS_ACCOUNT)
+            && $value.attribute_value_pres("class", &PVCLASS_POSIXACCOUNT))
+            || ($value.attribute_value_pres("class", &PVCLASS_GROUP)
+                && $value.attribute_value_pres("class", &PVCLASS_POSIXGROUP)))
+        {
             return Err(OperationError::InvalidAccountState(
-                "Missing class: posixgroup".to_string(),
+                "Missing class: account && posixaccount OR group && posixgroup".to_string(),
             ));
         }
@@ -154,6 +156,46 @@ impl UnixGroup {
         value: &Entry<EntryReduced, EntryCommitted>,
         qs: &QueryServerReadTransaction,
     ) -> Result<Vec<Self>, OperationError> {
+        // First synthesise the self-group from the account.
+        // We have already checked these, but paranoia is better than
+        // complacency.
+        if !value.attribute_value_pres("class", &PVCLASS_ACCOUNT) {
+            return Err(OperationError::InvalidAccountState(
+                "Missing class: account".to_string(),
+            ));
+        }
+        if !value.attribute_value_pres("class", &PVCLASS_POSIXACCOUNT) {
+            return Err(OperationError::InvalidAccountState(
+                "Missing class: posixaccount".to_string(),
+            ));
+        }
+        let name = value.get_ava_single_string("name").ok_or_else(|| {
+            OperationError::InvalidAccountState("Missing attribute: name".to_string())
+        })?;
+        let spn = value
+            .get_ava_single("spn")
+            .map(|v| v.to_proto_string_clone())
+            .ok_or_else(|| {
+                OperationError::InvalidAccountState("Missing attribute: spn".to_string())
+            })?;
+        let uuid = *value.get_uuid();
+        let gidnumber = value.get_ava_single_uint32("gidnumber").ok_or_else(|| {
+            OperationError::InvalidAccountState("Missing attribute: gidnumber".to_string())
+        })?;
+        // This is the user private group.
+        let upg = UnixGroup {
+            name,
+            spn,
+            gidnumber,
+            uuid,
+        };
+
         match value.get_ava_reference_uuid("memberof") {
             Some(l) => {
                 let f = filter!(f_and!([
@@ -166,13 +208,14 @@
                     )
                 ]));
                 let ges: Vec<_> = try_audit!(au, qs.internal_search(au, f));
-                let groups: Result<Vec<_>, _> =
-                    ges.into_iter().map(UnixGroup::try_from_entry).collect();
+                let groups: Result<Vec<_>, _> = iter::once(Ok(upg))
+                    .chain(ges.into_iter().map(UnixGroup::try_from_entry))
+                    .collect();
                 groups
             }
             None => {
                 // No memberof, no groups!
-                Ok(Vec::new())
+                Ok(vec![upg])
             }
         }
     }