Compare commits

...

25 commits

Author SHA1 Message Date
William Brown 2369b1a755 Improve scim querying 2025-02-22 13:00:28 +10:00
ToxicMushroom 892c013613 Some progress on admin ui for managing groups and users 2025-02-22 13:00:20 +10:00
Sebastiano Tocci 9611a7f976
Fixes : add configurable maximum queryable attributes for LDAP () 2025-02-21 12:14:47 +10:00
sinavir f40679cd52
Accept invalid certs and fix token_cache_path ()
* Add accept-invalid-certs option for cli
* Fix token_cache_path behavior

---------

Co-authored-by: sinavir <sinavir@sinavir.fr>
2025-02-20 08:07:48 +00:00
Firstyear 52824b58f1
Accept lowercase ldap pwd hashes () 2025-02-20 04:34:27 +00:00
CEbbinghaus 848af4cecd
TOTP label verification ()
* Adding TOTP Label verification (for both empty and duplicate)
2025-02-19 06:54:50 +00:00
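To illustrate the check that change describes, here is a minimal, hypothetical sketch of validating a TOTP label against empty and duplicate values; the function name and types are illustrative, not Kanidm's internal API.

```rust
use std::collections::BTreeSet;

/// Hypothetical sketch: a new TOTP label must be non-empty and not already
/// in use on the credential. Names here are illustrative only.
fn validate_totp_label(label: &str, existing: &BTreeSet<String>) -> Result<(), &'static str> {
    let label = label.trim();
    if label.is_empty() {
        return Err("TOTP label must not be empty");
    }
    if existing.contains(label) {
        return Err("a TOTP with this label already exists");
    }
    Ok(())
}

fn main() {
    let existing: BTreeSet<String> = ["laptop".to_string()].into_iter().collect();
    assert!(validate_totp_label("phone", &existing).is_ok());
    assert!(validate_totp_label("", &existing).is_err());
    assert!(validate_totp_label("laptop", &existing).is_err());
}
```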
micolous de506a5f53
Rewrite WebFinger docs () 2025-02-19 12:26:15 +10:00
micolous 7f3b1f2580
doc: fix formatting of URL table, remove Caddyfile instructions ()
There are many web servers, and this breaks the flow of the rest of the table.
2025-02-19 11:18:58 +10:00
Alex Martens 9bf17c4846
book: add OAuth2 Proxy example () 2025-02-16 05:14:47 +00:00
Firstyear ed88b72080
Exempt idm_admin and admin from denied names. ()
idm_admin and admin should be exempted from the denied names process,
as these values will already be denied due to attribute uniqueness.
Additionally, the denied names check now only validates the name when it
is being changed, not on every modification. This way entries whose names
become denied can get themselves out of the pickle.
2025-02-15 22:45:25 +00:00
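A minimal, hypothetical sketch of the rule described above: the denied-names check only fires when the name is actually being changed, and built-in accounts are exempt. The function and types are illustrative, not Kanidm's real entry API.

```rust
/// Hypothetical sketch of the denied-names rule. Returns true if the
/// proposed change should be rejected.
fn name_change_denied(
    current_name: &str,
    proposed_name: &str,
    denied_names: &[&str],
    exempt_names: &[&str],
) -> bool {
    // A modification that does not touch `name` is never rejected, so
    // entries whose names have become denied can still be edited.
    if current_name == proposed_name {
        return false;
    }
    // Built-ins such as idm_admin/admin are already protected by
    // attribute uniqueness, so they are exempt here.
    if exempt_names.contains(&proposed_name) {
        return false;
    }
    denied_names.contains(&proposed_name)
}

fn main() {
    let denied = ["root"];
    let exempt = ["admin", "idm_admin"];
    // Renaming to a denied name is rejected...
    assert!(name_change_denied("alice", "root", &denied, &exempt));
    // ...but modifying an entry without renaming it is not.
    assert!(!name_change_denied("root", "root", &denied, &exempt));
}
```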
Firstyear d0b0b163fd
Book fixes () 2025-02-15 16:01:44 +10:00
Jade Ellis ce410f440c
ci: uniform Docker builds () 2025-02-14 10:25:04 +00:00
Firstyear 77271c1720
20240213 3413 domain displayname ()
Remove older migrations and make domain displayname optional.
2025-02-14 10:52:49 +10:00
Justin Warren e838da9a08
Correct path to kanidm config example in documentation. () 2025-02-13 01:31:38 +00:00
Firstyear 94b7285cbb
Support redirect uris with query parameters ()
RFC 6749 once again reminds us that given the room to do silly
things, RFC authors absolutely will. In this case, it's query
parameters in redirection URIs, which are absolutely horrifying,
and yet here we are.

We strictly match the query pairs during the redirection to
ensure that if a query pair could allow open redirection, we
prevent it.
2025-02-13 01:03:15 +00:00
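To illustrate the strict matching described above, here is a minimal sketch using the `url` crate (not Kanidm's actual matching code): a presented redirect URI must match the registered one on scheme, host, port, path, and the exact set of query pairs.

```rust
use url::Url;

/// Sketch only: strict redirect_uri comparison, including query pairs.
fn redirect_uri_matches(registered: &Url, presented: &Url) -> bool {
    let mut reg_pairs: Vec<(String, String)> = registered
        .query_pairs()
        .map(|(k, v)| (k.into_owned(), v.into_owned()))
        .collect();
    let mut pre_pairs: Vec<(String, String)> = presented
        .query_pairs()
        .map(|(k, v)| (k.into_owned(), v.into_owned()))
        .collect();
    // Order-insensitive, but the *set* of pairs must be identical.
    reg_pairs.sort();
    pre_pairs.sort();

    registered.scheme() == presented.scheme()
        && registered.host_str() == presented.host_str()
        && registered.port_or_known_default() == presented.port_or_known_default()
        && registered.path() == presented.path()
        && reg_pairs == pre_pairs
}

fn main() {
    let registered = Url::parse("https://app.example.com/cb?tenant=a").unwrap();
    let ok = Url::parse("https://app.example.com/cb?tenant=a").unwrap();
    // An extra or altered query pair is rejected, preventing open redirects.
    let bad = Url::parse("https://app.example.com/cb?tenant=a&next=https://evil.example").unwrap();
    assert!(redirect_uri_matches(&registered, &ok));
    assert!(!redirect_uri_matches(&registered, &bad));
}
```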
Firstyear af6f55b1fe
Update to 1.6.0-dev () 2025-02-11 07:26:07 +00:00
George Wu 211e7d4e89
Remove white background from square logo. () 2025-02-11 14:41:55 +10:00
CEbbinghaus ccde675cd2
feat: Added webfinger implementation ()
Adds WebFinger endpoints to every OAuth2 client

Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
2025-02-10 06:10:12 +00:00
dependabot[bot] b96fe49b99
Bump the all group in /pykanidm with 7 updates ()
Bumps the all group in /pykanidm with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [aiohttp](https://github.com/aio-libs/aiohttp) | `3.11.11` | `3.11.12` |
| [ruff](https://github.com/astral-sh/ruff) | `0.9.4` | `0.9.5` |
| [mypy](https://github.com/python/mypy) | `1.14.1` | `1.15.0` |
| [coverage](https://github.com/nedbat/coveragepy) | `7.6.10` | `7.6.11` |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | `9.6.1` | `9.6.3` |
| [mkdocstrings](https://github.com/mkdocstrings/mkdocstrings) | `0.27.0` | `0.28.0` |
| [mkdocstrings-python](https://github.com/mkdocstrings/python) | `1.13.0` | `1.14.6` |


Updates `aiohttp` from 3.11.11 to 3.11.12
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.11.11...v3.11.12)

Updates `ruff` from 0.9.4 to 0.9.5
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.9.4...0.9.5)

Updates `mypy` from 1.14.1 to 1.15.0
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.14.1...v1.15.0)

Updates `coverage` from 7.6.10 to 7.6.11
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.6.10...7.6.11)

Updates `mkdocs-material` from 9.6.1 to 9.6.3
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.1...9.6.3)

Updates `mkdocstrings` from 0.27.0 to 0.28.0
- [Release notes](https://github.com/mkdocstrings/mkdocstrings/releases)
- [Changelog](https://github.com/mkdocstrings/mkdocstrings/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/mkdocstrings/compare/0.27.0...0.28.0)

Updates `mkdocstrings-python` from 1.13.0 to 1.14.6
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.13.0...1.14.6)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mypy
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: coverage
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-10 07:19:22 +10:00
James Hodgkinson c89f0c011e
20250209 pre release ()
* fix: removing unused dependencies (assert_cmd, gethostname)
* chore: Release Notes
2025-02-09 10:06:01 +00:00
Firstyear b15ff89b39
20250206 freebsd ports ()
* Remove unneeded files
* Ensure we configure the client config for FreeBSD
* Improve shell handling
* Use FreeBSD-compatible NSS
2025-02-09 08:57:15 +00:00
Firstyear 1f5ce2617d
Resolve kanidm-unix auth-test bug ()
* Resolve kanidm-unix auth-test bug

When reworking the unix daemon, we missed changing the auth-test
tool to handle the new challenge-response flow correctly, which
would cause the session to disconnect.

* Cleanup
2025-02-09 02:49:54 +00:00
CEbbinghaus f68906bf1b
chore: Remove empty scopemaps () 2025-02-09 11:19:52 +10:00
CEbbinghaus 7a9bb9eac2
Feat: Allowing SPN query with non-SPN structured data in LDAP ()
* Added Botch for fixing spn query

* Got Invalid filter working. spn can now be searched on

* Addressed review comments

* Resolved Invalid filter correctly for no index

* Cleaned comments and added tests (still 1 failing)

* Added comments and fixed unit test

* Formatting

* Made Clippy Happy
2025-02-08 06:37:28 +00:00
Wei Jian Gan 0ce1bbeddc
SSH Keys in Credentials Update () 2025-02-08 11:54:41 +10:00
120 changed files with 3662 additions and 3909 deletions
.github/workflows
Cargo.lock
Cargo.toml
README.md
RELEASE_NOTES.md
book/src
examples
libs
platform/freebsd/client
proto
pykanidm
server

View file

@ -35,9 +35,15 @@ jobs:
needs:
- set_tag_values
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker metadata
id: meta
uses: docker/metadata-action@v5
- name: Build kanidm
uses: docker/build-push-action@v6
with:
@ -47,6 +53,9 @@ jobs:
build-args: |
"KANIDM_FEATURES="
file: tools/Dockerfile
context: .
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
# Must use OCI exporter for multi-arch: https://github.com/docker/buildx/pull/1813
outputs: type=oci,dest=/tmp/kanidm-docker.tar
- name: Upload artifact
@ -60,8 +69,8 @@ jobs:
# This step is split so that we don't apply "packages: write" permission
# except when uploading the final Docker image to GHCR.
runs-on: ubuntu-latest
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' ) && github.repository == 'kanidm/kanidm'
needs: kanidm_build
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' )
needs: [kanidm_build, set_tag_values]
permissions:
packages: write
@ -78,4 +87,4 @@ jobs:
echo "${{ secrets.GITHUB_TOKEN }}" | \
oras login -u "${{ github.actor }}" --password-stdin ghcr.io
oras copy --from-oci-layout "/tmp/kanidm-docker.tar:devel" \
"ghcr.io/${{ github.repository_owner }}/kanidm:devel"
"ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/kanidm:devel"

View file

@ -35,27 +35,15 @@ jobs:
runs-on: ubuntu-latest
needs: set_tag_values
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker metadata
id: meta
uses: docker/metadata-action@v5
with:
# list of Docker images to use as base name for tags
# images: |
# kanidm/kanidmd
# ghcr.io/username/app
# generate Docker tags based on the following events/attributes
tags: |
type=schedule
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
- name: Build kanidmd
uses: docker/build-push-action@v6
with:
@ -64,6 +52,9 @@ jobs:
# build-args: |
# "KANIDM_BUILD_OPTIONS=-j1"
file: server/Dockerfile
context: .
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
# Must use OCI exporter for multi-arch: https://github.com/docker/buildx/pull/1813
outputs: type=oci,dest=/tmp/kanidmd-docker.tar
- name: Upload artifact
@ -77,8 +68,8 @@ jobs:
# This step is split so that we don't apply "packages: write" permission
# except when uploading the final Docker image to GHCR.
runs-on: ubuntu-latest
if: ( github.ref_type== 'tag' || github.ref == 'refs/heads/master' ) && github.repository == 'kanidm/kanidm'
needs: kanidmd_build
if: ( github.ref_type== 'tag' || github.ref == 'refs/heads/master' )
needs: [kanidmd_build, set_tag_values]
permissions:
packages: write
@ -95,4 +86,4 @@ jobs:
echo "${{ secrets.GITHUB_TOKEN }}" | \
oras login -u "${{ github.actor }}" --password-stdin ghcr.io
oras copy --from-oci-layout "/tmp/kanidmd-docker.tar:devel" \
"ghcr.io/${{ github.repository_owner }}/kanidmd:devel"
"ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/kanidmd:devel"

View file

@ -35,17 +35,26 @@ jobs:
runs-on: ubuntu-latest
needs: set_tag_values
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker metadata
id: meta
uses: docker/metadata-action@v5
- name: Build radius
uses: docker/build-push-action@v6
with:
platforms: linux/arm64,linux/amd64
tags: ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/radius:devel,ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/radius:${{ needs.set_tag_values.outputs.ref_name}}
file: rlm_python/Dockerfile
context: .
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
# Must use OCI exporter for multi-arch: https://github.com/docker/buildx/pull/1813
outputs: type=oci,dest=/tmp/radius-docker.tar
- name: Upload artifact
@ -59,8 +68,8 @@ jobs:
# This step is split so that we don't apply "packages: write" permission
# except when uploading the final Docker image to GHCR.
runs-on: ubuntu-latest
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' ) && github.repository == 'kanidm/kanidm'
needs: radius_build
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' )
needs: [radius_build, set_tag_values]
permissions:
packages: write
@ -79,4 +88,4 @@ jobs:
echo "${{ secrets.GITHUB_TOKEN }}" | \
oras login -u "${{ github.actor }}" --password-stdin ghcr.io
oras copy --from-oci-layout "/tmp/radius-docker.tar:devel" \
"ghcr.io/${{ github.repository_owner }}/radius:devel"
"ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/radius:devel"

559
Cargo.lock generated

File diff suppressed because it is too large

View file

@ -1,10 +1,10 @@
[workspace.package]
version = "1.5.0-dev"
version = "1.6.0-dev"
authors = [
"William Brown <william@blackhats.net.au>",
"James Hodgkinson <james@terminaloutcomes.com>",
]
rust-version = "1.79"
rust-version = "1.80"
edition = "2021"
license = "MPL-2.0"
homepage = "https://github.com/kanidm/kanidm/"
@ -120,21 +120,23 @@ codegen-units = 256
# kanidm-hsm-crypto = { path = "../hsm-crypto" }
libnss = { git = "https://github.com/Firstyear/libnss-rs.git", branch = "20250207-freebsd" }
[workspace.dependencies]
kanidmd_core = { path = "./server/core", version = "=1.5.0-dev" }
kanidmd_lib = { path = "./server/lib", version = "=1.5.0-dev" }
kanidmd_lib_macros = { path = "./server/lib-macros", version = "=1.5.0-dev" }
kanidmd_testkit = { path = "./server/testkit", version = "=1.5.0-dev" }
kanidm_build_profiles = { path = "./libs/profiles", version = "=1.5.0-dev" }
kanidm_client = { path = "./libs/client", version = "=1.5.0-dev" }
kanidmd_core = { path = "./server/core", version = "=1.6.0-dev" }
kanidmd_lib = { path = "./server/lib", version = "=1.6.0-dev" }
kanidmd_lib_macros = { path = "./server/lib-macros", version = "=1.6.0-dev" }
kanidmd_testkit = { path = "./server/testkit", version = "=1.6.0-dev" }
kanidm_build_profiles = { path = "./libs/profiles", version = "=1.6.0-dev" }
kanidm_client = { path = "./libs/client", version = "=1.6.0-dev" }
kanidm-hsm-crypto = "^0.2.0"
kanidm_lib_crypto = { path = "./libs/crypto", version = "=1.5.0-dev" }
kanidm_lib_file_permissions = { path = "./libs/file_permissions", version = "=1.5.0-dev" }
kanidm_proto = { path = "./proto", version = "=1.5.0-dev" }
kanidm_unix_common = { path = "./unix_integration/common", version = "=1.5.0-dev" }
kanidm_utils_users = { path = "./libs/users", version = "=1.5.0-dev" }
scim_proto = { path = "./libs/scim_proto", version = "=1.5.0-dev" }
sketching = { path = "./libs/sketching", version = "=1.5.0-dev" }
kanidm_lib_crypto = { path = "./libs/crypto", version = "=1.6.0-dev" }
kanidm_lib_file_permissions = { path = "./libs/file_permissions", version = "=1.6.0-dev" }
kanidm_proto = { path = "./proto", version = "=1.6.0-dev" }
kanidm_unix_common = { path = "./unix_integration/common", version = "=1.6.0-dev" }
kanidm_utils_users = { path = "./libs/users", version = "=1.6.0-dev" }
scim_proto = { path = "./libs/scim_proto", version = "=1.6.0-dev" }
sketching = { path = "./libs/sketching", version = "=1.6.0-dev" }
anyhow = { version = "1.0.95" }
argon2 = { version = "0.5.3", features = ["alloc"] }
@ -163,15 +165,15 @@ clap_complete = "^4.5.42"
chrono = "^0.4.39"
compact_jwt = { version = "^0.4.2", default-features = false }
concread = "^0.5.3"
cron = "0.12.1"
cron = "0.15.0"
crossbeam = "0.8.4"
csv = "1.3.1"
dialoguer = "0.10.4"
dialoguer = "0.11.0"
dhat = "0.3.3"
dyn-clone = "^1.0.17"
fernet = "^0.2.1"
filetime = "^0.2.24"
fs4 = "^0.8.3"
fs4 = "^0.12.0"
futures = "^0.3.31"
futures-util = { version = "^0.3.30", features = ["sink"] }
gix = { version = "0.64.0", default-features = false }
@ -223,7 +225,6 @@ opentelemetry-semantic-conventions = "0.27.0"
tracing-opentelemetry = "0.28.0"
tracing-core = "0.1.33"
paste = "^1.0.14"
peg = "0.8"
pkg-config = "^0.3.31"
prctl = "1.0.0"

View file

@ -1,8 +1,6 @@
# Kanidm - Simple and Secure Identity Management
<p align="center">
<img src="https://raw.githubusercontent.com/kanidm/kanidm/master/artwork/logo-small.png" width="20%" height="auto" />
</p>
![Kanidm Logo](artwork/logo-small.png)
## About

View file

@ -1,42 +1,93 @@
<p align="center">
<img src="https://raw.githubusercontent.com/kanidm/kanidm/master/artwork/logo-small.png" width="20%" height="auto" />
</p>
# Kanidm Release Notes
# Getting Started
![Kanidm Logo](artwork/logo-small.png)
## Getting Started
To get started, see the [kanidm book]
# Feedback
## Feedback
We value your feedback! First, please see our [code of conduct]. If you have questions please join
our [gitter community channel] so that we can help. If you find a bug or issue, we'd love you to
report it to our [issue tracker].
# Release Notes
## Release Notes
## 2024-11-01 - Kanidm 1.4.0
### 2025-02-09 - Kanidm 1.5.0
This is the latest stable release of the Kanidm Identity Management project. Every release is the
combined effort of our community and we appreciate their invaluable contributions, comments,
questions, feedback and support.
You should review our
[support documentation](https://github.com/kanidm/kanidm/blob/master/book/src/support.md) as this
[support documentation] as this
may have important effects on your distribution or upgrades in future.
Before upgrading you should review
[our upgrade documentation](https://github.com/kanidm/kanidm/blob/master/book/src/server_updates.md#general-update-notes)
[our upgrade documentation]
### 1.4.0 Important Changes
#### 1.5.0 Important Changes
- There have been a lot of tweaks to how cookies are handled in this release; if you're having issues with the login flow, please clear all cookies as an initial troubleshooting step.
#### 1.5.0 Release Highlights
- Many updates to the UI!
- SSH Keys in Credentials Update (#3027)
- Improved error message when PassKey is missing PIN (mainly for Firefox) (#3403)
- Fix the password reset form and possible resolver issue (#3398)
- Fixed unrecoverable error page doesn't include logo or domain name (#3352)
- Add support for prefers-color-scheme using Bootstrap classes. Dark mode! (#3327)
- Automatically trigger passkeys on login view (#3307)
- Two new operating systems!
- Initial OpenBSD support (#3381)
- FreeBSD client (#3333)
- Many SCIM-related improvements
- SCIM access control (#3359)
- SCIM put (#3151)
- OAuth2 Things
- Allow OAuth2 with empty `state` parameter (#3396)
- Allow POST on oauth userinfo (#3395)
- Add OAuth2 `response_mode=fragment` (#3335)
- Add CORS headers to jwks and userinfo (#3283)
- Allowing SPN query with non-SPN structured data in LDAP (#3400)
- Correctly return that uuid2spn changed on domain rename (#3402)
- RADIUS startup fixing (#3388)
- Repaired systemd reload notifications (#3355)
- Add `ssh_publickeys` as a claim for OAuth2 (#3346)
- Allow modification of password minimum length (#3345)
- PAM on Debian, enable use_first_pass by default (#3326)
- Allow opt-in of easter eggs (#3308)
- Allow resetting account policy values to defaults (#3306)
- Ignore system users for UPG synthesisation (#3297)
- Allow group managers to modify entry-managed-by (#3272)
And many more!
### 2024-11-01 - Kanidm 1.4.0
This is the latest stable release of the Kanidm Identity Management project. Every release is the
combined effort of our community and we appreciate their invaluable contributions, comments,
questions, feedback and support.
You should review our
[support documentation] as this
may have important effects on your distribution or upgrades in future.
Before upgrading you should review
[our upgrade documentation]
#### 1.4.0 Important Changes
- The web user interface has been rewritten and now supports theming. You will notice that your
domain displayname is included in a number of locations on upgrade, and that you can set
your own domain and OAuth2 client icons.
- OAuth2 strict redirect uri is now required. Ensure you have read
[our upgrade documentation](https://github.com/kanidm/kanidm/blob/master/book/src/server_updates.md#general-update-notes).
[our upgrade documentation]
and taken the needed steps before upgrading.
### 1.4.0 Release Highlights
#### 1.4.0 Release Highlights
- Improve handling of client timeouts when the server is under high load
- Resolve a minor issue preventing some credential updates from saving
@ -65,20 +116,20 @@ and taken the needed steps before upgrading.
- Rewrite the entire web frontend to be simpler and faster, allowing more features to be added
in the future. Greatly improves user experience as the pages are now very fast to load!
## 2024-08-07 - Kanidm 1.3.0
### 2024-08-07 - Kanidm 1.3.0
This is the latest stable release of the Kanidm Identity Management project. Every release is the
combined effort of our community and we appreciate their invaluable contributions, comments,
questions, feedback and support.
You should review our
[support documentation](https://github.com/kanidm/kanidm/blob/master/book/src/support.md) as this
[support documentation] as this
may have important effects on your distribution or upgrades in future.
Before upgrading you should review
[our upgrade documentation](https://github.com/kanidm/kanidm/blob/master/book/src/server_updates.md#general-update-notes)
[our upgrade documentation]
### 1.3.0 Important Changes
#### 1.3.0 Important Changes
- New GID number constraints are now enforced in this version. To upgrade from 1.2.0 all accounts
and groups must adhere to these rules. See [our upgrade documentation] about tools to help you
@ -89,7 +140,7 @@ Before upgrading you should review
by PassKeys which give a better user experience.
- Kanidm now supports FreeBSD and Illumos in addition to Linux
### 1.3.0 Release Highlights
#### 1.3.0 Release Highlights
- TOTP update user interface improvements
- Improved error messages when a load balancer is failing
@ -112,24 +163,24 @@ Before upgrading you should review
- Strict redirect URI enforcement in OAuth2
- Substring indexing for improved search performance
## 2024-05-01 - Kanidm 1.2.0
### 2024-05-01 - Kanidm 1.2.0
This is the first stable release of the Kanidm Identity Management project. We want to thank everyone
in our community who has supported the project to this point with their invaluable
contributions, comments, questions, feedback and support.
Importantly this release makes a number of changes to our project's support processes. You should
review our [support documentation](https://github.com/kanidm/kanidm/blob/master/book/src/support.md)
review our [support documentation]
as this may have important effects on your distribution or upgrades in future.
### 1.2.0 Important Changes
#### 1.2.0 Important Changes
- On upgrade all OAuth2 sessions and user sessions will be reset due to changes in cryptographic key
handling. This does not affect api tokens.
- There is a maximum limit of 48 interactive sessions for persons where older sessions are
automatically removed.
### 1.2.0 Release Highlights
#### 1.2.0 Release Highlights
- The book now contains a list of supported RFCs and standards
- Add code challenge methods to OIDC discovery
@ -154,7 +205,7 @@ as this may have important effects on your distribution or upgrades in future.
- Migrate cryptographic key handling to an object model with future HSM support
- Limit maximum active sessions on an account to 48
## 2024-02-07 - Kanidm 1.1.0-rc.16
### 2024-02-07 - Kanidm 1.1.0-rc.16
This is the sixteenth pre-release of the Kanidm Identity Management project. Pre-releases are to
help get feedback and ideas from the community on how we can continue to make this project better.
@ -163,7 +214,7 @@ This is the final release candidate before we publish a release version. We beli
server interfaces are stable and reliable enough for people to depend on, and to develop external
tools to interact with Kanidm.
### 1.1.0-rc.16 Release Highlights
#### 1.1.0-rc.16 Release Highlights
- Replication for two node environments is now supported
- Account policy supports password minimum length
@ -182,7 +233,7 @@ tools to interact with Kanidm.
- Support RFC6749 Client Credentials Grant
- Support custom claim maps in OIDC
## 2023-10-31 - Kanidm 1.1.0-beta14
### 2023-10-31 - Kanidm 1.1.0-beta14
This is the fourteenth pre-release of the Kanidm Identity Management project. Pre-releases are to
help get feedback and ideas from the community on how we can continue to make this project better.
@ -191,7 +242,7 @@ At this point we believe we are on the final stretch to making something we cons
ready". After this we will start to ship release candidates as our focus will now be changing to
finish our production components and the stability of the API's for longer term support.
### 1.1.0-beta14 Release Highlights
#### 1.1.0-beta14 Release Highlights
- Replication is in Beta! Please test carefully!
- Web UI WASM has been split up, significantly improving the responsiveness.
@ -205,7 +256,7 @@ finish our production components and the stability of the API's for longer term
- Removed a lot of uses of `unwrap` and `expect` to improve reliability.
- Account policy framework is now in place.
## 2023-05-01 - Kanidm 1.1.0-beta13
### 2023-05-01 - Kanidm 1.1.0-beta13
This is the thirteenth pre-release of the Kanidm Identity Management project. Pre-releases are to
help get feedback and ideas from the community on how we can continue to make this project better.
@ -214,7 +265,7 @@ At this point we believe we are on the final stretch to making something we cons
ready". After this we will start to ship release candidates as our focus will now be changing to
finish our production components and the stability of the API's for longer term support.
### 1.1.0-beta13 Release Highlights
#### 1.1.0-beta13 Release Highlights
- Replication foundations
- Full implementation of replication refresh
@ -255,7 +306,7 @@ finish our production components and the stability of the API's for longer term
- Improve create-reset-token user experience
- Improve self-healing for some reference issues
## 2023-05-01 - Kanidm 1.1.0-alpha12
### 2023-05-01 - Kanidm 1.1.0-alpha12
This is the twelfth alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
@ -266,7 +317,7 @@ done so yet is we haven't decided if we want to commit to the current API layout
There are still things we want to change there. Otherwise the server is stable and reliable for
production usage.
### Release Highlights
#### 1.1.0-alpha12 Release Highlights
- Allow full server content replication in testing (yes we're finally working on replication!)
- Improve OAuth2 to allow scoped members to see RS they can access for UI flows
@ -286,7 +337,7 @@ production usage.
- Add exclusive process lock to daemon
- Allow dns/rdns in ldap search contexts
## 2023-02-01 - Kanidm 1.1.0-alpha11
### 2023-02-01 - Kanidm 1.1.0-alpha11
This is the eleventh alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
@ -296,7 +347,7 @@ The project is shaping up very nicely, and a beta will be coming soon! The main
done so yet is we haven't decided if we want to commit to the current API layout and freeze it yet.
There are still things we want to change there. Otherwise the server is stable and reliable.
### Release Highlights
#### 1.1.0-alpha11 Release Highlights
- Support /etc/skel home dir templates in kanidm-unixd
- Improve warning messages for openssl when a cryptographic routine is not supported
@ -317,7 +368,7 @@ There are still things we want to change there. Otherwise the server is stable a
- Improve the access control module to evaluate access in a clearer way
- Allow synced users to correctly modify their local sessions
## 2022-11-01 - Kanidm 1.1.0-alpha10
### 2022-11-01 - Kanidm 1.1.0-alpha10
This is the tenth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
@ -325,12 +376,12 @@ for a future supported release.
The project is shaping up very nicely, and a beta will be coming soon!
### Upgrade Note
#### 1.1.0-alpha10 Upgrade Note
This version will _require_ TLS on all servers, even if behind a load balancer or TLS terminating
proxy. You should be ready for this change when you upgrade to the latest version.
### Release Highlights
#### 1.1.0-alpha10 Release Highlights
- Management and tracking of authenticated sessions
- Make upgrade migrations more robust when upgrading over multiple versions
@ -352,7 +403,7 @@ proxy. You should be ready for this change when you upgrade to the latest versio
- Cleanup of expired authentication sessions
- Improved administration of password badlists
## 2022-08-02 - Kanidm 1.1.0-alpha9
### 2022-08-02 - Kanidm 1.1.0-alpha9
This is the ninth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
@ -360,7 +411,7 @@ for a future supported release.
The project is shaping up very nicely, and a beta will be coming soon!
### Release Highlights
#### 1.1.0-alpha9 Release Highlights
- Inclusion of a Python3 API library
- Improve orca usability
@ -376,13 +427,13 @@ The project is shaping up very nicely, and a beta will be coming soon!
- CTAP2+ support in Webauthn via CLI
- Radius supports EAP TLS identities in addition to EAP PEAP
## 2022-05-01 - Kanidm 1.1.0-alpha8
### 2022-05-01 - Kanidm 1.1.0-alpha8
This is the eighth alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
#### 1.1.0-alpha8 Release Highlights
- Foundations for cryptographic trusted device authentication
- Foundations for new user onboarding and credential reset
@ -398,13 +449,13 @@ better for a future supported release.
- Highlight that the WebUI is in alpha to prevent confusion
- Remove sync only client paths
## 2022-01-01 - Kanidm 1.1.0-alpha7
### 2022-01-01 - Kanidm 1.1.0-alpha7
This is the seventh alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
#### 1.1.0-alpha7 Release Highlights
- OAuth2 scope to group mappings
- Webauthn subdomain support
@ -415,7 +466,7 @@ better for a future supported release.
- Addition of email address attributes
- Web UI improvements for OAuth2
## 2021-10-01 - Kanidm 1.1.0-alpha6
### 2021-10-01 - Kanidm 1.1.0-alpha6
This is the sixth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
@ -424,7 +475,7 @@ for a future supported release.
It's also a special release as Kanidm has just turned 3 years old! Thank you all for helping to
bring the project this far! 🎉 🦀
### Release Highlights
#### 1.1.0-alpha6 Release Highlights
- Support backup codes as MFA in case of lost TOTP/Webauthn
- Dynamic menus on CLI for usernames when multiple sessions exist
@ -444,13 +495,13 @@ bring the project this far! 🎉 🦀
- Improvements to performance with high cache sizes
- Session tokens persist over a session restart
## 2021-07-07 - Kanidm 1.1.0-alpha5
### 2021-07-07 - Kanidm 1.1.0-alpha5
This is the fifth alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
### Release Highlights
#### 1.1.0-alpha5 Release Highlights
- Fix a major defect in how backup/restore worked
- Improve query performance by caching partial queries
@ -465,13 +516,13 @@ for a future supported release.
- Statistical analysis of indexes to improve query optimisation
- Handle broken TOTP authenticator apps
## 2021-04-01 - Kanidm 1.1.0-alpha4
### 2021-04-01 - Kanidm 1.1.0-alpha4
This is the fourth alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
#### 1.1.0-alpha4 Release Highlights
- Performance Improvements
- TOTP CLI enrollment
@ -485,13 +536,13 @@ better for a future supported release.
- Badlist checked at login to determine account compromise
- Minor Fixes for attribute display
## 2021-01-01 - Kanidm 1.1.0-alpha3
### 2021-01-01 - Kanidm 1.1.0-alpha3
This is the third alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
for a future supported release.
### Release Highlights
#### 1.1.0-alpha3 Release Highlights
- Account "valid from" and "expiry" times.
- Rate limiting and softlocking of account credentials to prevent bruteforcing.
@ -499,13 +550,13 @@ for a future supported release.
- Rewrite of json authentication protocol components.
- Unixd will cache "non-existent" items to improve nss/pam latency.
## 2020-10-01 - Kanidm 1.1.0-alpha2
### 2020-10-01 - Kanidm 1.1.0-alpha2
This is the second alpha series release of the Kanidm Identity Management project. Alpha releases
are to help get feedback and ideas from the community on how we can continue to make this project
better for a future supported release.
### Release Highlights
#### 1.1.0-alpha2 Release Highlights
- SIMD key lookups in container builds for datastructures
- Server and Client hardening warnings for running users and file permissions
@ -517,7 +568,7 @@ better for a future supported release.
- Reduction in memory footprint during searches
- Change authentication from cookies to auth-bearer tokens
## 2020-07-01 - Kanidm 1.1.0-alpha1
### 2020-07-01 - Kanidm 1.1.0-alpha1
This is the first alpha series release of the Kanidm Identity Management project. Alpha releases are
to help get feedback and ideas from the community on how we can continue to make this project better
@ -536,7 +587,7 @@ people. I would especially like to thank:
- Samuel Cabrero (scabrero)
- Jim McDonough
### Release Highlights
#### 1.1.0-alpha1 Release Highlights
- A working identity management server, including database
- RADIUS authentication and docker images
@ -552,3 +603,5 @@ people. I would especially like to thank:
[gitter community channel]: https://gitter.im/kanidm/community
[code of conduct]: https://github.com/kanidm/kanidm/blob/master/CODE_OF_CONDUCT.md
[kanidm book]: https://kanidm.github.io/kanidm/stable/
[our upgrade documentation]: https://github.com/kanidm/kanidm/blob/master/book/src/server_updates.md#general-update-notes
[support documentation]: https://github.com/kanidm/kanidm/blob/master/book/src/support.md

View file

@ -3,57 +3,58 @@
## Pre-Reqs
```bash
cargo install cargo-audit
cargo install cargo-outdated
cargo install cargo-udeps
cargo install cargo-machete
cargo install --force \
cargo-audit \
cargo-outdated \
cargo-udeps \
cargo-machete
```
## Pre Release Check List
### Start a release
- [ ] git checkout -b YYYYMMDD-pre-release
- [ ] `git checkout -b "$(date +%Y%m%d)-pre-release"`
### Cargo Tasks
- [ ] Update MSRV if applicable
- [ ] cargo update
- [ ] `cargo update`
- [ ] `RUSTC_BOOTSTRAP=1 cargo udeps`
- [ ] `cargo machete`
- [ ] cargo outdated -R
- [ ] cargo audit
- [ ] cargo test
- [ ] `cargo machete --with-metadata`
- [ ] `cargo outdated -R`
- [ ] `cargo audit`
- [ ] `cargo test`
- [ ] set up a local instance and run orca (TBD)
- [ ] store a copy of an example db (TBD)
### Code Changes
- [ ] upgrade crypto policy values if required
- [ ] upgrade crypto policy values if required (see `libs/crypto/src/lib.rs` -> `CryptoPolicy`)
- [ ] check for breaking db entry changes.
### Administration
- [ ] Update `RELEASE_NOTES.md`
- [ ] Update `README.md`
- [ ] cargo test
- [ ] git commit -a -m "Release Notes"
- [ ] git push origin YYYYMMDD-pre-release
- [ ] `cargo test`
- [ ] `git commit -a -m 'chore: Release Notes'`
- [ ] `git push origin "$(date +%Y%m%d)-pre-release"`
- [ ] Merge PR
### Git Management
- [ ] git checkout master
- [ ] git pull
- [ ] `git checkout master`
- [ ] `git pull`
- [ ] git checkout -b 1.x.0 (Note no v to prevent ref conflict)
- [ ] update version to set pre tag in ./Cargo.toml
- [ ] git commit -m "Release 1.x.0-pre"
- [ ] git tag v1.x.0-pre
- [ ] `git commit -m "Release $(cargo metadata --format-version 1 | jq '.packages[] | select(.name=="kanidm_proto") | .version')-pre"`
- [ ] `git tag v$(cargo metadata --format-version 1 | jq '.packages[] | select(.name=="kanidm_proto") | .version')-pre`
- [ ] Final inspect of the branch
- [ ] git push origin 1.x.0 --tags
- [ ] `git push origin "$(cargo metadata --format-version 1 | jq '.packages[] | select(.name=="kanidm_proto") | .version')" --tags`
- [ ] github -> Ensure release branch is protected
@ -106,4 +107,3 @@ cargo install cargo-machete
### Distro
- [ ] vendor and release to build.opensuse.org

View file

@ -145,7 +145,8 @@ with a dn of `dn=token` and provide the api token in the password.
> [!NOTE]
>
> The `dn=token` keyword is guaranteed to not be used by any other entry, which is why it was chosen
> as the keyword to initiate api token binds.
> as the keyword to initiate api token binds. Additionally, it is not required; leaving the field empty
> will fall back to the service account if a "password" is provided.
```bash
ldapwhoami -H ldaps://URL -x -D "dn=token" -w "TOKEN"
@ -234,6 +235,7 @@ ldapwhoami ... -x -D '22a65b6c-80c8-4e1a-9b76-3f3afdff8400'
ldapwhoami ... -x -D 'spn=test1@idm.example.com,dc=idm,dc=example,dc=com'
ldapwhoami ... -x -D 'name=test1,dc=idm,dc=example,dc=com'
```
<sub>in fact, the key of the bind isn't used at all so `googoogaaga=test1` is entirely valid</sub> ;)
## Troubleshooting

View file

@ -70,11 +70,10 @@ anything special for Kanidm (or another provider).
**Note:** some apps automatically append `/.well-known/openid-configuration` to
the end of an OIDC Discovery URL, so you may need to omit that.
</dd>
<dt>
[RFC 8414 OAuth 2.0 Authorisation Server Metadata](https://datatracker.ietf.org/doc/html/rfc8414) URL
[RFC 8414 OAuth 2.0 Authorisation Server Metadata](https://datatracker.ietf.org/doc/html/rfc8414)
URL **(recommended)**
</dt>
@ -86,6 +85,21 @@ the end of an OIDC Discovery URL, so you may need to omit that.
<dt>
[WebFinger URL **(discouraged)**](#webfinger)
</dt>
<dd>
`https://idm.example.com/oauth2/openid/:client_id:/.well-known/webfinger`
See [the WebFinger section](#webfinger) for more details, as there are a number of
caveats for WebFinger clients.
</dd>
<dt>
User auth
</dt>
@ -190,7 +204,7 @@ Token signing public key
### Create the Kanidm Configuration
By default, members of the `system_admins` or `idm_hp_oauth2_manage_priv` groups are able to create
By default, members of the `idm_admins` or `idm_oauth2_admins` groups are able to create
or manage OAuth2 client integrations.
You can create a new client by specifying its client name, application display name and the landing
@ -441,3 +455,61 @@ kanidm system oauth2 reset-secrets
```
Each client has unique signing keys and access secrets, so this is limited to each service.
## WebFinger
[WebFinger](https://datatracker.ietf.org/doc/html/rfc7033) provides a mechanism
for discovering information about people or other entities. It can be used by an
identity provider to supply OpenID Connect discovery information.
Kanidm provides
[an Identity Provider Discovery for OIDC URL](https://datatracker.ietf.org/doc/html/rfc7033#section-3.1)
response to all incoming WebFinger requests, using a user's SPN as their account
ID. This does not match on email addresses as they are not guaranteed to be
unique.
However, WebFinger has a number of flaws which make it difficult to use with
Kanidm:
* WebFinger assumes that the identity provider will give the same `iss`
(Issuer) for every OAuth 2.0/OIDC client, and there is no standard way for a
WebFinger client to report its client ID.
Kanidm uses a *different* `iss` (Issuer) value for each client.
* WebFinger requires that this be served at the *root* of the domain of a user's
SPN (ie: information about the user with SPN `user@idm.example.com` is at
`https://idm.example.com/.well-known/webfinger`).
Kanidm *does not* provide a WebFinger endpoint at its root URL, because it has
no way to know *which* OAuth 2.0/OIDC client a WebFinger request is associated
with, so could report an incorrect `iss` (Issuer).
You will need a load balancer in front of Kanidm's HTTPS server to redirect
requests to the appropriate `/oauth2/openid/:client_id:/.well-known/webfinger`
URL (a minimal sketch of such a redirect follows at the end of this section).
If the client does not follow redirects, you may need to rewrite the
request in the load balancer instead.
If you have *multiple* WebFinger clients, the load balancer will need to map
some other property of the request (such as a source IP address or
`User-Agent` header) to a client ID, and redirect to the appropriate
WebFinger URL for that client.
* Kanidm responds to *all* WebFinger queries with
[an Identity Provider Discovery for OIDC URL](https://datatracker.ietf.org/doc/html/rfc7033#section-3.1),
**regardless** of what
[`rel` parameter](https://datatracker.ietf.org/doc/html/rfc7033#section-4.4.4.1)
was specified.
This is to work around
[a broken client](https://tailscale.com/kb/1240/sso-custom-oidc) which doesn't
send a `rel` parameter, but expects an Identity Provider Discovery issuer URL
in response.
If you want to use WebFinger in any *other* context on Kanidm's hostname,
you'll need a load balancer in front of Kanidm which matches on some property
of the request.
Because of the flaws of the WebFinger specification and the deployment
difficulties they introduce, we recommend that applications use OpenID Connect
Discovery or OAuth 2.0 Authorisation Server Metadata for client configuration
instead of WebFinger.
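Following up on the load-balancer note above, here is a minimal, hypothetical sketch in Rust (using axum; not part of Kanidm or its documentation) of the kind of rewrite a front end could perform when there is a single known WebFinger client. The client ID `webapp`, the bind address, and the assumption that all other paths are proxied through to Kanidm are illustrative only.

```rust
use axum::{extract::RawQuery, response::Redirect, routing::get, Router};

// Assumption: a single WebFinger client; this client ID is illustrative.
const CLIENT_ID: &str = "webapp";

// Redirect the root WebFinger endpoint to the per-client endpoint,
// preserving the original query string (`resource`, `rel`, ...).
async fn webfinger_redirect(RawQuery(query): RawQuery) -> Redirect {
    let target = match query {
        Some(q) => format!("/oauth2/openid/{}/.well-known/webfinger?{}", CLIENT_ID, q),
        None => format!("/oauth2/openid/{}/.well-known/webfinger", CLIENT_ID),
    };
    Redirect::temporary(&target)
}

#[tokio::main]
async fn main() {
    // Only the WebFinger path is handled here; every other path is assumed
    // to be proxied through to Kanidm by the same front end. Plain HTTP and
    // the bind address are for brevity only.
    let app = Router::new().route("/.well-known/webfinger", get(webfinger_redirect));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```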

View file

@ -556,6 +556,65 @@ php occ config:app:set --value=0 user_oidc allow_multiple_user_backends
You can login directly by appending `?direct=1` to your login page. You can re-enable other backends
by setting the value to `1`
## OAuth2 Proxy
OAuth2 Proxy is a reverse proxy that provides authentication with OpenID Connect identity providers.
It is typically used to secure web applications without native OpenID Connect support.
Prepare the environment.
Due to a [lack of public client support](https://github.com/oauth2-proxy/oauth2-proxy/issues/1714), we have to set it up as a basic client.
```bash
kanidm system oauth2 create webapp 'webapp.example.com' 'https://webapp.example.com'
kanidm system oauth2 add-redirect-url webapp 'https://webapp.example.com/oauth2/callback'
kanidm system oauth2 update-scope-map webapp email openid
kanidm system oauth2 get webapp
kanidm system oauth2 show-basic-secret webapp
<SECRET>
```
Create a user group.
```bash
kanidm group create 'webapp_admin'
```
Setup the claim-map to add `webapp_group` to the userinfo claim.
```bash
kanidm system oauth2 update-claim-map-join 'webapp' 'webapp_group' array
kanidm system oauth2 update-claim-map 'webapp' 'webapp_group' 'webapp_admin' 'webapp_admin'
```
Authorize users for the application.
Additionally, OAuth2 Proxy requires all users to have an email address; see this issue for more details:
- <https://github.com/oauth2-proxy/oauth2-proxy/issues/2667>
```bash
kanidm person update '<user>' --legalname 'Personal Name' --mail 'user@example.com'
kanidm group add-members 'webapp_admin' '<user>'
```
And add the following to your OAuth2 Proxy config.
```toml
provider = "oidc"
scope = "openid email"
# change to match your kanidm domain and client id
oidc_issuer_url = "https://idm.example.com/oauth2/openid/webapp"
# client ID from `kanidm system oauth2 create`
client_id = "webapp"
# redirect URL from `kanidm system oauth2 add-redirect-url webapp`
redirect_url = "https://webapp.example.com/oauth2/callback"
# claim name from `kanidm system oauth2 update-claim-map-join`
oidc_groups_claim = "webapp_group"
# user group from `kanidm group create`
allowed_groups = ["webapp_admin"]
# secret from `kanidm system oauth2 show-basic-secret webapp`
client_secret = "<SECRET>"
```
## Outline
> These instructions were tested with self-hosted Outline 0.80.2.

View file

@ -22,6 +22,7 @@ This is a list of supported features and standards within Kanidm.
- [RFC4519 LDAP Schema](https://www.rfc-editor.org/rfc/rfc4519)
- FreeIPA User Schema
- [RFC7644 SCIM Bulk Data Import](https://www.rfc-editor.org/rfc/rfc7644)
- NOTE: SCIM is only supported for synchronisation from another IDP at this time.
# Database

View file

@ -1,5 +1,5 @@
# Kanidm minimal Service Configuration - /etc/kanidm/config
# For a full example and documentation, see /usr/share/kanidm/kanidm
# For a full example and documentation, see /usr/share/kanidm/config
# or `example/kanidm` in the source repository.
# Replace this with your kanidmd URI and uncomment the line

View file

@ -30,8 +30,8 @@ use compact_jwt::Jwk;
use kanidm_proto::constants::uri::V1_AUTH_VALID;
use kanidm_proto::constants::{
ATTR_DOMAIN_DISPLAY_NAME, ATTR_DOMAIN_LDAP_BASEDN, ATTR_DOMAIN_SSID, ATTR_ENTRY_MANAGED_BY,
ATTR_KEY_ACTION_REVOKE, ATTR_LDAP_ALLOW_UNIX_PW_BIND, ATTR_NAME, CLIENT_TOKEN_CACHE, KOPID,
KSESSIONID, KVERSION,
ATTR_KEY_ACTION_REVOKE, ATTR_LDAP_ALLOW_UNIX_PW_BIND, ATTR_LDAP_MAX_QUERYABLE_ATTRS, ATTR_NAME,
CLIENT_TOKEN_CACHE, KOPID, KSESSIONID, KVERSION,
};
use kanidm_proto::internal::*;
use kanidm_proto::v1::*;
@ -94,7 +94,7 @@ pub struct KanidmClientConfigInstance {
pub verify_hostnames: Option<bool>,
/// Whether to verify the Certificate Authority details of the server's TLS certificate, defaults to `true`.
///
/// Environment variable is slightly inverted - `KANIDM_SKIP_HOSTNAME_VERIFICATION`.
/// Environment variable is slightly inverted - `KANIDM_ACCEPT_INVALID_CERTS`.
pub verify_ca: Option<bool>,
/// Optionally you can specify the path of a CA certificate to use for verifying the server, if you're not using one trusted by your system certificate store.
///
@ -453,6 +453,13 @@ impl KanidmClientBuilder {
}
}
pub fn set_token_cache_path(self, token_cache_path: Option<String>) -> Self {
KanidmClientBuilder {
token_cache_path,
..self
}
}
#[allow(clippy::result_unit_err)]
pub fn add_root_certificate_filepath(self, ca_path: &str) -> Result<Self, ClientError> {
//Okay we have a ca to add. Let's read it in and setup.
@ -2075,6 +2082,18 @@ impl KanidmClient {
.await
}
/// Sets the maximum number of LDAP attributes that can be queried in a single operation
pub async fn idm_domain_set_ldap_max_queryable_attrs(
&self,
max_queryable_attrs: usize,
) -> Result<(), ClientError> {
self.perform_put_request(
&format!("/v1/domain/_attr/{}", ATTR_LDAP_MAX_QUERYABLE_ATTRS),
vec![max_queryable_attrs.to_string()],
)
.await
}
pub async fn idm_set_ldap_allow_unix_password_bind(
&self,
enable: bool,

View file

@ -35,3 +35,7 @@ x509-cert = { workspace = true, features = ["pem"] }
[dev-dependencies]
sketching = { workspace = true }
[package.metadata.cargo-machete]
ignored = ["openssl-sys"]

View file

@ -662,9 +662,13 @@ impl TryFrom<&str> for Password {
});
}
// Test 389ds formats
// Test 389ds/openldap formats. Shout out to openldap, which sometimes makes these
// lowercase.
if let Some(ds_ssha1) = value.strip_prefix("{SHA}") {
if let Some(ds_ssha1) = value
.strip_prefix("{SHA}")
.or_else(|| value.strip_prefix("{sha}"))
{
let h = general_purpose::STANDARD.decode(ds_ssha1).map_err(|_| ())?;
if h.len() != DS_SHA1_HASH_LEN {
return Err(());
@ -674,7 +678,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha1) = value.strip_prefix("{SSHA}") {
if let Some(ds_ssha1) = value
.strip_prefix("{SSHA}")
.or_else(|| value.strip_prefix("{ssha}"))
{
let sh = general_purpose::STANDARD.decode(ds_ssha1).map_err(|_| ())?;
let (h, s) = sh.split_at(DS_SHA1_HASH_LEN);
if s.len() != DS_SHA_SALT_LEN {
@ -685,7 +692,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha256) = value.strip_prefix("{SHA256}") {
if let Some(ds_ssha256) = value
.strip_prefix("{SHA256}")
.or_else(|| value.strip_prefix("{sha256}"))
{
let h = general_purpose::STANDARD
.decode(ds_ssha256)
.map_err(|_| ())?;
@ -697,7 +707,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha256) = value.strip_prefix("{SSHA256}") {
if let Some(ds_ssha256) = value
.strip_prefix("{SSHA256}")
.or_else(|| value.strip_prefix("{ssha256}"))
{
let sh = general_purpose::STANDARD
.decode(ds_ssha256)
.map_err(|_| ())?;
@ -710,7 +723,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha512) = value.strip_prefix("{SHA512}") {
if let Some(ds_ssha512) = value
.strip_prefix("{SHA512}")
.or_else(|| value.strip_prefix("{sha512}"))
{
let h = general_purpose::STANDARD
.decode(ds_ssha512)
.map_err(|_| ())?;
@ -722,7 +738,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha512) = value.strip_prefix("{SSHA512}") {
if let Some(ds_ssha512) = value
.strip_prefix("{SSHA512}")
.or_else(|| value.strip_prefix("{ssha512}"))
{
let sh = general_purpose::STANDARD
.decode(ds_ssha512)
.map_err(|_| ())?;
@ -1441,8 +1460,12 @@ mod tests {
#[test]
fn test_password_from_ds_sha1() {
let im_pw = "{SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{sha}W6ph5Mm5Pz8GgiULbPgzG37mj9g=";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1451,8 +1474,12 @@ mod tests {
#[test]
fn test_password_from_ds_ssha1() {
let im_pw = "{SSHA}EyzbBiP4u4zxOrLpKTORI/RX3HC6TCTJtnVOCQ==";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{ssha}EyzbBiP4u4zxOrLpKTORI/RX3HC6TCTJtnVOCQ==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1461,8 +1488,12 @@ mod tests {
#[test]
fn test_password_from_ds_sha256() {
let im_pw = "{SHA256}XohImNooBHFR0OVvjcYpJ3NgPQ1qq73WKhHvch0VQtg=";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{sha256}XohImNooBHFR0OVvjcYpJ3NgPQ1qq73WKhHvch0VQtg=";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1471,8 +1502,12 @@ mod tests {
#[test]
fn test_password_from_ds_ssha256() {
let im_pw = "{SSHA256}luYWfFJOZgxySTsJXHgIaCYww4yMpu6yest69j/wO5n5OycuHFV/GQ==";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{ssha256}luYWfFJOZgxySTsJXHgIaCYww4yMpu6yest69j/wO5n5OycuHFV/GQ==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1481,8 +1516,12 @@ mod tests {
#[test]
fn test_password_from_ds_sha512() {
let im_pw = "{SHA512}sQnzu7wkTrgkQZF+0G1hi5AI3Qmzvv0bXgc5THBqi7mAsdd4Xll27ASbRt9fEyavWi6m0QP9B8lThf+rDKy8hg==";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{sha512}sQnzu7wkTrgkQZF+0G1hi5AI3Qmzvv0bXgc5THBqi7mAsdd4Xll27ASbRt9fEyavWi6m0QP9B8lThf+rDKy8hg==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1491,8 +1530,12 @@ mod tests {
#[test]
fn test_password_from_ds_ssha512() {
let im_pw = "{SSHA512}JwrSUHkI7FTAfHRVR6KoFlSN0E3dmaQWARjZ+/UsShYlENOqDtFVU77HJLLrY2MuSp0jve52+pwtdVl2QUAHukQ0XUf5LDtM";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{ssha512}JwrSUHkI7FTAfHRVR6KoFlSN0E3dmaQWARjZ+/UsShYlENOqDtFVU77HJLLrY2MuSp0jve52+pwtdVl2QUAHukQ0XUf5LDtM";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));

View file

@ -16,8 +16,5 @@ doctest = false
[dependencies]
[target.'cfg(target_family = "windows")'.dependencies]
whoami = { workspace = true }
[target.'cfg(not(target_family = "windows"))'.dependencies]
kanidm_utils_users = { workspace = true }

View file

@ -28,3 +28,7 @@ toml = { workspace = true }
[build-dependencies]
base64 = { workspace = true }
gix = { workspace = true, default-features = false }
[package.metadata.cargo-machete]
ignored = ["gix"]

View file

@ -3,5 +3,6 @@
server_admin_bind_path = "/data/kanidmd.sock"
server_ui_pkg_path = "/hpkg"
server_config_path = "/data/server.toml"
client_config_path = "/data/config"
resolver_config_path = "/data/unixd"
resolver_unix_shell_path = "/bin/false"

View file

@ -3,5 +3,6 @@ cpu_flags = "native"
server_admin_bind_path = "/tmp/kanidmd.sock"
server_ui_pkg_path = "../core/static"
server_config_path = "../../examples/insecure_server.toml"
client_config_path = "/etc/kanidm/config"
resolver_config_path = "/tmp/unixd"
resolver_unix_shell_path = "/bin/bash"

View file

@ -3,5 +3,6 @@
server_admin_bind_path = "/var/run/kanidmd/sock"
server_ui_pkg_path = "/usr/local/share/kanidm/ui/hpkg"
server_config_path = "/usr/local/etc/kanidm/server.toml"
client_config_path = "/usr/local/etc/kanidm/config"
resolver_config_path = "/usr/local/etc/kanidm/unixd"
resolver_unix_shell_path = "/bin/sh"

View file

@ -3,5 +3,6 @@
server_admin_bind_path = "/var/run/kanidmd/sock"
server_ui_pkg_path = "/usr/share/kanidm/ui/hpkg"
server_config_path = "/etc/kanidm/server.toml"
client_config_path = "/etc/kanidm/config"
resolver_config_path = "/etc/kanidm/unixd"
resolver_unix_shell_path = "/bin/bash"

View file

@ -59,6 +59,7 @@ struct ProfileConfig {
server_admin_bind_path: String,
server_config_path: String,
server_ui_pkg_path: String,
client_config_path: String,
resolver_config_path: String,
resolver_unix_shell_path: String,
}
@ -139,6 +140,10 @@ pub fn apply_profile() {
"cargo:rustc-env=KANIDM_SERVER_CONFIG_PATH={}",
profile_cfg.server_config_path
);
println!(
"cargo:rustc-env=KANIDM_CLIENT_CONFIG_PATH={}",
profile_cfg.client_config_path
);
println!(
"cargo:rustc-env=KANIDM_RESOLVER_CONFIG_PATH={}",
profile_cfg.resolver_config_path

View file

@ -17,7 +17,6 @@ test = false
doctest = false
[dependencies]
gethostname = "0.5.0"
num_enum = { workspace = true }
opentelemetry = { workspace = true, features = ["metrics"] }
opentelemetry-otlp = { workspace = true, default-features = false, features = [

View file

@ -1,13 +1,13 @@
[package]
name = "kanidm_utils_users"
description = "Kanidm utility crate"
version.workspace = true
authors.workspace = true
rust-version.workspace = true
edition.workspace = true
license.workspace = true
homepage.workspace = true
repository.workspace = true
version = { workspace = true }
authors = { workspace = true }
rust-version = { workspace = true }
edition = { workspace = true }
license = { workspace = true }
homepage = { workspace = true }
repository = { workspace = true }
[lib]
test = true

View file

@ -1,624 +0,0 @@
CARGO_CRATES= addr2line-0.24.2 \
adler2-2.0.0 \
ahash-0.8.11 \
aho-corasick-1.1.3 \
allocator-api2-0.2.21 \
android-tzdata-0.1.1 \
android_system_properties-0.1.5 \
anstream-0.6.18 \
anstyle-1.0.10 \
anstyle-parse-0.2.6 \
anstyle-query-1.1.2 \
anstyle-wincon-3.0.6 \
anyhow-1.0.95 \
arc-swap-1.7.1 \
argon2-0.5.3 \
askama-0.12.1 \
askama_axum-0.4.0 \
askama_derive-0.12.5 \
askama_escape-0.10.3 \
askama_parser-0.2.1 \
asn1-rs-0.6.2 \
asn1-rs-derive-0.5.1 \
asn1-rs-impl-0.2.0 \
assert_cmd-2.0.16 \
async-compression-0.4.18 \
async-stream-0.3.6 \
async-stream-impl-0.3.6 \
async-trait-0.1.83 \
atomic-waker-1.1.2 \
authenticator-0.4.1 \
autocfg-1.4.0 \
axum-0.6.20 \
axum-0.7.9 \
axum-core-0.3.4 \
axum-core-0.4.5 \
axum-extra-0.9.6 \
axum-htmx-0.5.0 \
axum-macros-0.4.2 \
axum-server-0.7.1 \
backtrace-0.3.74 \
base32-0.5.1 \
base64-0.13.1 \
base64-0.21.7 \
base64-0.22.1 \
base64ct-1.6.0 \
base64urlsafedata-0.5.1 \
basic-toml-0.1.9 \
bindgen-0.66.1 \
bindgen-0.70.1 \
bit-set-0.5.3 \
bit-set-0.8.0 \
bit-vec-0.6.3 \
bit-vec-0.8.0 \
bitfield-0.13.2 \
bitflags-1.3.2 \
bitflags-2.6.0 \
blake2-0.10.6 \
block-buffer-0.10.4 \
borrow-or-share-0.2.2 \
bstr-1.11.1 \
bumpalo-3.16.0 \
bytecount-0.6.8 \
bytemuck-1.21.0 \
byteorder-1.5.0 \
bytes-1.9.0 \
cc-1.2.5 \
cexpr-0.6.0 \
cfg-if-1.0.0 \
cfg_aliases-0.2.1 \
checked_int_cast-1.0.0 \
chrono-0.4.39 \
clang-sys-1.8.1 \
clap-4.5.23 \
clap_builder-4.5.23 \
clap_complete-4.5.40 \
clap_derive-4.5.18 \
clap_lex-0.7.4 \
clru-0.6.2 \
color_quant-1.1.0 \
colorchoice-1.0.3 \
compact_jwt-0.4.3 \
concread-0.5.3 \
console-0.15.10 \
const-oid-0.9.6 \
cookie-0.16.2 \
cookie-0.18.1 \
cookie_store-0.21.1 \
core-foundation-0.9.4 \
core-foundation-0.10.0 \
core-foundation-sys-0.8.7 \
cpufeatures-0.2.16 \
crc32fast-1.4.2 \
cron-0.12.1 \
crossbeam-0.8.4 \
crossbeam-channel-0.5.14 \
crossbeam-deque-0.8.6 \
crossbeam-epoch-0.9.18 \
crossbeam-queue-0.3.12 \
crossbeam-utils-0.8.21 \
crypto-common-0.1.6 \
csv-1.3.1 \
csv-core-0.1.11 \
darling-0.14.4 \
darling-0.20.10 \
darling_core-0.14.4 \
darling_core-0.20.10 \
darling_macro-0.14.4 \
darling_macro-0.20.10 \
data-encoding-2.6.0 \
der-0.7.9 \
der-parser-9.0.0 \
der_derive-0.7.3 \
deranged-0.3.11 \
derive_builder-0.12.0 \
derive_builder_core-0.12.0 \
derive_builder_macro-0.12.0 \
devd-rs-0.3.6 \
dhat-0.3.3 \
dialoguer-0.10.4 \
difflib-0.4.0 \
digest-0.10.7 \
dirs-4.0.0 \
dirs-sys-0.3.7 \
displaydoc-0.2.5 \
doc-comment-0.3.3 \
document-features-0.2.10 \
dunce-1.0.5 \
dyn-clone-1.0.17 \
either-1.13.0 \
email_address-0.2.9 \
encode_unicode-1.0.0 \
encoding_rs-0.8.35 \
enum-iterator-2.1.0 \
enum-iterator-derive-1.4.0 \
enumflags2-0.7.10 \
enumflags2_derive-0.7.10 \
equivalent-1.0.1 \
errno-0.3.10 \
escargot-0.5.13 \
fallible-iterator-0.2.0 \
fallible-streaming-iterator-0.1.9 \
fancy-regex-0.11.0 \
fancy-regex-0.14.0 \
fantoccini-0.21.3 \
faster-hex-0.9.0 \
fastrand-2.3.0 \
fernet-0.2.2 \
file-id-0.1.0 \
filetime-0.2.25 \
fixedbitset-0.4.2 \
flagset-0.4.6 \
flate2-1.0.35 \
fluent-uri-0.3.2 \
fnv-1.0.7 \
foldhash-0.1.4 \
foreign-types-0.3.2 \
foreign-types-shared-0.1.1 \
form_urlencoded-1.2.1 \
fraction-0.15.3 \
fs4-0.8.4 \
fsevent-sys-4.1.0 \
futures-0.3.31 \
futures-channel-0.3.31 \
futures-core-0.3.31 \
futures-executor-0.3.31 \
futures-io-0.3.31 \
futures-macro-0.3.31 \
futures-sink-0.3.31 \
futures-task-0.3.31 \
futures-util-0.3.31 \
generic-array-0.14.7 \
gethostname-0.5.0 \
getrandom-0.2.15 \
gif-0.13.1 \
gimli-0.31.1 \
gix-0.64.0 \
gix-actor-0.31.5 \
gix-chunk-0.4.10 \
gix-commitgraph-0.24.3 \
gix-config-0.38.0 \
gix-config-value-0.14.10 \
gix-date-0.8.7 \
gix-diff-0.44.1 \
gix-discover-0.33.0 \
gix-features-0.38.2 \
gix-fs-0.11.3 \
gix-glob-0.16.5 \
gix-hash-0.14.2 \
gix-hashtable-0.5.2 \
gix-lock-14.0.0 \
gix-macros-0.1.5 \
gix-object-0.42.3 \
gix-odb-0.61.1 \
gix-pack-0.51.1 \
gix-path-0.10.13 \
gix-quote-0.4.14 \
gix-ref-0.45.0 \
gix-refspec-0.23.1 \
gix-revision-0.27.2 \
gix-revwalk-0.13.2 \
gix-sec-0.10.10 \
gix-tempfile-14.0.2 \
gix-trace-0.1.11 \
gix-traverse-0.39.2 \
gix-url-0.27.5 \
gix-utils-0.1.13 \
gix-validate-0.8.5 \
glob-0.3.1 \
h2-0.3.26 \
h2-0.4.7 \
half-1.8.3 \
hashbrown-0.12.3 \
hashbrown-0.14.5 \
hashbrown-0.15.2 \
hashlink-0.8.4 \
heck-0.5.0 \
hex-0.4.3 \
home-0.5.11 \
hostname-validator-1.1.1 \
http-0.2.12 \
http-1.2.0 \
http-body-0.4.6 \
http-body-1.0.1 \
http-body-util-0.1.2 \
http-range-header-0.4.2 \
httparse-1.9.5 \
httpdate-1.0.3 \
humansize-2.1.3 \
hyper-0.14.32 \
hyper-1.5.2 \
hyper-rustls-0.24.2 \
hyper-rustls-0.27.5 \
hyper-timeout-0.4.1 \
hyper-tls-0.6.0 \
hyper-util-0.1.10 \
iana-time-zone-0.1.61 \
iana-time-zone-haiku-0.1.2 \
icu_collections-1.5.0 \
icu_locid-1.5.0 \
icu_locid_transform-1.5.0 \
icu_locid_transform_data-1.5.0 \
icu_normalizer-1.5.0 \
icu_normalizer_data-1.5.0 \
icu_properties-1.5.1 \
icu_properties_data-1.5.0 \
icu_provider-1.5.0 \
icu_provider_macros-1.5.0 \
ident_case-1.0.1 \
idlset-0.2.5 \
idna-1.0.3 \
idna_adapter-1.2.0 \
image-0.23.14 \
image-0.24.9 \
indexmap-1.9.3 \
indexmap-2.7.0 \
inotify-0.9.6 \
inotify-sys-0.1.5 \
ipnet-2.10.1 \
is_terminal_polyfill-1.70.1 \
itertools-0.10.5 \
itertools-0.13.0 \
itoa-1.0.14 \
jpeg-decoder-0.3.1 \
js-sys-0.3.76 \
jsonschema-0.28.0 \
kanidm-hsm-crypto-0.2.0 \
kqueue-1.0.8 \
kqueue-sys-1.0.4 \
lazy_static-1.5.0 \
lazycell-1.3.0 \
lber-0.4.2 \
ldap3_client-0.5.2 \
ldap3_proto-0.5.2 \
libc-0.2.169 \
libloading-0.8.6 \
libm-0.2.11 \
libmimalloc-sys-0.1.39 \
libnss-0.8.0 \
libredox-0.1.3 \
libsqlite3-sys-0.25.2 \
libudev-0.2.0 \
libudev-sys-0.1.4 \
linux-raw-sys-0.4.14 \
litemap-0.7.4 \
litrs-0.4.1 \
lock_api-0.4.12 \
lodepng-3.10.7 \
log-0.4.22 \
lru-0.12.5 \
malloced-1.3.1 \
matchers-0.1.0 \
matchit-0.7.3 \
mathru-0.13.0 \
memchr-2.7.4 \
memmap2-0.9.5 \
memoffset-0.8.0 \
mimalloc-0.1.43 \
mime-0.3.17 \
mime_guess-2.0.5 \
minimal-lexical-0.2.1 \
miniz_oxide-0.8.2 \
mintex-0.1.3 \
mio-0.8.11 \
mio-1.0.3 \
multer-3.1.0 \
native-tls-0.2.12 \
nix-0.29.0 \
nom-7.1.3 \
nonempty-0.8.1 \
notify-6.1.1 \
notify-debouncer-full-0.1.0 \
nu-ansi-term-0.46.0 \
num-0.4.3 \
num-bigint-0.4.6 \
num-cmp-0.1.0 \
num-complex-0.4.6 \
num-conv-0.1.0 \
num-derive-0.3.3 \
num-integer-0.1.46 \
num-iter-0.1.45 \
num-rational-0.3.2 \
num-rational-0.4.2 \
num-traits-0.2.19 \
num_enum-0.5.11 \
num_enum_derive-0.5.11 \
num_threads-0.1.7 \
oauth2-4.4.2 \
object-0.36.5 \
oid-0.2.1 \
oid-registry-0.7.1 \
once_cell-1.20.2 \
openssl-0.10.68 \
openssl-macros-0.1.1 \
openssl-probe-0.1.5 \
openssl-sys-0.9.104 \
opentelemetry-0.20.0 \
opentelemetry-http-0.9.0 \
opentelemetry-otlp-0.13.0 \
opentelemetry-proto-0.3.0 \
opentelemetry-semantic-conventions-0.12.0 \
opentelemetry_api-0.20.0 \
opentelemetry_sdk-0.20.0 \
ordered-float-3.9.2 \
outref-0.5.1 \
overload-0.1.1 \
parking_lot-0.12.3 \
parking_lot_core-0.9.10 \
password-hash-0.5.0 \
paste-1.0.15 \
peeking_take_while-0.1.2 \
peg-0.8.4 \
peg-macros-0.8.4 \
peg-runtime-0.8.3 \
pem-rfc7468-0.7.0 \
percent-encoding-2.3.1 \
petgraph-0.6.5 \
picky-asn1-0.8.0 \
picky-asn1-der-0.4.1 \
picky-asn1-x509-0.12.0 \
pin-project-1.1.7 \
pin-project-internal-1.1.7 \
pin-project-lite-0.2.15 \
pin-utils-0.1.0 \
pkg-config-0.3.31 \
powerfmt-0.2.0 \
ppv-lite86-0.2.20 \
prctl-1.0.0 \
predicates-3.1.3 \
predicates-core-1.0.9 \
predicates-tree-1.0.12 \
prettyplease-0.2.25 \
proc-macro-crate-1.3.1 \
proc-macro-error-1.0.4 \
proc-macro-error-attr-1.0.4 \
proc-macro2-1.0.92 \
prodash-28.0.0 \
prost-0.11.9 \
prost-derive-0.11.9 \
psl-types-2.0.11 \
publicsuffix-2.3.0 \
qrcode-0.12.0 \
quick-error-2.0.1 \
quinn-0.11.6 \
quinn-proto-0.11.9 \
quinn-udp-0.5.9 \
quote-1.0.38 \
rand-0.8.5 \
rand_chacha-0.3.1 \
rand_core-0.6.4 \
redox_syscall-0.5.8 \
redox_users-0.4.6 \
ref-cast-1.0.23 \
ref-cast-impl-1.0.23 \
reference-counted-singleton-0.1.5 \
referencing-0.28.0 \
regex-1.11.1 \
regex-automata-0.1.10 \
regex-automata-0.4.9 \
regex-syntax-0.6.29 \
regex-syntax-0.8.5 \
reqwest-0.11.27 \
reqwest-0.12.11 \
rgb-0.8.50 \
ring-0.17.8 \
rpassword-5.0.1 \
runloop-0.1.0 \
rusqlite-0.28.0 \
rust-embed-8.5.0 \
rust-embed-impl-8.5.0 \
rust-embed-utils-8.5.0 \
rustc-demangle-0.1.24 \
rustc-hash-1.1.0 \
rustc-hash-2.1.0 \
rusticata-macros-4.1.0 \
rustix-0.38.42 \
rustls-0.21.12 \
rustls-0.23.20 \
rustls-native-certs-0.8.1 \
rustls-pemfile-1.0.4 \
rustls-pemfile-2.2.0 \
rustls-pki-types-1.10.1 \
rustls-webpki-0.101.7 \
rustls-webpki-0.102.8 \
rustversion-1.0.18 \
ryu-1.0.18 \
same-file-1.0.6 \
schannel-0.1.27 \
scopeguard-1.2.0 \
sct-0.7.1 \
sd-notify-0.4.3 \
security-framework-2.11.1 \
security-framework-3.1.0 \
security-framework-sys-2.13.0 \
selinux-0.4.6 \
selinux-sys-0.6.13 \
semver-1.0.24 \
serde-1.0.217 \
serde_bytes-0.11.15 \
serde_cbor-0.11.2 \
serde_cbor_2-0.12.0-dev \
serde_derive-1.0.217 \
serde_json-1.0.134 \
serde_path_to_error-0.1.16 \
serde_urlencoded-0.7.1 \
serde_with-3.12.0 \
serde_with_macros-3.12.0 \
sha-crypt-0.5.0 \
sha1_smol-1.0.1 \
sha2-0.10.8 \
sharded-slab-0.1.7 \
shell-words-1.1.0 \
shellexpand-2.1.2 \
shlex-1.3.0 \
signal-hook-registry-1.4.2 \
slab-0.4.9 \
smallvec-1.13.2 \
smartstring-1.0.1 \
smolset-1.3.1 \
socket2-0.5.8 \
spin-0.9.8 \
spki-0.7.3 \
sptr-0.3.2 \
sshkey-attest-0.5.0 \
sshkeys-0.3.3 \
stable_deref_trait-1.2.0 \
static_assertions-1.1.0 \
strsim-0.10.0 \
strsim-0.11.1 \
subtle-2.6.1 \
svg-0.13.1 \
syn-1.0.109 \
syn-2.0.93 \
sync_wrapper-0.1.2 \
sync_wrapper-1.0.2 \
synstructure-0.13.1 \
system-configuration-0.5.1 \
system-configuration-sys-0.5.0 \
target-lexicon-0.12.16 \
tempfile-3.14.0 \
termtree-0.5.1 \
thiserror-1.0.69 \
thiserror-2.0.8 \
thiserror-impl-1.0.69 \
thiserror-impl-2.0.8 \
thousands-0.2.0 \
thread_local-1.1.8 \
time-0.3.37 \
time-core-0.1.2 \
time-macros-0.2.19 \
tinystr-0.7.6 \
tinyvec-1.8.1 \
tinyvec_macros-0.1.1 \
tls_codec-0.4.1 \
tls_codec_derive-0.4.1 \
tokio-1.42.0 \
tokio-io-timeout-1.2.0 \
tokio-macros-2.4.0 \
tokio-native-tls-0.3.1 \
tokio-openssl-0.6.5 \
tokio-rustls-0.24.1 \
tokio-rustls-0.26.1 \
tokio-stream-0.1.17 \
tokio-util-0.7.13 \
toml-0.5.11 \
toml_datetime-0.6.8 \
toml_edit-0.19.15 \
tonic-0.9.2 \
tower-0.4.13 \
tower-0.5.2 \
tower-http-0.6.2 \
tower-layer-0.3.3 \
tower-service-0.3.3 \
tracing-0.1.41 \
tracing-attributes-0.1.28 \
tracing-core-0.1.33 \
tracing-forest-0.1.6 \
tracing-log-0.1.4 \
tracing-log-0.2.0 \
tracing-opentelemetry-0.21.0 \
tracing-subscriber-0.3.19 \
try-lock-0.2.5 \
tss-esapi-8.0.0-alpha \
tss-esapi-sys-0.5.0 \
typenum-1.17.0 \
unicase-2.8.0 \
unicode-bom-2.0.3 \
unicode-ident-1.0.14 \
unicode-normalization-0.1.24 \
unicode-segmentation-1.12.0 \
unicode-width-0.2.0 \
untrusted-0.9.0 \
url-2.5.4 \
urlencoding-2.1.3 \
utf16_iter-1.0.5 \
utf8_iter-1.0.4 \
utf8parse-0.2.2 \
utoipa-4.2.3 \
utoipa-gen-4.3.1 \
utoipa-swagger-ui-6.0.0 \
uuid-1.11.0 \
uuid-simd-0.8.0 \
valuable-0.1.0 \
vcpkg-0.2.15 \
version_check-0.9.5 \
vsimd-0.8.0 \
wait-timeout-0.2.0 \
walkdir-2.5.0 \
want-0.3.1 \
wasi-0.11.0+wasi-snapshot-preview1 \
wasite-0.1.0 \
wasm-bindgen-0.2.99 \
wasm-bindgen-backend-0.2.99 \
wasm-bindgen-futures-0.4.49 \
wasm-bindgen-macro-0.2.99 \
wasm-bindgen-macro-support-0.2.99 \
wasm-bindgen-shared-0.2.99 \
web-sys-0.3.76 \
web-time-1.1.0 \
webauthn-attestation-ca-0.5.1 \
webauthn-authenticator-rs-0.5.1 \
webauthn-rs-0.5.1 \
webauthn-rs-core-0.5.1 \
webauthn-rs-proto-0.5.1 \
webdriver-0.50.0 \
webpki-roots-0.25.4 \
webpki-roots-0.26.7 \
weezl-0.1.8 \
which-4.4.2 \
whoami-1.5.2 \
winapi-0.3.9 \
winapi-i686-pc-windows-gnu-0.4.0 \
winapi-util-0.1.9 \
winapi-x86_64-pc-windows-gnu-0.4.0 \
windows-0.41.0 \
windows-core-0.52.0 \
windows-registry-0.2.0 \
windows-result-0.2.0 \
windows-strings-0.1.0 \
windows-sys-0.48.0 \
windows-sys-0.52.0 \
windows-sys-0.59.0 \
windows-targets-0.48.5 \
windows-targets-0.52.6 \
windows_aarch64_gnullvm-0.41.0 \
windows_aarch64_gnullvm-0.48.5 \
windows_aarch64_gnullvm-0.52.6 \
windows_aarch64_msvc-0.41.0 \
windows_aarch64_msvc-0.48.5 \
windows_aarch64_msvc-0.52.6 \
windows_i686_gnu-0.41.0 \
windows_i686_gnu-0.48.5 \
windows_i686_gnu-0.52.6 \
windows_i686_gnullvm-0.52.6 \
windows_i686_msvc-0.41.0 \
windows_i686_msvc-0.48.5 \
windows_i686_msvc-0.52.6 \
windows_x86_64_gnu-0.41.0 \
windows_x86_64_gnu-0.48.5 \
windows_x86_64_gnu-0.52.6 \
windows_x86_64_gnullvm-0.41.0 \
windows_x86_64_gnullvm-0.48.5 \
windows_x86_64_gnullvm-0.52.6 \
windows_x86_64_msvc-0.41.0 \
windows_x86_64_msvc-0.48.5 \
windows_x86_64_msvc-0.52.6 \
winnow-0.5.40 \
winnow-0.6.20 \
winreg-0.50.0 \
write16-1.0.0 \
writeable-0.5.5 \
x509-cert-0.2.5 \
x509-parser-0.16.0 \
yoke-0.7.5 \
yoke-derive-0.7.5 \
zerocopy-0.7.35 \
zerocopy-derive-0.7.35 \
zerofrom-0.1.5 \
zerofrom-derive-0.1.5 \
zeroize-1.8.1 \
zeroize_derive-1.4.2 \
zerovec-0.10.4 \
zerovec-derive-0.10.3 \
zip-0.6.6 \
zxcvbn-2.2.2

File diff suppressed because it is too large.

View file

@ -21,7 +21,7 @@ load_rc_config $name
pidfile="/var/run/kanidm-unixd-tasks.pid"
command=/usr/sbin/daemon
command_args="-u _kanidm_unixd -p /var/run/kanidm-unixd-tasks.pid -T kanidm_unixd_tasks /usr/local/libexec/${name}"
command_args="-u root -p /var/run/kanidm-unixd-tasks.pid -T kanidm_unixd_tasks /usr/local/libexec/${name}"
procname=/usr/local/libexec/${name}
run_rc_command "$1"

View file

@ -42,3 +42,6 @@ sshkeys = { workspace = true }
[dev-dependencies]
enum-iterator = { workspace = true }
serde_urlencoded = { workspace = true }
[build-dependencies]
kanidm_build_profiles = { workspace = true }

proto/build.rs (new file, 3 lines)
View file

@ -0,0 +1,3 @@
fn main() {
profiles::apply_profile();
}

View file

@ -94,6 +94,7 @@ pub enum Attribute {
LdapEmailAddress,
/// An LDAP Compatible sshkeys virtual attribute
LdapKeys,
LdapMaxQueryableAttrs,
LegalName,
LimitSearchMaxResults,
LimitSearchMaxFilterTest,
@ -322,6 +323,7 @@ impl Attribute {
Attribute::LdapAllowUnixPwBind => ATTR_LDAP_ALLOW_UNIX_PW_BIND,
Attribute::LdapEmailAddress => ATTR_LDAP_EMAIL_ADDRESS,
Attribute::LdapKeys => ATTR_LDAP_KEYS,
Attribute::LdapMaxQueryableAttrs => ATTR_LDAP_MAX_QUERYABLE_ATTRS,
Attribute::LdapSshPublicKey => ATTR_LDAP_SSHPUBLICKEY,
Attribute::LegalName => ATTR_LEGALNAME,
Attribute::LimitSearchMaxResults => ATTR_LIMIT_SEARCH_MAX_RESULTS,
@ -505,6 +507,7 @@ impl Attribute {
ATTR_LDAP_ALLOW_UNIX_PW_BIND => Attribute::LdapAllowUnixPwBind,
ATTR_LDAP_EMAIL_ADDRESS => Attribute::LdapEmailAddress,
ATTR_LDAP_KEYS => Attribute::LdapKeys,
ATTR_LDAP_MAX_QUERYABLE_ATTRS => Attribute::LdapMaxQueryableAttrs,
ATTR_SSH_PUBLICKEY => Attribute::SshPublicKey,
ATTR_LEGALNAME => Attribute::LegalName,
ATTR_LINKEDGROUP => Attribute::LinkedGroup,
@ -628,6 +631,71 @@ impl From<Attribute> for String {
}
}
/// Sub attributes are a component of SCIM, allowing tagged sub properties of a complex
/// attribute to be accessed.
#[derive(Serialize, Deserialize, Clone, Debug, Eq, PartialEq, PartialOrd, Ord, Hash)]
#[serde(rename_all = "lowercase", try_from = "&str", into = "AttrString")]
pub enum SubAttribute {
/// Denotes a primary value.
Primary,
#[cfg(not(test))]
Custom(AttrString),
}
impl From<SubAttribute> for AttrString {
fn from(val: SubAttribute) -> Self {
AttrString::from(val.as_str())
}
}
impl From<&str> for SubAttribute {
fn from(value: &str) -> Self {
Self::inner_from_str(value)
}
}
impl FromStr for SubAttribute {
type Err = Infallible;
fn from_str(value: &str) -> Result<Self, Self::Err> {
Ok(Self::inner_from_str(value))
}
}
impl SubAttribute {
pub fn as_str(&self) -> &str {
match self {
SubAttribute::Primary => SUB_ATTR_PRIMARY,
#[cfg(not(test))]
SubAttribute::Custom(s) => s,
}
}
// We allow this because the standard lib from_str is fallible, and we want an infallible version.
#[allow(clippy::should_implement_trait)]
fn inner_from_str(value: &str) -> Self {
// Could this be something like heapless to save allocations? Also gives a way
// to limit length of str?
match value.to_lowercase().as_str() {
SUB_ATTR_PRIMARY => SubAttribute::Primary,
#[cfg(not(test))]
_ => SubAttribute::Custom(AttrString::from(value)),
// Allowed only in tests
#[allow(clippy::unreachable)]
#[cfg(test)]
_ => {
unreachable!(
"Check that you've implemented the SubAttribute conversion for {:?}",
value
);
}
}
}
}
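
A quick illustration of the conversions above (a sketch, not from the patch): parsing is case-insensitive and infallible, and anything other than "primary" becomes a Custom sub attribute in non-test builds.
// "PRIMARY" is lowercased during parsing, so it maps to the Primary variant.
let primary: SubAttribute = "PRIMARY".parse().expect("conversion is infallible");
assert_eq!(primary, SubAttribute::Primary);
// Unknown values keep their original form inside Custom.
let other = SubAttribute::from("type");
assert_eq!(other.as_str(), "type");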
#[cfg(test)]
mod test {
use super::Attribute;

View file

@ -30,7 +30,7 @@ pub const VALID_IMAGE_UPLOAD_CONTENT_TYPES: [&str; 5] = [
pub const APPLICATION_JSON: &str = "application/json";
/// The "system" path for Kanidm client config
pub const DEFAULT_CLIENT_CONFIG_PATH: &str = "/etc/kanidm/config";
pub const DEFAULT_CLIENT_CONFIG_PATH: &str = env!("KANIDM_CLIENT_CONFIG_PATH");
/// The user-owned path for Kanidm client config
pub const DEFAULT_CLIENT_CONFIG_PATH_HOME: &str = "~/.config/kanidm";
@ -39,6 +39,8 @@ pub const DEFAULT_SERVER_ADDRESS: &str = "127.0.0.1:8443";
pub const DEFAULT_SERVER_LOCALHOST: &str = "localhost:8443";
/// The default LDAP bind address for the Kanidm client
pub const DEFAULT_LDAP_LOCALHOST: &str = "localhost:636";
/// The default number of attributes that can be queried in LDAP
pub const DEFAULT_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES: usize = 16;
/// Default replication configuration
pub const DEFAULT_REPLICATION_ADDRESS: &str = "127.0.0.1:8444";
pub const DEFAULT_REPLICATION_ORIGIN: &str = "repl://localhost:8444";
@ -102,6 +104,7 @@ pub const ATTR_DYNGROUP_FILTER: &str = "dyngroup_filter";
pub const ATTR_DYNGROUP: &str = "dyngroup";
pub const ATTR_DYNMEMBER: &str = "dynmember";
pub const ATTR_LDAP_EMAIL_ADDRESS: &str = "emailaddress";
pub const ATTR_LDAP_MAX_QUERYABLE_ATTRS: &str = "ldap_max_queryable_attrs";
pub const ATTR_EMAIL_ALTERNATIVE: &str = "emailalternative";
pub const ATTR_EMAIL_PRIMARY: &str = "emailprimary";
pub const ATTR_EMAIL: &str = "email";
@ -217,6 +220,8 @@ pub const ATTR_VERSION: &str = "version";
pub const ATTR_WEBAUTHN_ATTESTATION_CA_LIST: &str = "webauthn_attestation_ca_list";
pub const ATTR_ALLOW_PRIMARY_CRED_FALLBACK: &str = "allow_primary_cred_fallback";
pub const SUB_ATTR_PRIMARY: &str = "primary";
pub const OAUTH2_SCOPE_EMAIL: &str = ATTR_EMAIL;
pub const OAUTH2_SCOPE_GROUPS: &str = "groups";
pub const OAUTH2_SCOPE_SSH_PUBLICKEYS: &str = "ssh_publickeys";

View file

@ -130,6 +130,7 @@ pub enum CURegState {
None,
TotpCheck(TotpSecret),
TotpTryAgain,
TotpNameTryAgain(String),
TotpInvalidSha1,
BackupCodes(Vec<String>),
Passkey(CreationChallengeResponse),

View file

@ -443,6 +443,21 @@ fn require_request_uri_parameter_supported_default() -> bool {
false
}
#[derive(Serialize, Deserialize, Debug)]
pub struct OidcWebfingerRel {
pub rel: String,
pub href: String,
}
/// The response to a WebFinger request. Only a subset of the body is defined here.
/// <https://datatracker.ietf.org/doc/html/rfc7033#section-4.4>
#[skip_serializing_none]
#[derive(Serialize, Deserialize, Debug)]
pub struct OidcWebfingerResponse {
pub subject: String,
pub links: Vec<OidcWebfingerRel>,
}
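
For illustration only (a sketch with made-up values, not part of the change): a response for a user of a given client might be built like this; the handler added in the HTTP layer later in this diff serves the serialized form as application/jrd+json.
let response = OidcWebfingerResponse {
    subject: "acct:alice@idm.example.com".to_string(),
    links: vec![OidcWebfingerRel {
        // The standard OpenID Connect issuer relation.
        rel: "http://openid.net/specs/connect/1.0/issuer".to_string(),
        href: "https://idm.example.com/oauth2/openid/my_client".to_string(),
    }],
};
// serde_json::to_string(&response) then yields a JRD body of the shape
// {"subject":"...","links":[{"rel":"...","href":"..."}]}.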
/// The response to an OpenID connect discovery request
/// <https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata>
#[skip_serializing_none]

View file

@ -1,7 +1,7 @@
//! These are types that a client will send to the server.
use super::ScimEntryGetQuery;
use super::ScimOauth2ClaimMapJoinChar;
use crate::attribute::Attribute;
use crate::attribute::{Attribute, SubAttribute};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use serde_with::formats::PreferMany;
@ -134,3 +134,59 @@ impl TryFrom<ScimEntryPutKanidm> for ScimEntryPutGeneric {
})
}
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
pub struct AttrPath {
pub a: Attribute,
pub s: Option<SubAttribute>,
}
impl From<Attribute> for AttrPath {
fn from(a: Attribute) -> Self {
Self { a, s: None }
}
}
impl From<(Attribute, SubAttribute)> for AttrPath {
fn from((a, s): (Attribute, SubAttribute)) -> Self {
Self { a, s: Some(s) }
}
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
pub enum ScimFilter {
Or(Box<ScimFilter>, Box<ScimFilter>),
And(Box<ScimFilter>, Box<ScimFilter>),
Not(Box<ScimFilter>),
Present(AttrPath),
Equal(AttrPath, JsonValue),
NotEqual(AttrPath, JsonValue),
Contains(AttrPath, JsonValue),
StartsWith(AttrPath, JsonValue),
EndsWith(AttrPath, JsonValue),
Greater(AttrPath, JsonValue),
Less(AttrPath, JsonValue),
GreaterOrEqual(AttrPath, JsonValue),
LessOrEqual(AttrPath, JsonValue),
Complex(Attribute, Box<ScimComplexFilter>),
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
pub enum ScimComplexFilter {
Or(Box<ScimComplexFilter>, Box<ScimComplexFilter>),
And(Box<ScimComplexFilter>, Box<ScimComplexFilter>),
Not(Box<ScimComplexFilter>),
Present(SubAttribute),
Equal(SubAttribute, JsonValue),
NotEqual(SubAttribute, JsonValue),
Contains(SubAttribute, JsonValue),
StartsWith(SubAttribute, JsonValue),
EndsWith(SubAttribute, JsonValue),
Greater(SubAttribute, JsonValue),
Less(SubAttribute, JsonValue),
GreaterOrEqual(SubAttribute, JsonValue),
LessOrEqual(SubAttribute, JsonValue),
}
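
A small sketch (illustrative values only, not from the patch) of how these types compose: an AttrPath is built from an Attribute, optionally tagged with a SubAttribute, and filters nest through Box. The admin persons view later in this diff builds a single Equal filter on Attribute::Class in exactly this way.
use serde_json::json;
// class eq "person" and mail co "@example.com"
let filter = ScimFilter::And(
    Box::new(ScimFilter::Equal(
        AttrPath::from(Attribute::Class),
        json!("person"),
    )),
    Box::new(ScimFilter::Contains(
        AttrPath::from(Attribute::Mail),
        json!("@example.com"),
    )),
);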

View file

@ -4,7 +4,7 @@ use super::ScimSshPublicKey;
use crate::attribute::Attribute;
use crate::internal::UiHint;
use scim_proto::ScimEntryHeader;
use serde::Serialize;
use serde::{Deserialize, Serialize};
use serde_with::{base64, formats, hex::Hex, serde_as, skip_serializing_none};
use std::collections::{BTreeMap, BTreeSet};
use time::format_description::well_known::Rfc3339;
@ -28,7 +28,7 @@ pub struct ScimEntryKanidm {
pub attrs: BTreeMap<Attribute, ScimValueKanidm>,
}
#[derive(Serialize, Debug, Clone, ToSchema)]
#[derive(Serialize, Deserialize, Debug, Clone, ToSchema)]
pub enum ScimAttributeEffectiveAccess {
/// All attributes on the entry have this permission granted
Grant,
@ -49,7 +49,7 @@ impl ScimAttributeEffectiveAccess {
}
}
#[derive(Serialize, Debug, Clone, ToSchema)]
#[derive(Serialize, Deserialize, Debug, Clone, ToSchema)]
#[serde(rename_all = "camelCase")]
pub struct ScimEffectiveAccess {
/// The identity that inherits the effective permission
@ -209,7 +209,7 @@ pub struct ScimOAuth2ClaimMap {
pub values: BTreeSet<String>,
}
#[derive(Serialize, Debug, Clone, PartialEq, Eq, ToSchema)]
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, ToSchema)]
#[serde(rename_all = "camelCase")]
pub struct ScimReference {
pub uuid: Uuid,
@ -257,6 +257,98 @@ pub enum ScimValueKanidm {
UiHints(Vec<UiHint>),
}
#[serde_as]
#[derive(Serialize, Deserialize, Debug, Clone, ToSchema)]
pub struct ScimPerson {
pub uuid: Uuid,
pub name: String,
pub displayname: String,
pub spn: String,
pub description: Option<String>,
pub mails: Vec<ScimMail>,
pub managed_by: Option<ScimReference>,
pub groups: Vec<ScimReference>,
}
impl TryFrom<ScimEntryKanidm> for ScimPerson {
type Error = ();
fn try_from(scim_entry: ScimEntryKanidm) -> Result<Self, Self::Error> {
let uuid = scim_entry.header.id;
let name = scim_entry
.attrs
.get(&Attribute::Name)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
})
.ok_or(())?;
let displayname = scim_entry
.attrs
.get(&Attribute::DisplayName)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
})
.ok_or(())?;
let spn = scim_entry
.attrs
.get(&Attribute::Spn)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
})
.ok_or(())?;
let description = scim_entry
.attrs
.get(&Attribute::Description)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
});
let mails = scim_entry
.attrs
.get(&Attribute::Mail)
.and_then(|v| match v {
ScimValueKanidm::Mail(m) => Some(m.clone()),
_ => None,
})
.unwrap_or_default();
let groups = scim_entry
.attrs
.get(&Attribute::DirectMemberOf)
.and_then(|v| match v {
ScimValueKanidm::EntryReferences(v) => Some(v.clone()),
_ => None,
})
.unwrap_or_default();
let managed_by = scim_entry
.attrs
.get(&Attribute::EntryManagedBy)
.and_then(|v| match v {
ScimValueKanidm::EntryReference(v) => Some(v.clone()),
_ => None,
});
Ok(ScimPerson {
uuid,
name,
displayname,
spn,
description,
mails,
managed_by,
groups,
})
}
}
impl From<bool> for ScimValueKanidm {
fn from(b: bool) -> Self {
Self::Bool(b)

View file

@ -19,7 +19,7 @@ pub use self::auth::*;
pub use self::unix::*;
/// The type of Account in use.
#[derive(Clone, Copy, Debug, ToSchema)]
#[derive(Serialize, Deserialize, Clone, Copy, Debug, ToSchema)]
pub enum AccountType {
Person,
ServiceAccount,

pykanidm/poetry.lock (generated, 522 lines)

File diff suppressed because it is too large.

View file

@ -29,7 +29,7 @@ Authlib = "^1.2.0"
[tool.poetry.group.dev.dependencies]
ruff = ">=0.5.1,<0.9.5"
ruff = ">=0.5.1,<0.9.6"
pytest = "^8.3.4"
mypy = "^1.14.1"
types-requests = "^2.32.0.20241016"
@ -40,7 +40,7 @@ pylint-pydantic = "^0.3.5"
coverage = "^7.6.10"
mkdocs = "^1.6.1"
mkdocs-material = "^9.6.1"
mkdocstrings = "^0.27.0"
mkdocstrings = ">=0.27,<0.29"
mkdocstrings-python = "^1.13.0"
pook = "^2.1.3"

View file

@ -45,13 +45,14 @@ ldap3_proto = { workspace = true }
libc = { workspace = true }
openssl = { workspace = true }
opentelemetry = { workspace = true, features = ["logs"] }
# opentelemetry_api = { workspace = true, features = ["logs"] }
qrcode = { workspace = true, features = ["svg"] }
regex = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
serde_with = { workspace = true }
sketching = { workspace = true }
sshkeys = { workspace = true }
sshkey-attest = { workspace = true }
time = { workspace = true, features = ["serde", "std", "local-offset"] }
tokio = { workspace = true, features = ["net", "sync", "io-util", "macros"] }
tokio-openssl = { workspace = true }
@ -92,3 +93,10 @@ kanidmd_lib = { workspace = true, features = ["test"] }
[build-dependencies]
kanidm_build_profiles = { workspace = true }
[package.metadata.cargo-machete]
ignored = [
"opentelemetry", # feature gated
"kanidm_build_profiles",
]

View file

@ -9,6 +9,7 @@ use kanidm_proto::internal::{
IdentifyUserRequest, IdentifyUserResponse, ImageValue, OperationError, RadiusAuthToken,
SearchRequest, SearchResponse, UserAuthToken,
};
use kanidm_proto::oauth2::OidcWebfingerResponse;
use kanidm_proto::v1::{
AuthIssueSession, AuthRequest, Entry as ProtoEntry, UatStatus, UnixGroupToken, UnixUserToken,
WhoamiResponse,
@ -1509,6 +1510,21 @@ impl QueryServerReadV1 {
idms_prox_read.oauth2_openid_discovery(&client_id)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub async fn handle_oauth2_webfinger_discovery(
&self,
client_id: &str,
resource_id: &str,
eventid: Uuid,
) -> Result<OidcWebfingerResponse, OperationError> {
let mut idms_prox_read = self.idms.proxy_read().await?;
idms_prox_read.oauth2_openid_webfinger(client_id, resource_id)
}
#[instrument(
level = "info",
skip_all,

View file

@ -1,6 +1,6 @@
use super::{QueryServerReadV1, QueryServerWriteV1};
use kanidm_proto::scim_v1::{
server::ScimEntryKanidm, ScimEntryGetQuery, ScimSyncRequest, ScimSyncState,
client::ScimFilter, server::ScimEntryKanidm, ScimEntryGetQuery, ScimSyncRequest, ScimSyncState,
};
use kanidmd_lib::idm::scim::{
GenerateScimSyncTokenEvent, ScimSyncFinaliseEvent, ScimSyncTerminateEvent, ScimSyncUpdateEvent,
@ -229,4 +229,27 @@ impl QueryServerReadV1 {
.qs_read
.scim_entry_id_get_ext(target_uuid, class, query, ident)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub async fn scim_entry_search(
&self,
client_auth_info: ClientAuthInfo,
eventid: Uuid,
filter: ScimFilter,
query: ScimEntryGetQuery,
) -> Result<Vec<ScimEntryKanidm>, OperationError> {
let ct = duration_from_epoch_now();
let mut idms_prox_read = self.idms.proxy_read().await?;
let ident = idms_prox_read
.validate_client_auth_info_to_ident(client_auth_info, ct)
.inspect_err(|err| {
error!(?err, "Invalid identity");
})?;
idms_prox_read.qs_read.scim_search_ext(ident, filter, query)
}
}

View file

@ -116,7 +116,6 @@ pub struct ServerConfig {
///
/// If unset, the LDAP server will be disabled.
pub ldapbindaddress: Option<String>,
/// The role of this server, one of write_replica, write_replica_no_ui, read_only_replica, defaults to [ServerRole::WriteReplica]
#[serde(default)]
pub role: ServerRole,

View file

@ -513,7 +513,7 @@ pub async fn oauth2_token_post(
}
}
// // For future openid integration
// For future openid integration
pub async fn oauth2_openid_discovery_get(
State(state): State<ServerState>,
Path(client_id): Path<String>,
@ -538,6 +538,46 @@ pub async fn oauth2_openid_discovery_get(
}
}
#[derive(Deserialize)]
pub struct Oauth2OpenIdWebfingerQuery {
resource: String,
}
pub async fn oauth2_openid_webfinger_get(
State(state): State<ServerState>,
Path(client_id): Path<String>,
Query(query): Query<Oauth2OpenIdWebfingerQuery>,
Extension(kopid): Extension<KOpId>,
) -> impl IntoResponse {
let Oauth2OpenIdWebfingerQuery { resource } = query;
let cleaned_resource = resource.strip_prefix("acct:").unwrap_or(&resource);
let res = state
.qe_r_ref
.handle_oauth2_webfinger_discovery(&client_id, cleaned_resource, kopid.eventid)
.await;
match res {
Ok(mut dsc) => (
StatusCode::OK,
[
(ACCESS_CONTROL_ALLOW_ORIGIN, "*"),
(CONTENT_TYPE, "application/jrd+json"),
],
Json({
dsc.subject = resource;
dsc
}),
)
.into_response(),
Err(e) => {
error!(err = ?e, "Unable to access discovery info");
WebError::from(e).response_with_access_control_origin_header()
}
}
}
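
As a usage sketch (host and client id are placeholders): a request to the route registered below arrives as GET /oauth2/openid/my_client/.well-known/webfinger?resource=acct:alice@idm.example.com. The handler strips the optional acct: prefix before the lookup, but echoes the original resource back as the subject.
let resource = "acct:alice@idm.example.com".to_string();
let cleaned = resource.strip_prefix("acct:").unwrap_or(&resource);
assert_eq!(cleaned, "alice@idm.example.com");
// A resource without the prefix passes through unchanged.
assert_eq!("alice".strip_prefix("acct:").unwrap_or("alice"), "alice");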
pub async fn oauth2_rfc8414_metadata_get(
State(state): State<ServerState>,
Path(client_id): Path<String>,
@ -770,6 +810,10 @@ pub fn route_setup(state: ServerState) -> Router<ServerState> {
"/oauth2/openid/:client_id/.well-known/openid-configuration",
get(oauth2_openid_discovery_get).options(oauth2_preflight_options),
)
.route(
"/oauth2/openid/:client_id/.well-known/webfinger",
get(oauth2_openid_webfinger_get).options(oauth2_preflight_options),
)
// // ⚠️ ⚠️ WARNING ⚠️ ⚠️
// // IF YOU CHANGE THESE VALUES YOU MUST UPDATE OIDC DISCOVERY URLS
.route(

View file

@ -1396,6 +1396,7 @@ pub async fn credential_update_update(
return Err(WebError::InternalServerError(errmsg));
}
};
let session_token = match serde_json::from_value(cubody[1].clone()) {
Ok(val) => val,
Err(err) => {
@ -1406,6 +1407,7 @@ pub async fn credential_update_update(
};
debug!("session_token: {:?}", session_token);
debug!("scr: {:?}", scr);
state
.qe_r_ref
.handle_idmcredentialupdate(session_token, scr, kopid.eventid)

View file

@ -200,6 +200,7 @@ pub(crate) async fn oauth2_id_scopemap_post(
Json(scopes): Json<Vec<String>>,
) -> Result<Json<()>, WebError> {
let filter = oauth2_id(&rs_name);
state
.qe_w_ref
.handle_oauth2_scopemap_update(client_auth_info, group, scopes, filter, kopid.eventid)

View file

@ -0,0 +1,19 @@
use crate::https::ServerState;
use axum::routing::get;
use axum::Router;
use axum_htmx::HxRequestGuardLayer;
mod persons;
pub fn admin_router() -> Router<ServerState> {
let unguarded_router = Router::new()
.route("/persons", get(persons::view_persons_get))
.route(
"/person/:person_uuid/view",
get(persons::view_person_view_get),
);
let guarded_router = Router::new().layer(HxRequestGuardLayer::new("/ui"));
Router::new().merge(unguarded_router).merge(guarded_router)
}

View file

@ -0,0 +1,193 @@
use crate::https::extractors::{DomainInfo, VerifiedClientInformation};
use crate::https::middleware::KOpId;
use crate::https::views::errors::HtmxError;
use crate::https::views::navbar::NavbarCtx;
use crate::https::views::Urls;
use crate::https::ServerState;
use askama::Template;
use axum::extract::{Path, State};
use axum::http::Uri;
use axum::response::{ErrorResponse, IntoResponse, Response};
use axum::Extension;
use axum_htmx::{HxPushUrl, HxRequest};
use futures_util::TryFutureExt;
use kanidm_proto::attribute::Attribute;
use kanidm_proto::internal::OperationError;
use kanidm_proto::scim_v1::client::ScimFilter;
use kanidm_proto::scim_v1::server::{ScimEffectiveAccess, ScimEntryKanidm, ScimPerson};
use kanidm_proto::scim_v1::ScimEntryGetQuery;
use kanidmd_lib::constants::EntryClass;
use kanidmd_lib::idm::server::DomainInfoRead;
use kanidmd_lib::idm::ClientAuthInfo;
use std::str::FromStr;
use uuid::Uuid;
const PERSON_ATTRIBUTES: [Attribute; 9] = [
Attribute::Uuid,
Attribute::Description,
Attribute::Name,
Attribute::DisplayName,
Attribute::Spn,
Attribute::Mail,
Attribute::Class,
Attribute::EntryManagedBy,
Attribute::DirectMemberOf,
];
#[derive(Template)]
#[template(path = "admin/admin_panel_template.html")]
pub(crate) struct PersonsView {
navbar_ctx: NavbarCtx,
partial: PersonsPartialView,
}
#[derive(Template)]
#[template(path = "admin/admin_persons_partial.html")]
struct PersonsPartialView {
persons: Vec<(ScimPerson, ScimEffectiveAccess)>,
}
#[derive(Template)]
#[template(path = "admin/admin_panel_template.html")]
struct PersonView {
partial: PersonViewPartial,
navbar_ctx: NavbarCtx,
}
#[derive(Template)]
#[template(path = "admin/admin_person_view_partial.html")]
struct PersonViewPartial {
person: ScimPerson,
scim_effective_access: ScimEffectiveAccess,
}
pub(crate) async fn view_person_view_get(
State(state): State<ServerState>,
HxRequest(is_htmx): HxRequest,
Extension(kopid): Extension<KOpId>,
VerifiedClientInformation(client_auth_info): VerifiedClientInformation,
Path(uuid): Path<Uuid>,
DomainInfo(domain_info): DomainInfo,
) -> axum::response::Result<Response> {
let (person, scim_effective_access) =
get_person_info(uuid, state, &kopid, client_auth_info, domain_info.clone()).await?;
let person_partial = PersonViewPartial {
person,
scim_effective_access,
};
let path_string = format!("/ui/admin/person/{uuid}/view");
let uri = Uri::from_str(path_string.as_str())
.map_err(|_| HtmxError::new(&kopid, OperationError::Backend, domain_info.clone()))?;
let push_url = HxPushUrl(uri);
Ok(if is_htmx {
(push_url, person_partial).into_response()
} else {
(
push_url,
PersonView {
partial: person_partial,
navbar_ctx: NavbarCtx { domain_info },
},
)
.into_response()
})
}
pub(crate) async fn view_persons_get(
State(state): State<ServerState>,
HxRequest(is_htmx): HxRequest,
Extension(kopid): Extension<KOpId>,
DomainInfo(domain_info): DomainInfo,
VerifiedClientInformation(client_auth_info): VerifiedClientInformation,
) -> axum::response::Result<Response> {
let persons = get_persons_info(state, &kopid, client_auth_info, domain_info.clone()).await?;
let persons_partial = PersonsPartialView { persons };
let push_url = HxPushUrl(Uri::from_static("/ui/admin/persons"));
Ok(if is_htmx {
(push_url, persons_partial).into_response()
} else {
(
push_url,
PersonsView {
navbar_ctx: NavbarCtx { domain_info },
partial: persons_partial,
},
)
.into_response()
})
}
async fn get_person_info(
uuid: Uuid,
state: ServerState,
kopid: &KOpId,
client_auth_info: ClientAuthInfo,
domain_info: DomainInfoRead,
) -> Result<(ScimPerson, ScimEffectiveAccess), ErrorResponse> {
let scim_entry: ScimEntryKanidm = state
.qe_r_ref
.scim_entry_id_get(
client_auth_info.clone(),
kopid.eventid,
uuid.to_string(),
EntryClass::Person,
ScimEntryGetQuery {
attributes: Some(Vec::from(PERSON_ATTRIBUTES)),
ext_access_check: true,
},
)
.map_err(|op_err| HtmxError::new(kopid, op_err, domain_info.clone()))
.await?;
if let Some(personinfo_info) = scimentry_into_personinfo(scim_entry) {
Ok(personinfo_info)
} else {
Err(HtmxError::new(kopid, OperationError::InvalidState, domain_info.clone()).into())
}
}
async fn get_persons_info(
state: ServerState,
kopid: &KOpId,
client_auth_info: ClientAuthInfo,
domain_info: DomainInfoRead,
) -> Result<Vec<(ScimPerson, ScimEffectiveAccess)>, ErrorResponse> {
let filter = ScimFilter::Equal(Attribute::Class.into(), EntryClass::Person.into());
let base: Vec<ScimEntryKanidm> = state
.qe_r_ref
.scim_entry_search(
client_auth_info.clone(),
kopid.eventid,
filter,
ScimEntryGetQuery {
attributes: Some(Vec::from(PERSON_ATTRIBUTES)),
ext_access_check: true,
},
)
.map_err(|op_err| HtmxError::new(kopid, op_err, domain_info.clone()))
.await?;
// TODO: inefficient to sort here
let mut persons: Vec<_> = base
.into_iter()
// TODO: Filtering away unsuccessful entries may not be desired.
.filter_map(scimentry_into_personinfo)
.collect();
persons.sort_by_key(|(sp, _)| sp.uuid);
persons.reverse();
Ok(persons)
}
fn scimentry_into_personinfo(
scim_entry: ScimEntryKanidm,
) -> Option<(ScimPerson, ScimEffectiveAccess)> {
let scim_effective_access = scim_entry.ext_access_check.clone()?; // TODO: This should be an error msg.
let person = ScimPerson::try_from(scim_entry).ok()?;
Some((person, scim_effective_access))
}

View file

@ -45,14 +45,14 @@ pub(crate) async fn view_apps_get(
.await
.map_err(|old| HtmxError::new(&kopid, old, domain_info.clone()))?;
let apps_partial = AppsPartialView { apps: app_links };
Ok({
(
HxPushUrl(Uri::from_static(Urls::Apps.as_ref())),
AppsView {
navbar_ctx: NavbarCtx { domain_info },
apps_partial: AppsPartialView { apps: app_links },
},
)
.into_response()
let apps_view = AppsView {
navbar_ctx: NavbarCtx { domain_info },
apps_partial,
};
(HxPushUrl(Uri::from_static(Urls::Apps.as_ref())), apps_view).into_response()
})
}

View file

@ -105,6 +105,7 @@ pub(crate) async fn view_enrol_get(
Ok(ProfileView {
navbar_ctx: NavbarCtx { domain_info },
profile_partial: EnrolDeviceView {
menu_active_item: ProfileMenuItems::EnrolDevice,
qr_code_svg,

View file

@ -1,6 +1,6 @@
use axum::http::StatusCode;
use axum::response::{IntoResponse, Redirect, Response};
use axum_htmx::{HxReswap, HxRetarget, SwapOption};
use axum_htmx::{HxEvent, HxResponseTrigger, HxReswap, HxRetarget, SwapOption};
use kanidmd_lib::idm::server::DomainInfoRead;
use utoipa::ToSchema;
use uuid::Uuid;
@ -8,7 +8,7 @@ use uuid::Uuid;
use kanidm_proto::internal::OperationError;
use crate::https::middleware::KOpId;
use crate::https::views::UnrecoverableErrorView;
use crate::https::views::{ErrorToastPartial, UnrecoverableErrorView};
// #[derive(Template)]
// #[template(path = "recoverable_error_partial.html")]
// struct ErrorPartialView {
@ -41,7 +41,23 @@ impl IntoResponse for HtmxError {
| OperationError::SessionExpired
| OperationError::InvalidSessionState => Redirect::to("/ui").into_response(),
OperationError::SystemProtectedObject | OperationError::AccessDenied => {
(StatusCode::FORBIDDEN, body).into_response()
let trigger = HxResponseTrigger::after_swap([HxEvent::new(
"permissionDenied".to_string(),
)]);
(
trigger,
HxRetarget("main".to_string()),
HxReswap(SwapOption::BeforeEnd),
(
StatusCode::FORBIDDEN,
ErrorToastPartial {
err_code: inner,
operation_id: kopid,
},
)
.into_response(),
)
.into_response()
}
OperationError::NoMatchingEntries => {
(StatusCode::NOT_FOUND, body).into_response()

View file

@ -8,6 +8,7 @@ use axum::{
use axum_htmx::HxRequestGuardLayer;
use crate::https::views::admin::admin_router;
use constants::Urls;
use kanidmd_lib::{
idm::server::DomainInfoRead,
@ -16,6 +17,7 @@ use kanidmd_lib::{
use crate::https::ServerState;
mod admin;
mod apps;
pub(crate) mod constants;
mod cookies;
@ -36,6 +38,13 @@ struct UnrecoverableErrorView {
domain_info: DomainInfoRead,
}
#[derive(Template)]
#[template(path = "admin/error_toast.html")]
struct ErrorToastPartial {
err_code: OperationError,
operation_id: Uuid,
}
pub fn view_router() -> Router<ServerState> {
let mut unguarded_router = Router::new()
.route(
@ -103,6 +112,10 @@ pub fn view_router() -> Router<ServerState> {
.route("/reset/change_password", post(reset::view_new_pwd))
.route("/reset/add_passkey", post(reset::view_new_passkey))
.route("/reset/set_unixcred", post(reset::view_set_unixcred))
.route(
"/reset/add_ssh_publickey",
post(reset::view_add_ssh_publickey),
)
.route("/api/delete_alt_creds", post(reset::remove_alt_creds))
.route("/api/delete_unixcred", post(reset::remove_unixcred))
.route("/api/add_totp", post(reset::add_totp))
@ -110,11 +123,19 @@ pub fn view_router() -> Router<ServerState> {
.route("/api/remove_passkey", post(reset::remove_passkey))
.route("/api/finish_passkey", post(reset::finish_passkey))
.route("/api/cancel_mfareg", post(reset::cancel_mfareg))
.route(
"/api/remove_ssh_publickey",
post(reset::remove_ssh_publickey),
)
.route("/api/cu_cancel", post(reset::cancel_cred_update))
.route("/api/cu_commit", post(reset::commit))
.layer(HxRequestGuardLayer::new("/ui"));
Router::new().merge(unguarded_router).merge(guarded_router)
let admin_router = admin_router();
Router::new()
.merge(unguarded_router)
.merge(guarded_router)
.nest("/admin", admin_router)
}
/// Serde deserialization decorator to map empty Strings to None,

View file

@ -48,6 +48,7 @@ pub(crate) async fn view_profile_get(
Ok(ProfileView {
navbar_ctx: NavbarCtx { domain_info },
profile_partial: ProfilePartialView {
menu_active_item: ProfileMenuItems::UserProfile,
can_rw,

View file

@ -14,11 +14,15 @@ use qrcode::render::svg;
use qrcode::QrCode;
use serde::{Deserialize, Serialize};
use serde_with::skip_serializing_none;
use std::collections::BTreeMap;
use std::fmt;
use std::fmt::{Display, Formatter};
use std::str::FromStr;
use uuid::Uuid;
pub use sshkey_attest::proto::PublicKey as SshPublicKey;
pub use sshkeys::KeyType;
use kanidm_proto::internal::{
CUCredState, CUExtPortal, CURegState, CURegWarning, CURequest, CUSessionToken, CUStatus,
CredentialDetail, OperationError, PasskeyDetail, PasswordFeedback, TotpAlgo, UserAuthToken,
@ -69,6 +73,12 @@ struct CredStatusView {
credentials_update_partial: CredResetPartialView,
}
struct SshKey {
key_type: KeyType,
key: String,
comment: Option<String>,
}
#[derive(Template)]
#[template(path = "credentials_update_partial.html")]
struct CredResetPartialView {
@ -83,6 +93,8 @@ struct CredResetPartialView {
primary: Option<CredentialDetail>,
unixcred_state: CUCredState,
unixcred: Option<CredentialDetail>,
sshkeys_state: CUCredState,
sshkeys: BTreeMap<String, SshKey>,
}
#[skip_serializing_none]
@ -104,6 +116,13 @@ struct SetUnixCredPartial {
check_res: PwdCheckResult,
}
#[derive(Template)]
#[template(path = "credential_update_add_ssh_publickey_partial.html")]
struct AddSshPublicKeyPartial {
title_error: Option<String>,
key_error: Option<String>,
}
#[derive(Serialize, Deserialize, Debug)]
enum PwdCheckResult {
Success,
@ -120,6 +139,17 @@ pub(crate) struct NewPassword {
new_password_check: String,
}
#[derive(Deserialize, Debug)]
pub(crate) struct NewPublicKey {
title: String,
key: String,
}
#[derive(Deserialize, Debug)]
pub(crate) struct PublicKeyRemoveData {
name: String,
}
#[derive(Deserialize, Debug)]
pub(crate) struct NewTotp {
name: String,
@ -180,6 +210,8 @@ pub(crate) struct TotpInit {
pub(crate) struct TotpCheck {
wrong_code: bool,
broken_app: bool,
bad_name: bool,
taken_name: Option<String>,
}
#[derive(Template)]
@ -341,6 +373,30 @@ pub(crate) async fn remove_unixcred(
Ok(get_cu_partial_response(cu_status))
}
pub(crate) async fn remove_ssh_publickey(
State(state): State<ServerState>,
Extension(kopid): Extension<KOpId>,
HxRequest(_hx_request): HxRequest,
VerifiedClientInformation(_client_auth_info): VerifiedClientInformation,
DomainInfo(domain_info): DomainInfo,
jar: CookieJar,
Form(publickey): Form<PublicKeyRemoveData>,
) -> axum::response::Result<Response> {
let cu_session_token: CUSessionToken = get_cu_session(&jar).await?;
let cu_status = state
.qe_r_ref
.handle_idmcredentialupdate(
cu_session_token,
CURequest::SshPublicKeyRemove(publickey.name),
kopid.eventid,
)
.map_err(|op_err| HtmxError::new(&kopid, op_err, domain_info))
.await?;
Ok(get_cu_partial_response(cu_status))
}
pub(crate) async fn remove_totp(
State(state): State<ServerState>,
Extension(kopid): Extension<KOpId>,
@ -545,6 +601,25 @@ pub(crate) async fn add_totp(
let cu_session_token = get_cu_session(&jar).await?;
let check_totpcode = u32::from_str(&new_totp_form.check_totpcode).unwrap_or_default();
let swapped_handler_trigger =
HxResponseTrigger::after_swap([HxEvent::new("addTotpSwapped".to_string())]);
// If the user has not provided a name, or provided only spaces, we exit early
if new_totp_form.name.trim().is_empty() {
return Ok((
swapped_handler_trigger,
AddTotpPartial {
totp_init: None,
totp_name: "".into(),
totp_value: new_totp_form.check_totpcode.clone(),
check: TotpCheck {
bad_name: true,
..Default::default()
},
},
)
.into_response());
}
let cu_status = if new_totp_form.ignore_broken_app {
// Cope with SHA1 apps because the user has explicitly chosen to do so; their TOTP code was already verified
@ -570,6 +645,10 @@ pub(crate) async fn add_totp(
wrong_code: true,
..Default::default()
},
CURegState::TotpNameTryAgain(val) => TotpCheck {
taken_name: Some(val.clone()),
..Default::default()
},
CURegState::TotpInvalidSha1 => TotpCheck {
broken_app: true,
..Default::default()
@ -592,9 +671,6 @@ pub(crate) async fn add_totp(
new_totp_form.check_totpcode.clone()
};
let swapped_handler_trigger =
HxResponseTrigger::after_swap([HxEvent::new("addTotpSwapped".to_string())]);
Ok((
swapped_handler_trigger,
AddTotpPartial {
@ -805,6 +881,95 @@ pub(crate) async fn view_set_unixcred(
.into_response())
}
struct AddSshPublicKeyError {
key: Option<String>,
title: Option<String>,
}
pub(crate) async fn view_add_ssh_publickey(
State(state): State<ServerState>,
Extension(kopid): Extension<KOpId>,
HxRequest(_hx_request): HxRequest,
VerifiedClientInformation(_client_auth_info): VerifiedClientInformation,
DomainInfo(domain_info): DomainInfo,
jar: CookieJar,
opt_form: Option<Form<NewPublicKey>>,
) -> axum::response::Result<Response> {
let cu_session_token: CUSessionToken = get_cu_session(&jar).await?;
let new_key = match opt_form {
None => {
return Ok((AddSshPublicKeyPartial {
title_error: None,
key_error: None,
},)
.into_response());
}
Some(Form(new_key)) => new_key,
};
let (
AddSshPublicKeyError {
key: key_error,
title: title_error,
},
status,
) = {
let publickey = match SshPublicKey::from_string(&new_key.key) {
Err(_) => {
return Ok((AddSshPublicKeyPartial {
title_error: None,
key_error: Some("Key cannot be parsed".to_string()),
},)
.into_response());
}
Ok(publickey) => publickey,
};
let res = state
.qe_r_ref
.handle_idmcredentialupdate(
cu_session_token,
CURequest::SshPublicKey(new_key.title, publickey),
kopid.eventid,
)
.await;
match res {
Ok(cu_status) => return Ok(get_cu_partial_response(cu_status)),
Err(e @ (OperationError::InvalidLabel | OperationError::DuplicateLabel)) => (
AddSshPublicKeyError {
title: Some(e.to_string()),
key: None,
},
StatusCode::UNPROCESSABLE_ENTITY,
),
Err(e @ OperationError::DuplicateKey) => (
AddSshPublicKeyError {
key: Some(e.to_string()),
title: None,
},
StatusCode::UNPROCESSABLE_ENTITY,
),
Err(operr) => {
return Err(ErrorResponse::from(HtmxError::new(
&kopid,
operr,
domain_info,
)))
}
}
};
Ok((
status,
HxPushUrl(Uri::from_static("/ui/reset/add_ssh_publickey")),
AddSshPublicKeyPartial {
title_error,
key_error,
},
)
.into_response())
}
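
A sketch of the parse step above (the key material is a truncated placeholder, not a real key): SshPublicKey::from_string takes the usual single-line authorized_keys format, and a parse failure is what surfaces as the "Key cannot be parsed" feedback.
match SshPublicKey::from_string("ssh-ed25519 AAAAC3NzaC1lZDI1... alice@laptop") {
    // On success the parsed key is what gets submitted as
    // CURequest::SshPublicKey(title, key) in the handler above.
    Ok(_key) => {}
    // On failure the form re-renders with the "Key cannot be parsed" error.
    Err(_) => {}
}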
pub(crate) async fn view_reset_get(
State(state): State<ServerState>,
Extension(kopid): Extension<KOpId>,
@ -910,9 +1075,25 @@ fn get_cu_partial(cu_status: CUStatus) -> CredResetPartialView {
primary,
unixcred_state,
unixcred,
sshkeys_state,
sshkeys,
..
} = cu_status;
let sshkeyss: BTreeMap<String, SshKey> = sshkeys
.iter()
.map(|(k, v)| {
(
k.clone(),
SshKey {
key_type: v.clone().key_type,
key: v.fingerprint().hash,
comment: v.comment.clone(),
},
)
})
.collect();
CredResetPartialView {
ext_cred_portal,
can_commit,
@ -925,6 +1106,8 @@ fn get_cu_partial(cu_status: CUStatus) -> CredResetPartialView {
primary,
unixcred_state,
unixcred,
sshkeys_state,
sshkeys: sshkeyss,
}
}

View file

@ -88,15 +88,6 @@
</cc:Work>
</rdf:RDF>
</metadata>
<rect
style="fill:#ffffff;stroke-width:0.243721"
id="rect443"
width="135.46666"
height="135.46666"
x="0"
y="0"
inkscape:label="background"
sodipodi:insensitive="true" />
<g
id="layer1"
transform="matrix(0.91407203,0,0,0.91407203,-34.121105,-24.362694)"

Before / After image previews of the SVG (16 KiB each) could not be rendered.

View file

@ -20,6 +20,15 @@ body {
max-width: 680px;
}
/*
* Bootstrap 5.3 fix for input-group validation
* :has checks that a child can be selected with the selector
* + selects the next sibling.
*/
.was-validated .input-group:has(.form-control:invalid) + .invalid-feedback {
display: block !important;
}
/*
* Sidebar
*/
@ -190,3 +199,10 @@ footer {
width: var(--icon-size);
height: var(--icon-size);
}
.ssh-list-icon {
--icon-size: 32px;
width: var(--icon-size);
height: var(--icon-size);
transform: rotate(35deg);
}

View file

@ -0,0 +1,10 @@
(% extends "base_htmx_with_nav.html" %)
(% block title %)Admin Panel(% endblock %)
(% block head %)
(% endblock %)
(% block main %)
(( partial|safe ))
(% endblock %)

View file

@ -0,0 +1,19 @@
<main class="container-xxl pb-5">
<div class="d-flex flex-sm-row flex-column">
<div class="list-group side-menu">
<a href="/ui/admin/persons" hx-target="#main" class="list-group-item list-group-item-action (% block persons_item_extra_classes%)(%endblock%)">
<img src="/pkg/img/icon-accounts.svg" alt="Persons" width="20" height="20">
Persons</a>
<a href="/ui/admin/groups" hx-target="#main" class="list-group-item list-group-item-action (% block groups_item_extra_classes%)(%endblock%)">
<img src="/pkg/img/icon-groups.svg" alt="Groups" width="20" height="20">
Groups (placeholder)</a>
<a href="/ui/admin/oauth2" hx-target="#main" class="list-group-item list-group-item-action (% block oauth2_item_extra_classes%)(%endblock%)">
<img src="/pkg/img/icon-oauth2.svg" alt="Oauth2" width="20" height="20">
Oauth2 (placeholder)</a>
</div>
<div id="settings-window" class="flex-grow-1 ps-sm-4 pt-sm-0 pt-4">
(% block admin_page %)
(% endblock %)
</div>
</div>
</main>

View file

@ -0,0 +1,29 @@
(% macro string_attr(dispname, name, value, editable, attribute) %)
(% if scim_effective_access.search.check(attribute|as_ref) %)
<div class="row mt-3">
<label for="person(( name ))" class="col-12 col-md-3 col-lg-2 col-form-label fw-bold py-0">(( dispname ))</label>
<div class="col-12 col-md-8 col-lg-6">
<input readonly class="form-control-plaintext py-0" id="person(( name ))" name="(( name ))" value="(( value ))">
</div>
</div>
(% endif %)
(% endmacro %)
<form hx-validate="true" hx-ext="bs-validation">
(% call string_attr("UUID", "uuid", person.uuid, false, Attribute::Uuid) %)
(% call string_attr("SPN", "spn", person.spn, false, Attribute::Spn) %)
(% call string_attr("Name", "name", person.name, true, Attribute::Name) %)
(% call string_attr("Displayname", "displayname", person.displayname, true, Attribute::DisplayName) %)
(% if let Some(description) = person.description %)
(% call string_attr("Description", "description", description, true, Attribute::Description) %)
(% else %)
(% call string_attr("Description", "description", "none", true, Attribute::Description) %)
(% endif %)
(% if let Some(entry_managed_by) = person.managed_by %)
(% call string_attr("Managed By", "managed_by", entry_managed_by.value, true, Attribute::EntryManagedBy) %)
(% else %)
(% call string_attr("Managed By", "managed_by", "none", true, Attribute::EntryManagedBy) %)
(% endif %)
</form>

View file

@ -0,0 +1,57 @@
(% extends "admin/admin_partial_base.html" %)
(% block persons_item_extra_classes %)active(% endblock %)
(% block admin_page %)
<nav aria-label="breadcrumb">
<ol class="breadcrumb">
<li class="breadcrumb-item"><a href="/ui/admin/persons" hx-target="#main">Person Management</a></li>
<li class="breadcrumb-item active" aria-current="page">Viewing</li>
</ol>
</nav>
(% include "admin_person_details_partial.html" %)
<hr>
(% if scim_effective_access.search.check(Attribute::Mail|as_ref) %)
<label class="mt-3 fw-bold">Emails</label>
<form hx-validate="true" hx-ext="bs-validation">
(% if person.mails.len() == 0 %)
<p>There are no email addresses associated with this person.</p>
(% else %)
<ol class="list-group col-12 col-md-8 col-lg-6">
(% for mail in person.mails %)
<li id="personMail(( loop.index ))" class="list-group-item d-flex flex-row justify-content-between">
<div class="d-flex align-items-center">(( mail.value ))</div>
<div class="buttons float-end">
</div>
</li>
(% endfor %)
</ol>
(% endif %)
</form>
(% endif %)
(% if scim_effective_access.search.check(Attribute::DirectMemberOf|as_ref) %)
<label class="mt-3 fw-bold">DirectMemberOf</label>
<form hx-validate="true" hx-ext="bs-validation">
(% if person.groups.len() == 0 %)
<p>There are no groups this person is a direct member of.</p>
(% else %)
<ol class="list-group col-12 col-md-8 col-lg-6">
(% for group in person.groups %)
<li id="personGroup(( loop.index ))" class="list-group-item d-flex flex-row justify-content-between">
<div class="d-flex align-items-center">(( group.value ))</div>
<div class="buttons float-end">
</div>
</li>
(% endfor %)
</ol>
(% endif %)
</form>
(% endif %)
(% endblock %)

View file

@ -0,0 +1,23 @@
(% extends "admin/admin_partial_base.html" %)
(% block persons_item_extra_classes %)active(% endblock %)
(% block admin_page %)
<nav aria-label="breadcrumb">
<ol class="breadcrumb">
<li class="breadcrumb-item active" aria-current="page">Person Management</li>
</ol>
</nav>
<ul class="list-group">
(% for (person, _) in persons %)
<li class="list-group-item d-flex flex-row justify-content-between">
<div class="d-flex align-items-center">
<a href="/ui/admin/person/(( person.uuid ))/view" hx-target="#main">(( person.name ))</a> <span class="text-secondary d-none d-lg-inline-block mx-4">(( person.uuid ))</span>
</div>
<div class="buttons float-end">
</div>
</li>
(% endfor %)
</ul>
(% endblock %)

View file

@ -0,0 +1,12 @@
<div class="toast-container position-fixed bottom-0 end-0 p-3">
<div id="permissionDeniedToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
<div class="toast-header">
<strong class="me-auto">Error</strong>
<button type="button" class="btn-close" data-bs-dismiss="toast" aria-label="Close"></button>
</div>
<div class="toast-body">
(( err_code )).<br>
OpId: (( operation_id ))
</div>
</div>
</div>

View file

@ -0,0 +1,11 @@
<div class="toast-container position-fixed bottom-0 end-0 p-3">
<div id="savedToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
<div class="toast-header">
<strong class="me-auto">Success</strong>
<button type="button" class="btn-close" data-bs-dismiss="toast" aria-label="Close"></button>
</div>
<div class="toast-body">
Saved.
</div>
</div>
</div>

View file

@ -2,7 +2,9 @@
(% block body %)
(% include "navbar.html" %)
<div id="main">
(% block main %)(% endblock %)
</div>
(% include "signout_modal.html" %)
(% endblock %)

View file

@ -0,0 +1,31 @@
<hr>
<div class="d-flex flex-column row-gap-4">
<h4>Add new SSH Key</h4>
<form class="row-gap-3 d-flex flex-column needs-validation"
hx-target="#credentialUpdateDynamicSection"
hx-post="/ui/reset/add_ssh_publickey">
<div>
<label for="key-title" class="form-label">Title</label>
<input type="text" class="form-control(% if let Some(_) = title_error %) is-invalid(% endif %)" id="key-title" name="title" aria-describedby="title-validation-feedback">
<div id="title-validation-feedback" class="invalid-feedback">
(% if let Some(title_error) = title_error %)(( title_error ))(% endif %)
</div>
</div>
<div>
<label for="key-content" class="form-label">Key</label>
<textarea class="form-control(% if let Some(_) = key_error %) is-invalid(% endif %)" id="key-content" rows="5" name="key"
aria-describedby="key-validation-feedback"
placeholder="Begins with 'ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-ed25519', 'sk-ecdsa-sha2-nistp256@openssh.com', or 'sk-ssh-ed25519@openssh.com'"
></textarea>
<div id="key-validation-feedback" class="invalid-feedback">
(% if let Some(key_error) = key_error %)(( key_error ))(% endif %)
</div>
</div>
<div class="column-gap-2 d-flex justify-content-end mt-2" hx-target="#credentialUpdateDynamicSection">
<button type="button" class="btn btn-danger" hx-get=((Urls::CredReset)) hx-target="body">Cancel</button>
<button type="submit" class="btn btn-primary">Submit</button>
</div>
</form>
</div>

View file

@ -19,7 +19,8 @@
<label for="new-totp-name" class="form-label">Enter a name for your TOTP</label>
<input
aria-describedby="totp-name-validation-feedback"
class="form-control"
class="form-control (%- if let Some(_) = check.taken_name -%)is-invalid(%- endif -%)
(%- if check.bad_name -%)is-invalid(%- endif -%)"
name="name"
id="new-totp-name"
value="(( totp_name ))"
@ -51,6 +52,18 @@
<li>Incorrect TOTP code - Please try again</li>
</ul>
</div>
(% else if check.bad_name %)
<div id="neq-totp-validation-feedback">
<ul>
<li>The name you provided was empty or blank. Please provide a proper name.</li>
</ul>
</div>
(% else if let Some(name) = check.taken_name %)
<div id="neq-totp-validation-feedback">
<ul>
<li>The name "((name))" is either invalid or already taken, Please pick a different one</li>
</ul>
</div>
(% endif %)
</form>

View file

@ -85,12 +85,14 @@
(% when CUCredState::Modifiable %)
(% include "credentials_update_passkeys.html" %)
<!-- Here we are modifiable so we can render the button to add passkeys -->
<button type="button" class="btn btn-primary"
hx-post="/ui/reset/add_passkey"
hx-vals='{"class": "Any"}'
hx-target="#credentialUpdateDynamicSection">
Add Passkey
</button>
<div class="mt-3">
<button type="button" class="btn btn-primary"
hx-post="/ui/reset/add_passkey"
hx-vals='{"class": "Any"}'
hx-target="#credentialUpdateDynamicSection">
Add Passkey
</button>
</div>
(% when CUCredState::DeleteOnly %)
(% if passkeys.len() > 0 %)
@ -134,6 +136,55 @@
(% when CUCredState::PolicyDeny %)
(% endmatch %)
(% match sshkeys_state %)
(% when CUCredState::Modifiable %)
<hr class="my-4" />
<h4>SSH Keys</h4>
(% if sshkeys.len() > 0 %)
<p>This is a list of SSH keys associated with your account.</p>
<ul class="list-group">
(% for (keyname, sshkey) in sshkeys %)
<li class="list-group-item d-flex column-gap-3 py-3">
<div>
<img class="ssh-list-icon" src="/pkg/img/icons/key.svg" alt="" />
</div>
<div class="d-flex flex-column row-gap-2 flex-grow-1">
<div class="d-flex justify-content-between">
<div class="fw-bold column-gap-2">
(( keyname ))<span class="badge rounded-pill text-bg-dark ms-2">(( sshkey.key_type.short_name ))</span>
</div>
<button class="btn btn-tiny btn-danger"
hx-post="/ui/api/remove_ssh_publickey"
hx-vals='{"name": "(( keyname ))"}'
hx-target="#credentialUpdateDynamicSection">
Remove
</button>
</div>
<div><span class="font-monospace text-break">SHA256:(( sshkey.key ))</span></div>
(% if let Some(comment) = sshkey.comment %)
<div class="rounded bg-body-tertiary border border-light-subtle text-body-secondary px-2 py-1 align-self-stretch">Comment: (( comment ))</div>
(% endif %)
</div>
</li>
(% endfor %)
</ul>
(% else %)
<p>There are no SSH keys associated with your account.</p>
(% endif %)
<div class="mt-3">
<button class="btn btn-primary" type="button"
hx-post="/ui/reset/add_ssh_publickey"
hx-target="#credentialUpdateDynamicSection">
Add SSH Key
</button>
</div>
(% when CUCredState::DeleteOnly %)
(% when CUCredState::AccessDeny %)
(% when CUCredState::PolicyDeny %)
(% endmatch %)
<hr class="my-4" />
<div id="cred-update-commit-bar" class="toast bs-emphasis-color bs-secondary-bg">
<div class="toast-body">
@ -148,7 +199,7 @@
</svg>
<b>Careful</b> - Unsaved changes will be lost</div>
</span>
<div class="mt-2 pt-2 border-top">
<div class="mt-3 d-flex column-gap-2">
<button class="btn btn-danger"
hx-post="/ui/api/cu_cancel"
hx-boost="false"

View file

@ -1,4 +1,4 @@
<nav class="navbar navbar-expand-md kanidm_navbar mb-4">
<nav hx-boost="false" class="navbar navbar-expand-md kanidm_navbar mb-4">
<div class="container-lg">
<a class="navbar-brand d-flex align-items-center" href="/ui/apps">
(% if navbar_ctx.domain_info.image().is_some() %)
@ -21,7 +21,7 @@
</button>
<div class="collapse navbar-collapse" id="navbarCollapse">
<ul class="navbar-nav me-auto mb-2 mb-md-0">
<ul class="navbar-nav">
<li>
<a class="nav-link" href=((Urls::Apps))>
<span data-feather="file"></span>Applications</a>
@ -31,7 +31,7 @@
<span data-feather="file"></span>Profile</a>
</li>
</ul>
<ul class="navbar-nav me-auto mb-2 mb-md-0 ms-md-auto">
<ul class="navbar-nav ms-md-auto">
<li>
<a class="nav-link" href="#" data-bs-toggle="modal"
data-bs-target="#signoutModal">Sign out</a>
@ -39,4 +39,4 @@
</ul>
</div>
</div>
</nav>
</nav>

View file

@ -37,11 +37,11 @@ reqwest = { workspace = true }
tokio = { workspace = true, features = ["rt-multi-thread", "macros", "signal"] }
tokio-util = { workspace = true, features = ["codec"] }
tracing = { workspace = true }
serde_json.workspace = true
serde_json = { workspace = true }
[target.'cfg(target_os = "linux")'.dependencies]
sd-notify.workspace = true
prctl.workspace = true
sd-notify = { workspace = true }
prctl = { workspace = true }
[target.'cfg(target_family = "windows")'.dependencies]
whoami = { workspace = true }
@ -53,7 +53,10 @@ kanidm_utils_users = { workspace = true }
mimalloc = { workspace = true }
[build-dependencies]
serde = { workspace = true, features = ["derive"] }
clap = { workspace = true, features = ["derive"] }
clap_complete = { workspace = true }
kanidm_build_profiles = { workspace = true }
[package.metadata.cargo-machete]
ignored = ["clap_complete", "kanidm_build_profiles"]

View file

@ -20,7 +20,7 @@ static ALLOC: dhat::Alloc = dhat::Alloc;
use std::fs::{metadata, File};
// This works on both unix and windows.
use fs4::FileExt;
use fs4::fs_std::FileExt;
use kanidm_proto::messages::ConsoleOutputMode;
use sketching::otel::TracingPipelineGuard;
use std::io::Read;

View file

@ -79,7 +79,7 @@ webauthn-rs = { workspace = true, features = [
webauthn-rs-core = { workspace = true }
zxcvbn = { workspace = true }
serde_with = { workspace = true, features = ["time_0_3", "base64"] }
hex.workspace = true
hex = { workspace = true }
lodepng = { workspace = true }
image = { workspace = true, default-features = false, features = [
"gif",
@ -113,3 +113,9 @@ mimalloc = { workspace = true }
hashbrown = { workspace = true }
kanidm_build_profiles = { workspace = true }
regex = { workspace = true }
[package.metadata.cargo-machete]
ignored = [
"openssl-sys", # see note above
"whoami", # used in windows
]

View file

@ -571,6 +571,10 @@ pub trait BackendTransaction {
filter_error!("Requested a top level or isolated AndNot, returning empty");
(IdList::Indexed(IDLBitRange::new()), FilterPlan::Invalid)
}
FilterResolved::Invalid(_) => {
// Indexed since it is always false and we don't want to influence filter testing
(IdList::Indexed(IDLBitRange::new()), FilterPlan::Invalid)
}
})
}
@ -2376,6 +2380,46 @@ mod tests {
});
}
#[test]
fn test_be_search_with_invalid() {
run_test!(|be: &mut BackendWriteTransaction| {
trace!("Simple Search");
let mut e: Entry<EntryInit, EntryNew> = Entry::new();
e.add_ava(Attribute::UserId, Value::from("bagel"));
e.add_ava(
Attribute::Uuid,
Value::from("db237e8a-0079-4b8c-8a56-593b22aa44d1"),
);
let e = e.into_sealed_new();
let single_result = be.create(&CID_ZERO, vec![e]);
assert!(single_result.is_ok());
// Test Search with or condition including invalid attribute
let filt = filter_resolved!(f_or(vec![
f_eq(Attribute::UserId, PartialValue::new_utf8s("bagel")),
f_invalid(Attribute::UserId)
]));
let lims = Limits::unlimited();
let r = be.search(&lims, &filt);
assert!(r.expect("Search failed!").len() == 1);
// Test Search with or condition including invalid attribute
let filt = filter_resolved!(f_and(vec![
f_eq(Attribute::UserId, PartialValue::new_utf8s("bagel")),
f_invalid(Attribute::UserId)
]));
let lims = Limits::unlimited();
let r = be.search(&lims, &filt);
assert!(r.expect("Search failed!").len() == 0);
});
}
#[test]
fn test_be_simple_modify() {
run_test!(|be: &mut BackendWriteTransaction| {

View file

@ -1045,6 +1045,7 @@ lazy_static! {
Attribute::DomainDisplayName,
Attribute::DomainName,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::DomainSsid,
Attribute::DomainUuid,
// Grants read access to the key object.
@ -1058,6 +1059,7 @@ lazy_static! {
Attribute::DomainDisplayName,
Attribute::DomainSsid,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::LdapAllowUnixPwBind,
Attribute::KeyActionRevoke,
Attribute::KeyActionRotate,
@ -1065,6 +1067,7 @@ lazy_static! {
modify_present_attrs: vec![
Attribute::DomainDisplayName,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::DomainSsid,
Attribute::LdapAllowUnixPwBind,
Attribute::KeyActionRevoke,
@ -1100,6 +1103,7 @@ lazy_static! {
Attribute::DomainDisplayName,
Attribute::DomainName,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::DomainSsid,
Attribute::DomainUuid,
Attribute::KeyInternalData,
@ -1111,6 +1115,7 @@ lazy_static! {
Attribute::DomainDisplayName,
Attribute::DomainSsid,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::LdapAllowUnixPwBind,
Attribute::KeyActionRevoke,
Attribute::KeyActionRotate,
@ -1119,6 +1124,7 @@ lazy_static! {
modify_present_attrs: vec![
Attribute::DomainDisplayName,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::DomainSsid,
Attribute::LdapAllowUnixPwBind,
Attribute::KeyActionRevoke,
@ -1156,6 +1162,7 @@ lazy_static! {
Attribute::DomainDisplayName,
Attribute::DomainName,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::DomainSsid,
Attribute::DomainUuid,
Attribute::KeyInternalData,
@ -1167,6 +1174,7 @@ lazy_static! {
Attribute::DomainDisplayName,
Attribute::DomainSsid,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::DomainAllowEasterEggs,
Attribute::LdapAllowUnixPwBind,
Attribute::KeyActionRevoke,
@ -1176,6 +1184,7 @@ lazy_static! {
modify_present_attrs: vec![
Attribute::DomainDisplayName,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::DomainSsid,
Attribute::DomainAllowEasterEggs,
Attribute::LdapAllowUnixPwBind,

View file

@ -11,6 +11,7 @@ use crate::valueset::{ValueSet, ValueSetIutf8};
pub use kanidm_proto::attribute::Attribute;
use kanidm_proto::constants::*;
use kanidm_proto::internal::OperationError;
use kanidm_proto::scim_v1::JsonValue;
use kanidm_proto::v1::AccountType;
use uuid::Uuid;
@ -129,6 +130,12 @@ impl From<EntryClass> for &'static str {
}
}
impl From<EntryClass> for JsonValue {
fn from(value: EntryClass) -> Self {
Self::String(value.as_ref().to_string())
}
}
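A brief usage sketch of the new conversion, assuming the surrounding crate context; the resulting string is whatever `as_ref()` yields for the class:

// Sketch only: convert an EntryClass wherever a SCIM JsonValue is expected,
// for example when building a SCIM filter value for a class match.
let class_value: JsonValue = EntryClass::Person.into();
// class_value == JsonValue::String(EntryClass::Person.as_ref().to_string())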
impl AsRef<str> for EntryClass {
fn as_ref(&self) -> &str {
self.into()

View file

@ -289,15 +289,6 @@ lazy_static! {
..Default::default()
};
/// Self-write of mail
pub static ref IDM_PEOPLE_SELF_WRITE_MAIL_V1: BuiltinGroup = BuiltinGroup {
name: "idm_people_self_write_mail",
description: "Builtin IDM Group for people accounts to update their own mail.",
uuid: UUID_IDM_PEOPLE_SELF_MAIL_WRITE,
members: Vec::with_capacity(0),
..Default::default()
};
/// Self-write of mail
pub static ref IDM_PEOPLE_SELF_MAIL_WRITE_DL7: BuiltinGroup = BuiltinGroup {
name: "idm_people_self_mail_write",
@ -373,36 +364,7 @@ lazy_static! {
};
/// This must be the last group to init to include the UUID of the other high priv groups.
pub static ref IDM_HIGH_PRIVILEGE_V1: BuiltinGroup = BuiltinGroup {
name: "idm_high_privilege",
uuid: UUID_IDM_HIGH_PRIVILEGE,
entry_managed_by: Some(UUID_IDM_ACCESS_CONTROL_ADMINS),
description: "Builtin IDM provided groups with high levels of access that should be audited and limited in modification.",
members: vec![
UUID_SYSTEM_ADMINS,
UUID_IDM_ADMINS,
UUID_DOMAIN_ADMINS,
UUID_IDM_SERVICE_DESK,
UUID_IDM_RECYCLE_BIN_ADMINS,
UUID_IDM_SCHEMA_ADMINS,
UUID_IDM_ACCESS_CONTROL_ADMINS,
UUID_IDM_OAUTH2_ADMINS,
UUID_IDM_RADIUS_ADMINS,
UUID_IDM_ACCOUNT_POLICY_ADMINS,
UUID_IDM_RADIUS_SERVERS,
UUID_IDM_GROUP_ADMINS,
UUID_IDM_UNIX_ADMINS,
UUID_IDM_PEOPLE_PII_READ,
UUID_IDM_PEOPLE_ADMINS,
UUID_IDM_PEOPLE_ON_BOARDING,
UUID_IDM_SERVICE_ACCOUNT_ADMINS,
UUID_IDM_HIGH_PRIVILEGE,
],
..Default::default()
};
/// This must be the last group to init to include the UUID of the other high priv groups.
pub static ref IDM_HIGH_PRIVILEGE_DL7: BuiltinGroup = BuiltinGroup {
pub static ref IDM_HIGH_PRIVILEGE_DL8: BuiltinGroup = BuiltinGroup {
name: "idm_high_privilege",
uuid: UUID_IDM_HIGH_PRIVILEGE,
entry_managed_by: Some(UUID_IDM_ACCESS_CONTROL_ADMINS),
@ -426,12 +388,14 @@ lazy_static! {
UUID_IDM_PEOPLE_ON_BOARDING,
UUID_IDM_SERVICE_ACCOUNT_ADMINS,
UUID_IDM_CLIENT_CERTIFICATE_ADMINS,
UUID_IDM_APPLICATION_ADMINS,
UUID_IDM_MAIL_ADMINS,
UUID_IDM_HIGH_PRIVILEGE,
],
..Default::default()
};
pub static ref BUILTIN_GROUP_APPLICATION_ADMINS: BuiltinGroup = BuiltinGroup {
pub static ref BUILTIN_GROUP_APPLICATION_ADMINS_DL8: BuiltinGroup = BuiltinGroup {
name: "idm_application_admins",
uuid: UUID_IDM_APPLICATION_ADMINS,
description: "Builtin Application Administration Group.",
@ -458,17 +422,19 @@ pub fn idm_builtin_non_admin_groups() -> Vec<&'static BuiltinGroup> {
&BUILTIN_GROUP_PEOPLE_PII_READ,
&BUILTIN_GROUP_PEOPLE_ON_BOARDING,
&BUILTIN_GROUP_SERVICE_ACCOUNT_ADMINS,
&BUILTIN_GROUP_APPLICATION_ADMINS,
&BUILTIN_GROUP_MAIL_SERVICE_ADMINS_DL8,
&IDM_GROUP_ADMINS_V1,
&IDM_ALL_PERSONS,
&IDM_ALL_ACCOUNTS,
&BUILTIN_IDM_RADIUS_SERVERS_V1,
&BUILTIN_IDM_MAIL_SERVERS_DL8,
&IDM_PEOPLE_SELF_WRITE_MAIL_V1,
&BUILTIN_GROUP_PEOPLE_SELF_NAME_WRITE_DL7,
&IDM_PEOPLE_SELF_MAIL_WRITE_DL7,
&BUILTIN_GROUP_CLIENT_CERTIFICATE_ADMINS_DL7,
&BUILTIN_GROUP_APPLICATION_ADMINS_DL8,
// Write deps on read, so write must be added first.
// All members must exist before we write HP
&IDM_HIGH_PRIVILEGE_V1,
&IDM_HIGH_PRIVILEGE_DL8,
// other things
&IDM_UI_ENABLE_EXPERIMENTAL_FEATURES,
&IDM_ACCOUNT_MAIL_READ,

View file

@ -54,14 +54,6 @@ pub type DomainVersion = u32;
/// previously.
pub const DOMAIN_LEVEL_0: DomainVersion = 0;
/// Deprecated as of 1.3.0
pub const DOMAIN_LEVEL_5: DomainVersion = 5;
/// Domain Level introduced with 1.2.0.
/// Deprecated as of 1.4.0
pub const DOMAIN_LEVEL_6: DomainVersion = 6;
pub const PATCH_LEVEL_1: u32 = 1;
/// Domain Level introduced with 1.3.0.
/// Deprecated as of 1.5.0
pub const DOMAIN_LEVEL_7: DomainVersion = 7;
@ -79,22 +71,28 @@ pub const PATCH_LEVEL_2: u32 = 2;
/// Deprecated as of 1.8.0
pub const DOMAIN_LEVEL_10: DomainVersion = 10;
/// Domain Level introduced with 1.7.0.
/// Deprecated as of 1.9.0
pub const DOMAIN_LEVEL_11: DomainVersion = 11;
// The minimum level that we can re-migrate from.
// This should be DOMAIN_TGT_LEVEL minus 2
pub const DOMAIN_MIN_REMIGRATION_LEVEL: DomainVersion = DOMAIN_LEVEL_7;
pub const DOMAIN_MIN_REMIGRATION_LEVEL: DomainVersion = DOMAIN_LEVEL_8;
// The minimum supported domain functional level (for replication)
pub const DOMAIN_MIN_LEVEL: DomainVersion = DOMAIN_TGT_LEVEL;
// The previous releases domain functional level
pub const DOMAIN_PREVIOUS_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_8;
pub const DOMAIN_PREVIOUS_TGT_LEVEL: DomainVersion = DOMAIN_TGT_LEVEL - 1;
// The target supported domain functional level. During development this is
// the NEXT level that users will upgrade too.
pub const DOMAIN_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_9;
// the NEXT level that users will upgrade to. In other words, if we are
// developing 1.6.0-dev, then we need to set TGT_LEVEL to 10, which is
// the corresponding level.
pub const DOMAIN_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_10;
// The current patch level if any out of band fixes are required.
pub const DOMAIN_TGT_PATCH_LEVEL: u32 = PATCH_LEVEL_2;
// The target domain functional level for the SUBSEQUENT release/dev cycle.
pub const DOMAIN_TGT_NEXT_LEVEL: DomainVersion = DOMAIN_LEVEL_10;
pub const DOMAIN_TGT_NEXT_LEVEL: DomainVersion = DOMAIN_TGT_LEVEL + 1;
// The maximum supported domain functional level
pub const DOMAIN_MAX_LEVEL: DomainVersion = DOMAIN_LEVEL_10;
pub const DOMAIN_MAX_LEVEL: DomainVersion = DOMAIN_LEVEL_11;
// On test builds define to 60 seconds
#[cfg(test)]
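Worked out against the constants in this hunk (a reading aid, not part of the change; levels are named by their numeric value):

// DOMAIN_TGT_LEVEL             = DOMAIN_LEVEL_10                 (developing 1.6.0-dev)
// DOMAIN_PREVIOUS_TGT_LEVEL    = DOMAIN_TGT_LEVEL - 1  = 9
// DOMAIN_TGT_NEXT_LEVEL        = DOMAIN_TGT_LEVEL + 1  = 11
// DOMAIN_MIN_REMIGRATION_LEVEL = DOMAIN_LEVEL_8        = DOMAIN_TGT_LEVEL - 2
// DOMAIN_MAX_LEVEL             = DOMAIN_LEVEL_11       = DOMAIN_TGT_NEXT_LEVEL
// DOMAIN_MIN_LEVEL             = DOMAIN_TGT_LEVEL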

View file

@ -167,6 +167,17 @@ pub static ref SCHEMA_ATTR_DOMAIN_LDAP_BASEDN: SchemaAttribute = SchemaAttribute
..Default::default()
};
pub static ref SCHEMA_ATTR_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES: SchemaAttribute = SchemaAttribute {
uuid: UUID_SCHEMA_ATTR_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES,
name: Attribute::LdapMaxQueryableAttrs,
description: "The maximum number of LDAP attributes that can be queried in one operation".to_string(),
multivalue: false,
sync_allowed: true,
syntax: SyntaxType::Uint32,
..Default::default()
};
pub static ref SCHEMA_ATTR_DOMAIN_DISPLAY_NAME: SchemaAttribute = SchemaAttribute {
uuid: UUID_SCHEMA_ATTR_DOMAIN_DISPLAY_NAME,
name: Attribute::DomainDisplayName,
@ -208,6 +219,16 @@ pub static ref SCHEMA_ATTR_DENIED_NAME: SchemaAttribute = SchemaAttribute {
..Default::default()
};
pub static ref SCHEMA_ATTR_DENIED_NAME_DL10: SchemaAttribute = SchemaAttribute {
uuid: UUID_SCHEMA_ATTR_DENIED_NAME,
name: Attribute::DeniedName,
description: "Iname values that are not allowed to be used in 'name'.".to_string(),
syntax: SyntaxType::Utf8StringIname,
multivalue: true,
..Default::default()
};
pub static ref SCHEMA_ATTR_DOMAIN_TOKEN_KEY: SchemaAttribute = SchemaAttribute {
uuid: UUID_SCHEMA_ATTR_DOMAIN_TOKEN_KEY,
name: Attribute::DomainTokenKey,
@ -1209,6 +1230,31 @@ pub static ref SCHEMA_CLASS_DOMAIN_INFO_DL9: SchemaClass = SchemaClass {
..Default::default()
};
pub static ref SCHEMA_CLASS_DOMAIN_INFO_DL10: SchemaClass = SchemaClass {
uuid: UUID_SCHEMA_CLASS_DOMAIN_INFO,
name: EntryClass::DomainInfo.into(),
description: "Local domain information and configuration".to_string(),
systemmay: vec![
Attribute::DomainSsid,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::LdapAllowUnixPwBind,
Attribute::Image,
Attribute::PatchLevel,
Attribute::DomainDevelopmentTaint,
Attribute::DomainAllowEasterEggs,
Attribute::DomainDisplayName,
],
systemmust: vec![
Attribute::Name,
Attribute::DomainUuid,
Attribute::DomainName,
Attribute::Version,
],
..Default::default()
};
pub static ref SCHEMA_CLASS_POSIXGROUP: SchemaClass = SchemaClass {
uuid: UUID_SCHEMA_CLASS_POSIXGROUP,
name: EntryClass::PosixGroup.into(),

View file

@ -131,7 +131,8 @@ pub const UUID_SCHEMA_ATTR_PRIMARY_CREDENTIAL: Uuid = uuid!("00000000-0000-0000-
pub const UUID_SCHEMA_CLASS_PERSON: Uuid = uuid!("00000000-0000-0000-0000-ffff00000044");
pub const UUID_SCHEMA_CLASS_GROUP: Uuid = uuid!("00000000-0000-0000-0000-ffff00000045");
pub const UUID_SCHEMA_CLASS_ACCOUNT: Uuid = uuid!("00000000-0000-0000-0000-ffff00000046");
// GAP - 47
pub const UUID_SCHEMA_ATTR_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000187");
pub const UUID_SCHEMA_ATTR_ATTRIBUTENAME: Uuid = uuid!("00000000-0000-0000-0000-ffff00000048");
pub const UUID_SCHEMA_ATTR_CLASSNAME: Uuid = uuid!("00000000-0000-0000-0000-ffff00000049");
pub const UUID_SCHEMA_ATTR_LEGALNAME: Uuid = uuid!("00000000-0000-0000-0000-ffff00000050");

View file

@ -702,6 +702,13 @@ impl Credential {
}
}
pub(crate) fn has_totp_by_name(&self, label: &str) -> bool {
match &self.type_ {
CredentialType::PasswordMfa(_, totp, _, _) => totp.contains_key(label),
_ => false,
}
}
pub(crate) fn new_from_generatedpassword(pw: Password) -> Self {
Credential {
type_: CredentialType::GeneratedPassword(pw),

View file

@ -2912,6 +2912,7 @@ impl<VALID, STATE> Entry<VALID, STATE> {
false
}
FilterResolved::AndNot(f, _) => !self.entry_match_no_index_inner(f),
FilterResolved::Invalid(_) => false,
}
}

View file

@ -244,13 +244,15 @@ impl SearchEvent {
ident: &Identity,
filter: Filter<FilterValid>,
filter_orig: Filter<FilterValid>,
attrs: Option<BTreeSet<Attribute>>,
effective_access_check: bool,
) -> Self {
SearchEvent {
ident: Identity::from_impersonate(ident),
filter,
filter_orig,
attrs: None,
effective_access_check: false,
attrs,
effective_access_check,
}
}

View file

@ -23,6 +23,7 @@ use hashbrown::HashMap;
use hashbrown::HashSet;
use kanidm_proto::constants::ATTR_UUID;
use kanidm_proto::internal::{Filter as ProtoFilter, OperationError, SchemaError};
use kanidm_proto::scim_v1::client::{AttrPath as ScimAttrPath, ScimFilter};
use ldap3_proto::proto::{LdapFilter, LdapSubstringFilter};
use serde::Deserialize;
use uuid::Uuid;
@ -83,6 +84,10 @@ pub fn f_self() -> FC {
FC::SelfUuid
}
pub fn f_invalid(a: Attribute) -> FC {
FC::Invalid(a)
}
pub fn f_id(uuid: &str) -> FC {
let uf = Uuid::parse_str(uuid)
.ok()
@ -117,6 +122,7 @@ pub enum FC {
Inclusion(Vec<FC>),
AndNot(Box<FC>),
SelfUuid,
Invalid(Attribute),
// Not(Box<FC>),
}
@ -135,6 +141,7 @@ enum FilterComp {
Inclusion(Vec<FilterComp>),
AndNot(Box<FilterComp>),
SelfUuid,
Invalid(Attribute),
// Does this mean we can add a true not to the type now?
// Not(Box<FilterComp>),
}
@ -196,12 +203,15 @@ impl fmt::Debug for FilterComp {
FilterComp::SelfUuid => {
write!(f, "uuid eq self")
}
FilterComp::Invalid(attr) => {
write!(f, "invalid ( {:?} )", attr)
}
}
}
}
/// This is the fully resolved internal representation. Note the lack of Not and selfUUID
/// because these are resolved into And(Pres(class), AndNot(term)) and Eq(uuid, ...).
/// because these are resolved into And(Pres(class), AndNot(term)) and Eq(uuid, ...) respectively.
/// Importantly, we make this accessible to Entry so that it can then match on filters
/// internally.
///
@ -221,6 +231,7 @@ pub enum FilterResolved {
LessThan(Attribute, PartialValue, Option<NonZeroU8>),
Or(Vec<FilterResolved>, Option<NonZeroU8>),
And(Vec<FilterResolved>, Option<NonZeroU8>),
Invalid(Attribute),
// All terms must have 1 or more items, or the inclusion is false!
Inclusion(Vec<FilterResolved>, Option<NonZeroU8>),
AndNot(Box<FilterResolved>, Option<NonZeroU8>),
@ -310,6 +321,9 @@ impl fmt::Debug for FilterResolved {
FilterResolved::AndNot(inner, idx) => {
write!(f, "not (s{} {:?})", idx.unwrap_or(NonZeroU8::MAX), inner)
}
FilterResolved::Invalid(attr) => {
write!(f, "{} inv", attr)
}
}
}
}
@ -751,6 +765,21 @@ impl Filter<FilterInvalid> {
},
})
}
#[instrument(name = "filter::from_scim_ro", level = "trace", skip_all)]
pub fn from_scim_ro(
ev: &Identity,
f: &ScimFilter,
qs: &mut QueryServerReadTransaction,
) -> Result<Self, OperationError> {
let depth = DEFAULT_LIMIT_FILTER_DEPTH_MAX as usize;
let mut elems = ev.limits().filter_max_elements;
Ok(Filter {
state: FilterInvalid {
inner: FilterComp::from_scim_ro(f, qs, depth, &mut elems)?,
},
})
}
}
impl FromStr for Filter<FilterInvalid> {
@ -777,6 +806,7 @@ impl FilterComp {
FC::Inclusion(v) => FilterComp::Inclusion(v.into_iter().map(FilterComp::new).collect()),
FC::AndNot(b) => FilterComp::AndNot(Box::new(FilterComp::new(*b))),
FC::SelfUuid => FilterComp::SelfUuid,
FC::Invalid(a) => FilterComp::Invalid(a),
}
}
@ -804,7 +834,8 @@ impl FilterComp {
| FilterComp::Stw(attr, _)
| FilterComp::Enw(attr, _)
| FilterComp::Pres(attr)
| FilterComp::LessThan(attr, _) => {
| FilterComp::LessThan(attr, _)
| FilterComp::Invalid(attr) => {
r_set.insert(attr.clone());
}
FilterComp::Or(vs) => vs.iter().for_each(|f| f.get_attr_set(r_set)),
@ -952,6 +983,11 @@ impl FilterComp {
// Pretty hard to mess this one up ;)
Ok(FilterComp::SelfUuid)
}
FilterComp::Invalid(attr) => {
// FilterComp may be Invalid, but Invalid is still a valid value here.
// We continue the evaluation so that OR queries can still succeed.
Ok(FilterComp::Invalid(attr.clone()))
}
}
}
@ -1067,38 +1103,32 @@ impl FilterComp {
elems: &mut usize,
) -> Result<Self, OperationError> {
let ndepth = depth.checked_sub(1).ok_or(OperationError::ResourceLimit)?;
*elems = (*elems)
.checked_sub(1)
.ok_or(OperationError::ResourceLimit)?;
Ok(match f {
LdapFilter::And(l) => {
*elems = (*elems)
.checked_sub(l.len())
.ok_or(OperationError::ResourceLimit)?;
FilterComp::And(
l.iter()
.map(|f| Self::from_ldap_ro(f, qs, ndepth, elems))
.collect::<Result<Vec<_>, _>>()?,
)
}
LdapFilter::Or(l) => {
*elems = (*elems)
.checked_sub(l.len())
.ok_or(OperationError::ResourceLimit)?;
FilterComp::Or(
l.iter()
.map(|f| Self::from_ldap_ro(f, qs, ndepth, elems))
.collect::<Result<Vec<_>, _>>()?,
)
}
LdapFilter::And(l) => FilterComp::And(
l.iter()
.map(|f| Self::from_ldap_ro(f, qs, ndepth, elems))
.collect::<Result<Vec<_>, _>>()?,
),
LdapFilter::Or(l) => FilterComp::Or(
l.iter()
.map(|f| Self::from_ldap_ro(f, qs, ndepth, elems))
.collect::<Result<Vec<_>, _>>()?,
),
LdapFilter::Not(l) => {
*elems = (*elems)
.checked_sub(1)
.ok_or(OperationError::ResourceLimit)?;
FilterComp::AndNot(Box::new(Self::from_ldap_ro(l, qs, ndepth, elems)?))
}
LdapFilter::Equality(a, v) => {
let a = ldap_attr_filter_map(a);
let v = qs.clone_partialvalue(&a, v)?;
FilterComp::Eq(a, v)
let pv = qs.clone_partialvalue(&a, v);
match pv {
Ok(pv) => FilterComp::Eq(a, pv),
Err(_) if a == Attribute::Spn => FilterComp::Invalid(a),
Err(err) => return Err(err),
}
}
LdapFilter::Present(a) => FilterComp::Pres(ldap_attr_filter_map(a)),
LdapFilter::Substring(
@ -1147,6 +1177,103 @@ impl FilterComp {
}
})
}
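A standalone sketch of the element-budget pattern used in from_ldap_ro above (the Node type and error string are illustrative, not the crate's types): each filter node now charges exactly one unit up front, instead of And/Or charging by the length of their child lists.

enum Node {
    Leaf,
    And(Vec<Node>),
    Or(Vec<Node>),
}

fn charge(node: &Node, budget: &mut usize) -> Result<(), &'static str> {
    // Every node, including And/Or themselves, costs one unit.
    *budget = budget.checked_sub(1).ok_or("ResourceLimit")?;
    match node {
        Node::Leaf => Ok(()),
        Node::And(children) | Node::Or(children) => {
            children.iter().try_for_each(|child| charge(child, budget))
        }
    }
}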
fn from_scim_ro(
f: &ScimFilter,
qs: &mut QueryServerReadTransaction,
depth: usize,
elems: &mut usize,
) -> Result<Self, OperationError> {
let ndepth = depth.checked_sub(1).ok_or(OperationError::ResourceLimit)?;
*elems = (*elems)
.checked_sub(1)
.ok_or(OperationError::ResourceLimit)?;
Ok(match f {
ScimFilter::Present(ScimAttrPath { a, s: None }) => FilterComp::Pres(a.clone()),
ScimFilter::Equal(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
FilterComp::Eq(a.clone(), pv)
}
ScimFilter::Contains(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
FilterComp::Cnt(a.clone(), pv)
}
ScimFilter::StartsWith(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
FilterComp::Stw(a.clone(), pv)
}
ScimFilter::EndsWith(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
FilterComp::Enw(a.clone(), pv)
}
ScimFilter::Greater(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
// Greater is equivalent to "not equal or less than".
FilterComp::And(vec![
FilterComp::Pres(a.clone()),
FilterComp::AndNot(Box::new(FilterComp::Or(vec![
FilterComp::LessThan(a.clone(), pv.clone()),
FilterComp::Eq(a.clone(), pv),
]))),
])
}
ScimFilter::Less(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
FilterComp::LessThan(a.clone(), pv)
}
ScimFilter::GreaterOrEqual(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
// Greater or equal is equivalent to "not less than".
FilterComp::And(vec![
FilterComp::Pres(a.clone()),
FilterComp::AndNot(Box::new(FilterComp::LessThan(a.clone(), pv.clone()))),
])
}
ScimFilter::LessOrEqual(ScimAttrPath { a, s: None }, json_value) => {
let pv = qs.resolve_scim_json_get(a, json_value)?;
FilterComp::Or(vec![
FilterComp::LessThan(a.clone(), pv.clone()),
FilterComp::Eq(a.clone(), pv),
])
}
ScimFilter::Not(f) => {
let f = Self::from_scim_ro(f, qs, ndepth, elems)?;
FilterComp::AndNot(Box::new(f))
}
ScimFilter::Or(left, right) => {
let left = Self::from_scim_ro(left, qs, ndepth, elems)?;
let right = Self::from_scim_ro(right, qs, ndepth, elems)?;
FilterComp::Or(vec![left, right])
}
ScimFilter::And(left, right) => {
let left = Self::from_scim_ro(left, qs, ndepth, elems)?;
let right = Self::from_scim_ro(right, qs, ndepth, elems)?;
FilterComp::And(vec![left, right])
}
ScimFilter::NotEqual(ScimAttrPath { s: None, .. }, _) => {
error!("Unsupported filter operation - not-equal");
return Err(OperationError::FilterGeneration);
}
ScimFilter::Present(ScimAttrPath { s: Some(_), .. })
| ScimFilter::Equal(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::NotEqual(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::Contains(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::StartsWith(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::EndsWith(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::Greater(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::Less(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::GreaterOrEqual(ScimAttrPath { s: Some(_), .. }, _)
| ScimFilter::LessOrEqual(ScimAttrPath { s: Some(_), .. }, _) => {
error!("Unsupported filter operation - sub-attribute");
return Err(OperationError::FilterGeneration);
}
ScimFilter::Complex(..) => {
error!("Unsupported filter operation - complex");
return Err(OperationError::FilterGeneration);
}
})
}
}
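A minimal check of the rewrites used for Greater, GreaterOrEqual and LessOrEqual above, which rely only on Eq, LessThan, Pres, Or and AndNot; plain integers stand in for attribute values, and Pres is omitted because it is trivially true for a concrete value.

fn main() {
    let bound = 5_i64;
    for x in 0..10_i64 {
        // gt  ==  pres && not (lt or eq)
        assert_eq!(x > bound, !((x < bound) || (x == bound)));
        // ge  ==  pres && not lt
        assert_eq!(x >= bound, !(x < bound));
        // le  ==  lt or eq
        assert_eq!(x <= bound, (x < bound) || (x == bound));
    }
}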
/* We only configure partial eq if cfg test on the invalid/valid types */
@ -1267,6 +1394,7 @@ impl FilterResolved {
FilterResolved::Eq(a, v, idx)
}
FilterComp::SelfUuid => panic!("Not possible to resolve SelfUuid in from_invalid!"),
FilterComp::Invalid(attr) => FilterResolved::Invalid(attr),
FilterComp::Cnt(a, v) => {
let idx = idxmeta.contains(&(&a, &IndexType::SubString));
let idx = NonZeroU8::new(idx as u8);
@ -1340,6 +1468,7 @@ impl FilterResolved {
| FilterComp::Stw(..)
| FilterComp::Enw(..)
| FilterComp::Pres(_)
| FilterComp::Invalid(_)
| FilterComp::LessThan(..) => true,
}
}
@ -1435,6 +1564,7 @@ impl FilterResolved {
FilterResolved::resolve_idx((*f).clone(), ev, idxmeta)
.map(|fi| FilterResolved::AndNot(Box::new(fi), None))
}
FilterComp::Invalid(attr) => Some(FilterResolved::Invalid(attr)),
}
}
@ -1493,6 +1623,7 @@ impl FilterResolved {
FilterResolved::resolve_no_idx((*f).clone(), ev)
.map(|fi| FilterResolved::AndNot(Box::new(fi), None))
}
FilterComp::Invalid(attr) => Some(FilterResolved::Invalid(attr)),
}
}
@ -1632,6 +1763,8 @@ impl FilterResolved {
| FilterResolved::And(_, sf)
| FilterResolved::Inclusion(_, sf)
| FilterResolved::AndNot(_, sf) => *sf,
// We hard code 1 because there is no slope for an invalid filter
FilterResolved::Invalid(_) => NonZeroU8::new(1),
}
}
}

View file

@ -32,7 +32,7 @@ impl IdmServerProxyReadTransaction<'_> {
// _ext reduces the entries based on access.
let oauth2_related = self
.qs_read
.impersonate_search_ext(f_executed, f_intent, ident)?;
.impersonate_search_ext(f_executed, f_intent, ident, None, false)?;
trace!(?oauth2_related);
// Aggregate results to a Vec of AppLink

View file

@ -86,6 +86,7 @@ enum MfaRegState {
None,
TotpInit(Totp),
TotpTryAgain(Totp),
TotpNameTryAgain(Totp, String),
TotpInvalidSha1(Totp, Totp, String),
Passkey(Box<CreationChallengeResponse>, PasskeyRegistration),
#[allow(dead_code)]
@ -98,6 +99,7 @@ impl fmt::Debug for MfaRegState {
MfaRegState::None => "MfaRegState::None",
MfaRegState::TotpInit(_) => "MfaRegState::TotpInit",
MfaRegState::TotpTryAgain(_) => "MfaRegState::TotpTryAgain",
MfaRegState::TotpNameTryAgain(_, _) => "MfaRegState::TotpNameTryAgain",
MfaRegState::TotpInvalidSha1(_, _, _) => "MfaRegState::TotpInvalidSha1",
MfaRegState::Passkey(_, _) => "MfaRegState::Passkey",
MfaRegState::AttestedPasskey(_, _) => "MfaRegState::AttestedPasskey",
@ -273,6 +275,7 @@ pub enum MfaRegStateStatus {
None,
TotpCheck(TotpSecret),
TotpTryAgain,
TotpNameTryAgain(String),
TotpInvalidSha1,
BackupCodes(HashSet<String>),
Passkey(CreationChallengeResponse),
@ -283,8 +286,9 @@ impl fmt::Debug for MfaRegStateStatus {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let t = match self {
MfaRegStateStatus::None => "MfaRegStateStatus::None",
MfaRegStateStatus::TotpCheck(_) => "MfaRegStateStatus::TotpCheck(_)",
MfaRegStateStatus::TotpCheck(_) => "MfaRegStateStatus::TotpCheck",
MfaRegStateStatus::TotpTryAgain => "MfaRegStateStatus::TotpTryAgain",
MfaRegStateStatus::TotpNameTryAgain(_) => "MfaRegStateStatus::TotpNameTryAgain",
MfaRegStateStatus::TotpInvalidSha1 => "MfaRegStateStatus::TotpInvalidSha1",
MfaRegStateStatus::BackupCodes(_) => "MfaRegStateStatus::BackupCodes",
MfaRegStateStatus::Passkey(_) => "MfaRegStateStatus::Passkey",
@ -389,6 +393,7 @@ impl Into<CUStatus> for CredentialUpdateSessionStatus {
MfaRegStateStatus::None => CURegState::None,
MfaRegStateStatus::TotpCheck(c) => CURegState::TotpCheck(c),
MfaRegStateStatus::TotpTryAgain => CURegState::TotpTryAgain,
MfaRegStateStatus::TotpNameTryAgain(label) => CURegState::TotpNameTryAgain(label),
MfaRegStateStatus::TotpInvalidSha1 => CURegState::TotpInvalidSha1,
MfaRegStateStatus::BackupCodes(s) => {
CURegState::BackupCodes(s.into_iter().collect())
@ -469,6 +474,9 @@ impl From<&CredentialUpdateSession> for CredentialUpdateSessionStatus {
MfaRegState::TotpInit(token) => MfaRegStateStatus::TotpCheck(
token.to_proto(session.account.name.as_str(), session.issuer.as_str()),
),
MfaRegState::TotpNameTryAgain(_, name) => {
MfaRegStateStatus::TotpNameTryAgain(name.clone())
}
MfaRegState::TotpTryAgain(_) => MfaRegStateStatus::TotpTryAgain,
MfaRegState::TotpInvalidSha1(_, _, _) => MfaRegStateStatus::TotpInvalidSha1,
MfaRegState::Passkey(r, _) => MfaRegStateStatus::Passkey(r.as_ref().clone()),
@ -1899,7 +1907,22 @@ impl IdmServerCredUpdateTransaction<'_> {
match &session.mfaregstate {
MfaRegState::TotpInit(totp_token)
| MfaRegState::TotpTryAgain(totp_token)
| MfaRegState::TotpNameTryAgain(totp_token, _)
| MfaRegState::TotpInvalidSha1(totp_token, _, _) => {
if session
.primary
.as_ref()
.map(|cred| cred.has_totp_by_name(label))
.unwrap_or_default()
|| label.trim().is_empty()
|| !Value::validate_str_escapes(label)
{
// The label is a duplicate, empty, or contains invalid characters. Let's save the user from themselves.
session.mfaregstate =
MfaRegState::TotpNameTryAgain(totp_token.clone(), label.into());
return Ok(session.deref().into());
}
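The checks just above, restated as a standalone predicate (a sketch only; the escape check is stood in by a placeholder closure, since the real validation lives on Value::validate_str_escapes):

fn totp_label_is_acceptable(existing_labels: &[&str], label: &str) -> bool {
    // Placeholder for Value::validate_str_escapes - illustrative only.
    let validate_str_escapes =
        |s: &str| !s.contains(|c: char| matches!(c, '<' | '>' | '"' | '\''));
    !existing_labels.contains(&label)   // no duplicate label
        && !label.trim().is_empty()     // not empty or whitespace
        && validate_str_escapes(label)  // no invalid escapes
}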
if totp_token.verify(totp_chal, ct) {
// It was valid. Update the credential.
let ncred = session
@ -3368,10 +3391,39 @@ mod tests {
.credential_primary_check_totp(&cust, ct, chal + 1, "totp")
.expect("Failed to update the primary cred totp");
assert!(matches!(
c_status.mfaregstate,
MfaRegStateStatus::TotpTryAgain
));
assert!(
matches!(c_status.mfaregstate, MfaRegStateStatus::TotpTryAgain),
"{:?}",
c_status.mfaregstate
);
// Check that the user actually put something into the label
let c_status = cutxn
.credential_primary_check_totp(&cust, ct, chal, "")
.expect("Failed to update the primary cred totp");
assert!(
matches!(
c_status.mfaregstate,
MfaRegStateStatus::TotpNameTryAgain(ref val) if val == ""
),
"{:?}",
c_status.mfaregstate
);
// Okay, now they are trying to be smart...
let c_status = cutxn
.credential_primary_check_totp(&cust, ct, chal, " ")
.expect("Failed to update the primary cred totp");
assert!(
matches!(
c_status.mfaregstate,
MfaRegStateStatus::TotpNameTryAgain(ref val) if val == " "
),
"{:?}",
c_status.mfaregstate
);
let c_status = cutxn
.credential_primary_check_totp(&cust, ct, chal, "totp")
@ -3383,6 +3435,40 @@ mod tests {
_ => false,
});
{
let c_status = cutxn
.credential_primary_init_totp(&cust, ct)
.expect("Failed to update the primary cred password");
// Check the status has the token.
let totp_token: Totp = match c_status.mfaregstate {
MfaRegStateStatus::TotpCheck(secret) => Some(secret.try_into().unwrap()),
_ => None,
}
.expect("Unable to retrieve totp token, invalid state.");
trace!(?totp_token);
let chal = totp_token
.do_totp_duration_from_epoch(&ct)
.expect("Failed to perform totp step");
// They tried to add a second totp under the same name
let c_status = cutxn
.credential_primary_check_totp(&cust, ct, chal, "totp")
.expect("Failed to update the primary cred totp");
assert!(
matches!(
c_status.mfaregstate,
MfaRegStateStatus::TotpNameTryAgain(ref val) if val == "totp"
),
"{:?}",
c_status.mfaregstate
);
assert!(cutxn.credential_update_cancel_mfareg(&cust, ct).is_ok())
}
// Should be okay now!
drop(cutxn);

View file

@ -60,6 +60,7 @@ pub struct LdapServer {
basedn: String,
dnre: Regex,
binddnre: Regex,
max_queryable_attrs: usize,
}
#[derive(Debug)]
@ -79,6 +80,12 @@ impl LdapServer {
.qs_read
.internal_search_uuid(UUID_DOMAIN_INFO)?;
// Get the maximum number of queryable attributes from the domain entry
let max_queryable_attrs = domain_entry
.get_ava_single_uint32(Attribute::LdapMaxQueryableAttrs)
.map(|u| u as usize)
.unwrap_or(DEFAULT_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES);
let basedn = domain_entry
.get_ava_single_iutf8(Attribute::DomainLdapBasedn)
.map(|s| s.to_string())
@ -154,6 +161,7 @@ impl LdapServer {
basedn,
dnre,
binddnre,
max_queryable_attrs,
})
}
@ -205,9 +213,9 @@ impl LdapServer {
// Map the Some(a,v) to ...?
let ext_filter = match (&sr.scope, req_dn) {
// OneLevel and Child searches are veerrrryyy similar for us because child
// OneLevel and Child searches are **very** similar for us because child
// is a "subtree search excluding base". Because we don't have a tree structure at
// all, this is the same as a onelevel (ald children of base excludeing base).
// all, this is the same as a one level (all children of base excluding base).
(LdapSearchScope::Children, Some(_r)) | (LdapSearchScope::OneLevel, Some(_r)) => {
return Ok(vec![sr.gen_success()])
}
@ -239,11 +247,11 @@ impl LdapServer {
let mut all_attrs = false;
let mut all_op_attrs = false;
// TODO #67: limit the number of attributes here!
let attrs_len = sr.attrs.len();
if sr.attrs.is_empty() {
// If [], then "all" attrs
all_attrs = true;
} else {
} else if attrs_len < self.max_queryable_attrs {
sr.attrs.iter().for_each(|a| {
if a == "*" {
all_attrs = true;
@ -267,6 +275,12 @@ impl LdapServer {
}
}
})
} else {
admin_error!(
"Too many LDAP attributes requested. Maximum allowed is {}, while your search query had {}",
self.max_queryable_attrs, attrs_len
);
return Err(OperationError::ResourceLimit);
}
// We need to retain this to know what the client requested.
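A reduced sketch of the gate introduced above; the function name is illustrative, and DEFAULT_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES is the fallback used when the domain entry does not set Attribute::LdapMaxQueryableAttrs.

fn check_requested_attrs(
    requested: &[String],
    max_queryable_attrs: usize,
) -> Result<(), &'static str> {
    if requested.is_empty() {
        // An empty attribute list means "all attributes" and is always allowed.
        Ok(())
    } else if requested.len() < max_queryable_attrs {
        // Within the configured limit - continue resolving the attribute names.
        Ok(())
    } else {
        Err("ResourceLimit: too many LDAP attributes requested")
    }
}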
@ -1158,6 +1172,88 @@ mod tests {
assert_eq!(r1.len(), r2.len());
}
#[idm_test]
async fn test_ldap_spn_search(idms: &IdmServer, _idms_delayed: &IdmServerDelayed) {
let ldaps = LdapServer::new(idms).await.expect("failed to start ldap");
let usr_uuid = Uuid::new_v4();
let usr_name = "panko";
// Setup person, group and application
{
let e1: Entry<EntryInit, EntryNew> = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname(usr_name)),
(Attribute::Uuid, Value::Uuid(usr_uuid)),
(Attribute::DisplayName, Value::new_utf8s(usr_name))
);
let ct = duration_from_epoch_now();
let mut server_txn = idms.proxy_write(ct).await.unwrap();
assert!(server_txn
.qs_write
.internal_create(vec![e1])
.and_then(|_| server_txn.commit())
.is_ok());
}
// Setup the anonymous login
let anon_t = ldaps.do_bind(idms, "", "").await.unwrap().unwrap();
assert_eq!(
anon_t.effective_session,
LdapSession::UnixBind(UUID_ANONYMOUS)
);
// Searching a malformed spn shouldn't cause the query to fail
let sr = SearchRequest {
msgid: 1,
base: format!("dc=example,dc=com"),
scope: LdapSearchScope::Subtree,
filter: LdapFilter::Or(vec![
LdapFilter::Equality(Attribute::Name.to_string(), usr_name.to_string()),
LdapFilter::Equality(Attribute::Spn.to_string(), usr_name.to_string()),
]),
attrs: vec!["*".to_string()],
};
let result = ldaps
.do_search(idms, &sr, &anon_t, Source::Internal)
.await
.map(|r| {
r.into_iter()
.filter(|r| matches!(r.op, LdapOp::SearchResultEntry(_)))
.collect::<Vec<_>>()
})
.unwrap();
assert!(!result.is_empty());
let sr = SearchRequest {
msgid: 1,
base: format!("dc=example,dc=com"),
scope: LdapSearchScope::Subtree,
filter: LdapFilter::And(vec![
LdapFilter::Equality(Attribute::Name.to_string(), usr_name.to_string()),
LdapFilter::Equality(Attribute::Spn.to_string(), usr_name.to_string()),
]),
attrs: vec!["*".to_string()],
};
let empty_result = ldaps
.do_search(idms, &sr, &anon_t, Source::Internal)
.await
.map(|r| {
r.into_iter()
.filter(|r| matches!(r.op, LdapOp::SearchResultEntry(_)))
.collect::<Vec<_>>()
})
.unwrap();
assert!(empty_result.is_empty());
}
#[idm_test]
async fn test_ldap_application_bind(idms: &IdmServer, _idms_delayed: &IdmServerDelayed) {
let ldaps = LdapServer::new(idms).await.expect("failed to start ldap");
@ -2549,4 +2645,106 @@ mod tests {
&OperationError::InvalidAttributeName("invalid".to_string()),
);
}
#[idm_test]
async fn test_ldap_maximum_queryable_attributes(
idms: &IdmServer,
_idms_delayed: &IdmServerDelayed,
) {
// Set the max queryable attrs to 2
let mut server_txn = idms.proxy_write(duration_from_epoch_now()).await.unwrap();
let set_ldap_maximum_queryable_attrs = ModifyEvent::new_internal_invalid(
filter!(f_eq(Attribute::Uuid, PartialValue::Uuid(UUID_DOMAIN_INFO))),
ModifyList::new_purge_and_set(Attribute::LdapMaxQueryableAttrs, Value::Uint32(2)),
);
assert!(server_txn
.qs_write
.modify(&set_ldap_maximum_queryable_attrs)
.and_then(|_| server_txn.commit())
.is_ok());
let ldaps = LdapServer::new(idms).await.expect("failed to start ldap");
let usr_uuid = Uuid::new_v4();
let grp_uuid = Uuid::new_v4();
let app_uuid = Uuid::new_v4();
let app_name = "testapp1";
// Setup person, group and application
{
let e1 = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson1")),
(Attribute::Uuid, Value::Uuid(usr_uuid)),
(Attribute::Description, Value::new_utf8s("testperson1")),
(Attribute::DisplayName, Value::new_utf8s("testperson1"))
);
let e2 = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::Group.to_value()),
(Attribute::Name, Value::new_iname("testgroup1")),
(Attribute::Uuid, Value::Uuid(grp_uuid))
);
let e3 = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::ServiceAccount.to_value()),
(Attribute::Class, EntryClass::Application.to_value()),
(Attribute::Name, Value::new_iname(app_name)),
(Attribute::Uuid, Value::Uuid(app_uuid)),
(Attribute::LinkedGroup, Value::Refer(grp_uuid))
);
let ct = duration_from_epoch_now();
let mut server_txn = idms.proxy_write(ct).await.unwrap();
assert!(server_txn
.qs_write
.internal_create(vec![e1, e2, e3])
.and_then(|_| server_txn.commit())
.is_ok());
}
// Setup the anonymous login
let anon_t = ldaps.do_bind(idms, "", "").await.unwrap().unwrap();
assert_eq!(
anon_t.effective_session,
LdapSession::UnixBind(UUID_ANONYMOUS)
);
let invalid_search = SearchRequest {
msgid: 1,
base: "dc=example,dc=com".to_string(),
scope: LdapSearchScope::Subtree,
filter: LdapFilter::Present(Attribute::ObjectClass.to_string()),
attrs: vec![
"objectClass".to_string(),
"cn".to_string(),
"givenName".to_string(),
],
};
let valid_search = SearchRequest {
msgid: 1,
base: "dc=example,dc=com".to_string(),
scope: LdapSearchScope::Subtree,
filter: LdapFilter::Present(Attribute::ObjectClass.to_string()),
attrs: vec!["objectClass: person".to_string()],
};
let invalid_res: Result<Vec<LdapMsg>, OperationError> = ldaps
.do_search(idms, &invalid_search, &anon_t, Source::Internal)
.await;
let valid_res: Result<Vec<LdapMsg>, OperationError> = ldaps
.do_search(idms, &valid_search, &anon_t, Source::Internal)
.await;
assert_eq!(invalid_res, Err(OperationError::ResourceLimit));
assert!(valid_res.is_ok());
}
}

View file

@ -32,7 +32,7 @@ pub use kanidm_proto::oauth2::{
AccessTokenIntrospectRequest, AccessTokenIntrospectResponse, AccessTokenRequest,
AccessTokenResponse, AuthorisationRequest, CodeChallengeMethod, ErrorResponse, GrantTypeReq,
OAuth2RFC9068Token, OAuth2RFC9068TokenExtensions, Oauth2Rfc8414MetadataResponse,
OidcDiscoveryResponse, PkceAlg, TokenRevokeRequest,
OidcDiscoveryResponse, OidcWebfingerRel, OidcWebfingerResponse, PkceAlg, TokenRevokeRequest,
};
use kanidm_proto::oauth2::{
@ -248,24 +248,32 @@ impl AuthorisePermitSuccess {
pub fn build_redirect_uri(&self) -> Url {
let mut redirect_uri = self.redirect_uri.clone();
// Always clear query and fragment, regardless of the response mode
redirect_uri.set_query(None);
// Always clear the fragment per RFC
redirect_uri.set_fragment(None);
// We can't set query pairs on fragments, only query.
let mut uri_builder = url::form_urlencoded::Serializer::new(String::new());
uri_builder.append_pair("code", &self.code);
if let Some(state) = self.state.as_ref() {
uri_builder.append_pair("state", state);
};
let encoded = uri_builder.finish();
match self.response_mode {
ResponseMode::Query => redirect_uri.set_query(Some(&encoded)),
ResponseMode::Fragment => redirect_uri.set_fragment(Some(&encoded)),
ResponseMode::Query => {
redirect_uri
.query_pairs_mut()
.append_pair("code", &self.code);
if let Some(state) = self.state.as_ref() {
redirect_uri.query_pairs_mut().append_pair("state", state);
};
}
ResponseMode::Fragment => {
redirect_uri.set_query(None);
// Per [the RFC](https://www.rfc-editor.org/rfc/rfc6749#section-3.1.2), we can't set query pairs on fragment-containing redirects, only query ones.
let mut uri_builder = url::form_urlencoded::Serializer::new(String::new());
uri_builder.append_pair("code", &self.code);
if let Some(state) = self.state.as_ref() {
uri_builder.append_pair("state", state);
};
let encoded = uri_builder.finish();
redirect_uri.set_fragment(Some(&encoded))
}
}
redirect_uri
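A reduced sketch of the Query branch above, assuming only the url crate: query_pairs_mut() appends the code and state while keeping any query parameters registered as part of the redirect URI (e.g. ?custom=foo in the tests further down).

use url::Url;

fn append_code_to_query(mut redirect_uri: Url, code: &str, state: Option<&str>) -> Url {
    {
        // Scope the serializer so the mutable borrow ends before we return.
        let mut pairs = redirect_uri.query_pairs_mut();
        pairs.append_pair("code", code);
        if let Some(state) = state {
            pairs.append_pair("state", state);
        }
    }
    // "https://portal.example.com/?custom=foo" becomes
    // "https://portal.example.com/?custom=foo&code=...&state=..."
    redirect_uri
}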
@ -2742,6 +2750,44 @@ impl IdmServerProxyReadTransaction<'_> {
})
}
#[instrument(level = "debug", skip_all)]
pub fn oauth2_openid_webfinger(
&mut self,
client_id: &str,
resource_id: &str,
) -> Result<OidcWebfingerResponse, OperationError> {
let o2rs = self.oauth2rs.inner.rs_set.get(client_id).ok_or_else(|| {
admin_warn!(
"Invalid OAuth2 client_id (have you configured the OAuth2 resource server?)"
);
OperationError::NoMatchingEntries
})?;
let Some(spn) = PartialValue::new_spn_s(resource_id) else {
return Err(OperationError::NoMatchingEntries);
};
// Ensure that the account exists.
if !self
.qs_read
.internal_exists(Filter::new(f_eq(Attribute::Spn, spn)))?
{
return Err(OperationError::NoMatchingEntries);
}
let issuer = o2rs.iss.clone();
Ok(OidcWebfingerResponse {
// We set the subject to the resource_id to ensure we always send something valid back,
// but realistically this will be overwritten at the API layer.
subject: resource_id.to_string(),
links: vec![OidcWebfingerRel {
rel: "http://openid.net/specs/connect/1.0/issuer".into(),
href: issuer.into(),
}],
})
}
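For reference, the JRD document the API layer would serialise from this response, using the values exercised in the webfinger test further down; the JSON field names are assumptions based on the struct fields.

// {
//   "subject": "testperson1@example.com",
//   "links": [
//     { "rel": "http://openid.net/specs/connect/1.0/issuer",
//       "href": "https://idm.example.com/oauth2/openid/test_resource_server" }
//   ]
// }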
#[instrument(level = "debug", skip_all)]
pub fn oauth2_openid_publickey(&self, client_id: &str) -> Result<JwkKeySet, OperationError> {
let o2rs = self.oauth2rs.inner.rs_set.get(client_id).ok_or_else(|| {
@ -2982,7 +3028,7 @@ fn check_is_loopback(redirect_uri: &Url) -> bool {
#[cfg(test)]
mod tests {
use base64::{engine::general_purpose, Engine as _};
use std::collections::BTreeSet;
use std::collections::{BTreeMap, BTreeSet};
use std::convert::TryFrom;
use std::str::FromStr;
use std::time::Duration;
@ -3106,7 +3152,7 @@ mod tests {
),
(
Attribute::OAuth2RsOrigin,
Value::new_url_s("https://portal.example.com").unwrap()
Value::new_url_s("https://portal.example.com/?custom=foo").unwrap()
),
(
Attribute::OAuth2RsOrigin,
@ -3642,6 +3688,70 @@ mod tests {
== Oauth2Error::InvalidOrigin
);
// * invalid uri (doesn't match query params)
let auth_req = AuthorisationRequest {
response_type: ResponseType::Code,
response_mode: None,
client_id: "test_resource_server".to_string(),
state: Some("123".to_string()),
pkce_request: pkce_request.clone(),
redirect_uri: Url::parse("https://portal.example.com/?custom=foo&too=many").unwrap(),
scope: btreeset![OAUTH2_SCOPE_OPENID.to_string()],
nonce: None,
oidc_ext: Default::default(),
max_age: None,
unknown_keys: Default::default(),
};
assert!(
idms_prox_read
.check_oauth2_authorisation(Some(&ident), &auth_req, ct)
.unwrap_err()
== Oauth2Error::InvalidOrigin
);
let auth_req = AuthorisationRequest {
response_type: ResponseType::Code,
response_mode: None,
client_id: "test_resource_server".to_string(),
state: Some("123".to_string()),
pkce_request: pkce_request.clone(),
redirect_uri: Url::parse("https://portal.example.com").unwrap(),
scope: btreeset![OAUTH2_SCOPE_OPENID.to_string()],
nonce: None,
oidc_ext: Default::default(),
max_age: None,
unknown_keys: Default::default(),
};
assert!(
idms_prox_read
.check_oauth2_authorisation(Some(&ident), &auth_req, ct)
.unwrap_err()
== Oauth2Error::InvalidOrigin
);
let auth_req = AuthorisationRequest {
response_type: ResponseType::Code,
response_mode: None,
client_id: "test_resource_server".to_string(),
state: Some("123".to_string()),
pkce_request: pkce_request.clone(),
redirect_uri: Url::parse("https://portal.example.com/?wrong=queryparam").unwrap(),
scope: btreeset![OAUTH2_SCOPE_OPENID.to_string()],
nonce: None,
oidc_ext: Default::default(),
max_age: None,
unknown_keys: Default::default(),
};
assert!(
idms_prox_read
.check_oauth2_authorisation(Some(&ident), &auth_req, ct)
.unwrap_err()
== Oauth2Error::InvalidOrigin
);
// Not Authenticated
let auth_req = AuthorisationRequest {
response_type: ResponseType::Code,
@ -4003,6 +4113,8 @@ mod tests {
// == Setup the authorisation request
let (code_verifier, code_challenge) = create_code_verifier!("Whar Garble");
let redirect_uri = Url::parse("https://portal.example.com/?custom=foo").unwrap();
let auth_req = AuthorisationRequest {
response_type: ResponseType::Code,
response_mode: None,
@ -4012,7 +4124,7 @@ mod tests {
code_challenge: code_challenge.clone(),
code_challenge_method: CodeChallengeMethod::S256,
}),
redirect_uri: Url::parse("https://portal.example.com").unwrap(),
redirect_uri: redirect_uri.clone(),
scope: btreeset![OAUTH2_SCOPE_GROUPS.to_string()],
nonce: Some("abcdef".to_string()),
oidc_ext: Default::default(),
@ -4042,12 +4154,22 @@ mod tests {
// Check we are reflecting the CSRF properly.
assert_eq!(permit_success.state.as_deref(), None);
// Assert we followed the redirect uri including the query elements
// we have in the url.
let permit_redirect_uri = permit_success.build_redirect_uri();
assert_eq!(permit_redirect_uri.origin(), redirect_uri.origin());
assert_eq!(permit_redirect_uri.path(), redirect_uri.path());
let query = BTreeMap::from_iter(permit_redirect_uri.query_pairs().into_owned());
// Assert the query pair wasn't changed
assert_eq!(query.get("custom").map(|s| s.as_str()), Some("foo"));
// == Submit the token exchange code.
// ⚠️ This is where we submit a different origin!
let token_req = AccessTokenRequest {
grant_type: GrantTypeReq::AuthorizationCode {
code: permit_success.code,
redirect_uri: Url::parse("https://portal.example.com").unwrap(),
redirect_uri,
// From the first step.
code_verifier: code_verifier.clone(),
},
@ -5434,6 +5556,34 @@ mod tests {
.expect("Oauth2 authorisation failed");
}
#[idm_test]
async fn test_idm_oauth2_webfinger(idms: &IdmServer, _idms_delayed: &mut IdmServerDelayed) {
let ct = Duration::from_secs(TEST_CURRENT_TIME);
let (_secret, _uat, _ident, _) =
setup_oauth2_resource_server_basic(idms, ct, true, false, true).await;
let mut idms_prox_read = idms.proxy_read().await.unwrap();
let user = "testperson1@example.com";
let webfinger = idms_prox_read
.oauth2_openid_webfinger("test_resource_server", user)
.expect("Failed to get webfinger");
assert_eq!(webfinger.subject, user);
assert_eq!(webfinger.links.len(), 1);
let link = &webfinger.links[0];
assert_eq!(link.rel, "http://openid.net/specs/connect/1.0/issuer");
assert_eq!(
link.href,
"https://idm.example.com/oauth2/openid/test_resource_server"
);
let failed_webfinger = idms_prox_read
.oauth2_openid_webfinger("test_resource_server", "someone@another.domain");
assert!(failed_webfinger.is_err());
}
#[idm_test]
async fn test_idm_oauth2_openid_legacy_crypto(
idms: &IdmServer,

View file

@ -5,7 +5,7 @@ use base64::{
Engine as _,
};
use compact_jwt::{Jws, JwsCompact, JwsEs256Signer, JwsSigner};
use compact_jwt::{Jws, JwsCompact};
use kanidm_proto::internal::{ApiTokenPurpose, ScimSyncToken};
use kanidm_proto::scim_v1::*;
use std::collections::{BTreeMap, BTreeSet};
@ -25,7 +25,6 @@ pub(crate) struct SyncAccount {
pub name: String,
pub uuid: Uuid,
pub sync_tokens: BTreeMap<Uuid, ApiToken>,
pub jws_key: Option<JwsEs256Signer>,
}
macro_rules! try_from_entry {
@ -40,15 +39,6 @@ macro_rules! try_from_entry {
.map(|s| s.to_string())
.ok_or(OperationError::MissingAttribute(Attribute::Name))?;
let jws_key = $value
.get_ava_single_jws_key_es256(Attribute::JwsEs256PrivateKey)
.cloned()
.map(|jws_key| {
jws_key
.set_sign_option_embed_jwk(true)
.set_sign_option_legacy_kid(true)
});
let sync_tokens = $value
.get_ava_as_apitoken_map(Attribute::SyncTokenSession)
.cloned()
@ -60,7 +50,6 @@ macro_rules! try_from_entry {
name,
uuid,
sync_tokens,
jws_key,
})
}};
}
@ -123,16 +112,6 @@ impl IdmServerProxyWriteTransaction<'_> {
gte: &GenerateScimSyncTokenEvent,
ct: Duration,
) -> Result<JwsCompact, OperationError> {
// Get the target signing key.
let sync_account = self
.qs_write
.internal_search_uuid(gte.target)
.and_then(|entry| SyncAccount::try_from_entry_rw(&entry))
.map_err(|e| {
admin_error!(?e, "Failed to search service account");
e
})?;
let session_id = Uuid::new_v4();
let issued_at = time::OffsetDateTime::UNIX_EPOCH + ct;
@ -185,25 +164,9 @@ impl IdmServerProxyWriteTransaction<'_> {
})?;
// The modify succeeded and was allowed, now sign the token for return.
if self.qs_write.get_domain_version() < DOMAIN_LEVEL_6 {
sync_account
.jws_key
.as_ref()
.ok_or_else(|| {
admin_error!("Unable to sign sync token, no sync keys available");
OperationError::CryptographyError
})
.and_then(|jws_key| {
jws_key.sign(&token).map_err(|err| {
admin_error!(?err, "Unable to sign sync token");
OperationError::CryptographyError
})
})
} else {
self.qs_write
.get_domain_key_object_handle()?
.jws_es256_sign(&token, ct)
}
self.qs_write
.get_domain_key_object_handle()?
.jws_es256_sign(&token, ct)
// Done!
}

View file

@ -1,7 +1,7 @@
use std::collections::BTreeMap;
use std::time::Duration;
use compact_jwt::{Jws, JwsCompact, JwsEs256Signer, JwsSigner};
use compact_jwt::{Jws, JwsCompact};
use kanidm_proto::internal::ApiToken as ProtoApiToken;
use time::OffsetDateTime;
@ -23,15 +23,6 @@ macro_rules! try_from_entry {
));
}
let jws_key = $value
.get_ava_single_jws_key_es256(Attribute::JwsEs256PrivateKey)
.cloned()
.map(|jws_key| {
jws_key
.set_sign_option_embed_jwk(true)
.set_sign_option_legacy_kid(true)
});
let api_tokens = $value
.get_ava_as_apitoken_map(Attribute::ApiTokenSession)
.cloned()
@ -48,7 +39,6 @@ macro_rules! try_from_entry {
valid_from,
expire,
api_tokens,
jws_key,
})
}};
}
@ -60,8 +50,6 @@ pub struct ServiceAccount {
pub expire: Option<OffsetDateTime>,
pub api_tokens: BTreeMap<Uuid, ApiToken>,
pub jws_key: Option<JwsEs256Signer>,
}
impl ServiceAccount {
@ -253,25 +241,9 @@ impl IdmServerProxyWriteTransaction<'_> {
err
})?;
if self.qs_write.get_domain_version() < DOMAIN_LEVEL_6 {
service_account
.jws_key
.as_ref()
.ok_or_else(|| {
admin_error!("Unable to sign sync token, no sync keys available");
OperationError::CryptographyError
})
.and_then(|jws_key| {
jws_key.sign(&token).map_err(|err| {
admin_error!(?err, "Unable to sign sync token");
OperationError::CryptographyError
})
})
} else {
self.qs_write
.get_domain_key_object_handle()?
.jws_es256_sign(&token, ct)
}
self.qs_write
.get_domain_key_object_handle()?
.jws_es256_sign(&token, ct)
}
pub fn service_account_destroy_api_token(

View file

@ -89,8 +89,8 @@ pub mod prelude {
};
pub use crate::event::{CreateEvent, DeleteEvent, ExistsEvent, ModifyEvent, SearchEvent};
pub use crate::filter::{
f_and, f_andnot, f_eq, f_id, f_inc, f_lt, f_or, f_pres, f_self, f_spn_name, f_sub, Filter,
FilterInvalid, FilterValid, FC,
f_and, f_andnot, f_eq, f_id, f_inc, f_invalid, f_lt, f_or, f_pres, f_self, f_spn_name,
f_sub, Filter, FilterInvalid, FilterValid, FC,
};
pub use crate::idm::server::{IdmServer, IdmServerAudit, IdmServerDelayed};
pub use crate::idm::{ClientAuthInfo, ClientCertInfo};

View file

@ -7,8 +7,6 @@
use std::iter::once;
use std::sync::Arc;
use compact_jwt::JwsEs256Signer;
use rand::prelude::*;
use regex::Regex;
use tracing::trace;
@ -61,13 +59,6 @@ impl Plugin for Domain {
}
}
fn generate_domain_cookie_key() -> Value {
let mut key = [0; 64];
let mut rng = StdRng::from_entropy();
rng.fill(&mut key);
Value::new_privatebinary(&key)
}
impl Domain {
/// Generates the cookie key for the domain.
fn modify_inner<T: Clone + std::fmt::Debug>(
@ -79,11 +70,14 @@ impl Domain {
&& e.attribute_equality(Attribute::Uuid, &PVUUID_DOMAIN_INFO)
{
// Validate the domain ldap basedn syntax.
if let Some(basedn) = e
.get_ava_single_iutf8(Attribute::DomainLdapBasedn) {
if let Some(basedn) = e.get_ava_single_iutf8(Attribute::DomainLdapBasedn) {
if !DOMAIN_LDAP_BASEDN_RE.is_match(basedn) {
error!("Invalid {} '{}'. Must pass regex \"{}\"", Attribute::DomainLdapBasedn,basedn, *DOMAIN_LDAP_BASEDN_RE);
error!(
"Invalid {} '{}'. Must pass regex \"{}\"",
Attribute::DomainLdapBasedn,
basedn,
*DOMAIN_LDAP_BASEDN_RE
);
return Err(OperationError::InvalidState);
}
}
@ -109,39 +103,26 @@ impl Domain {
debug!("plugin_domain: NOT Applying domain version transform");
};
// create the domain_display_name if it's missing
if !e.attribute_pres(Attribute::DomainDisplayName) {
let domain_display_name = Value::new_utf8(format!("Kanidm {}", qs.get_domain_name()));
security_info!("plugin_domain: setting default domain_display_name to {:?}", domain_display_name);
// create the domain_display_name if it's missing. This was the behaviour in versions
// prior to DL10. Rather than checking the domain version itself, we have to check the
// min remigration level, because during a server setup we start from the MIN
// remigration level and work up while the domain version == 0.
//
// So effectively we only skip setting this value once we know that we are at DL12,
// since at that point we can never go back to anything lower than DL10.
if DOMAIN_MIN_REMIGRATION_LEVEL < DOMAIN_LEVEL_10
&& !e.attribute_pres(Attribute::DomainDisplayName)
{
let domain_display_name =
Value::new_utf8(format!("Kanidm {}", qs.get_domain_name()));
security_info!(
"plugin_domain: setting default domain_display_name to {:?}",
domain_display_name
);
e.set_ava(&Attribute::DomainDisplayName, once(domain_display_name));
}
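A worked instance of the guard above, using the constants from this change set (reading aid only; levels are named by their numeric value):

const DOMAIN_LEVEL_8: u32 = 8;
const DOMAIN_LEVEL_10: u32 = 10;
const DOMAIN_MIN_REMIGRATION_LEVEL: u32 = DOMAIN_LEVEL_8;
// 8 < 10 today, so the default display name is still written. Once the target
// level reaches DL12, the min remigration level becomes 10, the guard turns
// false, and the default stops being applied.
const _: () = assert!(DOMAIN_MIN_REMIGRATION_LEVEL < DOMAIN_LEVEL_10);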
if qs.get_domain_version() < DOMAIN_LEVEL_6 && !e.attribute_pres(Attribute::FernetPrivateKeyStr) {
security_info!("regenerating domain token encryption key");
let k = fernet::Fernet::generate_key();
let v = Value::new_secret_str(&k);
e.add_ava(Attribute::FernetPrivateKeyStr, v);
}
if qs.get_domain_version() < DOMAIN_LEVEL_6 && !e.attribute_pres(Attribute::Es256PrivateKeyDer) {
security_info!("regenerating domain es256 private key");
let der = JwsEs256Signer::generate_es256()
.and_then(|jws| jws.private_key_to_der())
.map_err(|e| {
admin_error!(err = ?e, "Unable to generate ES256 JwsSigner private key");
OperationError::CryptographyError
})?;
let v = Value::new_privatebinary(&der);
e.add_ava(Attribute::Es256PrivateKeyDer, v);
}
if qs.get_domain_version() < DOMAIN_LEVEL_6 && !e.attribute_pres(Attribute::PrivateCookieKey) {
security_info!("regenerating domain cookie key");
e.add_ava(Attribute::PrivateCookieKey, generate_domain_cookie_key());
}
trace!(?e);
Ok(())
} else {
Ok(())

View file

@ -62,10 +62,7 @@ pub const GID_UNUSED_D_MAX: u32 = 0x7fff_ffff;
pub struct GidNumber {}
fn apply_gidnumber<T: Clone>(
e: &mut Entry<EntryInvalid, T>,
domain_version: DomainVersion,
) -> Result<(), OperationError> {
fn apply_gidnumber<T: Clone>(e: &mut Entry<EntryInvalid, T>) -> Result<(), OperationError> {
if (e.attribute_equality(Attribute::Class, &EntryClass::PosixGroup.into())
|| e.attribute_equality(Attribute::Class, &EntryClass::PosixAccount.into()))
&& !e.attribute_pres(Attribute::GidNumber)
@ -89,48 +86,33 @@ fn apply_gidnumber<T: Clone>(
e.set_ava(&Attribute::GidNumber, once(gid_v));
Ok(())
} else if let Some(gid) = e.get_ava_single_uint32(Attribute::GidNumber) {
if domain_version <= DOMAIN_LEVEL_6 {
if gid < GID_REGULAR_USER_MIN {
error!(
"Requested GID ({}) overlaps a system range. Allowed ranges are {} to {}, {} to {} and {} to {}",
gid,
GID_REGULAR_USER_MIN, GID_REGULAR_USER_MAX,
GID_UNUSED_C_MIN, GID_UNUSED_C_MAX,
GID_UNUSED_D_MIN, GID_UNUSED_D_MAX
);
Err(OperationError::PL0001GidOverlapsSystemRange)
} else {
Ok(())
}
// If they provided us with a gid number, ensure it's in a safe range.
if (GID_REGULAR_USER_MIN..=GID_REGULAR_USER_MAX).contains(&gid)
|| (GID_UNUSED_A_MIN..=GID_UNUSED_A_MAX).contains(&gid)
|| (GID_UNUSED_B_MIN..= GID_UNUSED_B_MAX).contains(&gid)
|| (GID_UNUSED_C_MIN..=GID_UNUSED_C_MAX).contains(&gid)
// We won't ever generate an id in the nspawn range, but we do secretly allow
// it to be set for compatibility with services like freeipa or openldap. TBH
// most people don't even use systemd nspawn anyway ...
//
// I made this design choice to avoid a tunable that may confuse people as to
// its purpose. This way things "just work" for imports and existing systems
// but we do the right thing in the future.
|| (GID_NSPAWN_MIN..=GID_NSPAWN_MAX).contains(&gid)
|| (GID_UNUSED_D_MIN..=GID_UNUSED_D_MAX).contains(&gid)
{
Ok(())
} else {
// If they provided us with a gid number, ensure it's in a safe range.
if (GID_REGULAR_USER_MIN..=GID_REGULAR_USER_MAX).contains(&gid)
|| (GID_UNUSED_A_MIN..=GID_UNUSED_A_MAX).contains(&gid)
|| (GID_UNUSED_B_MIN..=GID_UNUSED_B_MAX).contains(&gid)
|| (GID_UNUSED_C_MIN..=GID_UNUSED_C_MAX).contains(&gid)
// We won't ever generate an id in the nspawn range, but we do secretly allow
// it to be set for compatibility with services like freeipa or openldap. TBH
// most people don't even use systemd nspawn anyway ...
//
// I made this design choice to avoid a tunable that may confuse people as to
// its purpose. This way things "just work" for imports and existing systems
// but we do the right thing in the future.
|| (GID_NSPAWN_MIN..=GID_NSPAWN_MAX).contains(&gid)
|| (GID_UNUSED_D_MIN..=GID_UNUSED_D_MAX).contains(&gid)
{
Ok(())
} else {
// Note that here we don't advertise that we allow the nspawn range to be set, even
// though we do allow it.
error!(
"Requested GID ({}) overlaps a system range. Allowed ranges are {} to {}, {} to {} and {} to {}",
gid,
GID_REGULAR_USER_MIN, GID_REGULAR_USER_MAX,
GID_UNUSED_C_MIN, GID_UNUSED_C_MAX,
GID_UNUSED_D_MIN, GID_UNUSED_D_MAX
);
Err(OperationError::PL0001GidOverlapsSystemRange)
}
// Note that here we don't advertise that we allow the nspawn range to be set, even
// though we do allow it.
error!(
"Requested GID ({}) overlaps a system range. Allowed ranges are {} to {}, {} to {} and {} to {}",
gid,
GID_REGULAR_USER_MIN, GID_REGULAR_USER_MAX,
GID_UNUSED_C_MIN, GID_UNUSED_C_MAX,
GID_UNUSED_D_MIN, GID_UNUSED_D_MAX
);
Err(OperationError::PL0001GidOverlapsSystemRange)
}
} else {
Ok(())
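To summarise the range logic above: now that the domain-version special case is gone, a caller-supplied gidnumber is accepted only when it falls inside one of the allowed ranges (the regular user range, the unused A-D ranges, or the nspawn range, which is accepted for imports but never auto-allocated and never advertised in the error message). A hedged sketch with a hypothetical helper; the concrete bounds are the GID_* constants from this module:

// Sketch only - the (min, max) pairs stand in for the GID_* constants above.
fn gid_in_allowed_range(gid: u32, allowed: &[(u32, u32)]) -> bool {
    allowed.iter().any(|&(min, max)| (min..=max).contains(&gid))
}

// The plugin effectively checks the regular-user range, the unused A, B, C and D ranges and
// the nspawn range; anything else is rejected with OperationError::PL0001GidOverlapsSystemRange.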
@ -144,37 +126,31 @@ impl Plugin for GidNumber {
#[instrument(level = "debug", name = "gidnumber_pre_create_transform", skip_all)]
fn pre_create_transform(
qs: &mut QueryServerWriteTransaction,
_qs: &mut QueryServerWriteTransaction,
cand: &mut Vec<Entry<EntryInvalid, EntryNew>>,
_ce: &CreateEvent,
) -> Result<(), OperationError> {
let dv = qs.get_domain_version();
cand.iter_mut()
.try_for_each(|cand| apply_gidnumber(cand, dv))
cand.iter_mut().try_for_each(apply_gidnumber)
}
#[instrument(level = "debug", name = "gidnumber_pre_modify", skip_all)]
fn pre_modify(
qs: &mut QueryServerWriteTransaction,
_qs: &mut QueryServerWriteTransaction,
_pre_cand: &[Arc<EntrySealedCommitted>],
cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>,
_me: &ModifyEvent,
) -> Result<(), OperationError> {
let dv = qs.get_domain_version();
cand.iter_mut()
.try_for_each(|cand| apply_gidnumber(cand, dv))
cand.iter_mut().try_for_each(apply_gidnumber)
}
#[instrument(level = "debug", name = "gidnumber_pre_batch_modify", skip_all)]
fn pre_batch_modify(
qs: &mut QueryServerWriteTransaction,
_qs: &mut QueryServerWriteTransaction,
_pre_cand: &[Arc<EntrySealedCommitted>],
cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>,
_me: &BatchModifyEvent,
) -> Result<(), OperationError> {
let dv = qs.get_domain_version();
cand.iter_mut()
.try_for_each(|cand| apply_gidnumber(cand, dv))
cand.iter_mut().try_for_each(apply_gidnumber)
}
}
@ -186,9 +162,7 @@ mod tests {
};
use crate::prelude::*;
use kanidm_proto::internal::DomainUpgradeCheckStatus as ProtoDomainUpgradeCheckStatus;
#[qs_test(domain_level=DOMAIN_LEVEL_7)]
#[qs_test]
async fn test_gidnumber_generate(server: &QueryServer) {
let mut server_txn = server.write(duration_from_epoch_now()).await.expect("txn");
@ -423,85 +397,4 @@ mod tests {
assert!(server_txn.commit().is_ok());
}
#[qs_test(domain_level=DOMAIN_LEVEL_6)]
async fn test_gidnumber_domain_level_6(server: &QueryServer) {
let mut server_txn = server.write(duration_from_epoch_now()).await.expect("txn");
// This will be INVALID in DL 7 but it's allowed for DL6
let user_a_uuid = uuid!("d90fb0cb-6785-4f36-94cb-e364d9c13255");
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::PosixAccount.to_value()),
(Attribute::Name, Value::new_iname("testperson_2")),
(Attribute::Uuid, Value::Uuid(user_a_uuid)),
// NOTE HERE: We do GID_UNUSED_A_MIN minus 1 which isn't accepted
// on DL7
(Attribute::GidNumber, Value::Uint32(GID_UNUSED_A_MIN - 1)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
let user_a = server_txn
.internal_search_uuid(user_a_uuid)
.expect("Unable to access user");
let user_a_uid = user_a
.get_ava_single_uint32(Attribute::GidNumber)
.expect("gidnumber not present on account");
assert_eq!(user_a_uid, GID_UNUSED_A_MIN - 1);
}
assert!(server_txn.commit().is_ok());
// Now, do the DL6 upgrade check - will FAIL because the above user has an invalid ID.
let mut server_txn = server.read().await.unwrap();
let check_item = server_txn
.domain_upgrade_check_6_to_7_gidnumber()
.expect("Failed to perform migration check.");
assert_eq!(
check_item.status,
ProtoDomainUpgradeCheckStatus::Fail6To7Gidnumber
);
drop(server_txn);
let mut server_txn = server.write(duration_from_epoch_now()).await.expect("txn");
// Test rejection of important gid values.
let user_b_uuid = uuid!("33afc396-2434-47e5-b143-05176148b50e");
// Test that when an entry is modified to have posix attributes, a provided gidnumber
// is respected.
{
let op_result = server_txn.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson_6")),
(Attribute::Uuid, Value::Uuid(user_b_uuid)),
(Attribute::Description, Value::new_utf8s("testperson")),
(Attribute::DisplayName, Value::new_utf8s("testperson"))
)]);
assert!(op_result.is_ok());
for id in [0, 500, GID_REGULAR_USER_MIN - 1] {
let modlist = modlist!([
m_pres(Attribute::Class, &EntryClass::PosixAccount.to_value()),
m_pres(Attribute::GidNumber, &Value::Uint32(id))
]);
let op_result = server_txn.internal_modify_uuid(user_b_uuid, &modlist);
trace!(?id);
assert_eq!(op_result, Err(OperationError::PL0001GidOverlapsSystemRange));
}
}
assert!(server_txn.commit().is_ok());
}
}


@ -45,7 +45,7 @@ impl Plugin for JwsKeygen {
impl JwsKeygen {
fn modify_inner<T: Clone>(
qs: &mut QueryServerWriteTransaction,
_qs: &mut QueryServerWriteTransaction,
cand: &mut [Entry<EntryInvalid, T>],
) -> Result<(), OperationError> {
cand.iter_mut().try_for_each(|e| {
@ -88,20 +88,6 @@ impl JwsKeygen {
}
}
if qs.get_domain_version() < DOMAIN_LEVEL_6 &&
(e.attribute_equality(Attribute::Class, &EntryClass::ServiceAccount.into()) ||
e.attribute_equality(Attribute::Class, &EntryClass::SyncAccount.into())) &&
!e.attribute_pres(Attribute::JwsEs256PrivateKey) {
security_info!("regenerating jws es256 private key");
let jwssigner = JwsEs256Signer::generate_es256()
.map_err(|e| {
admin_error!(err = ?e, "Unable to generate ES256 JwsSigner private key");
OperationError::CryptographyError
})?;
let v = Value::JwsKeyEs256(jwssigner);
e.add_ava(Attribute::JwsEs256PrivateKey, v);
}
Ok(())
})
}


@ -25,6 +25,7 @@ lazy_static! {
// modification of some domain info types for local configuration.
Attribute::DomainSsid,
Attribute::DomainLdapBasedn,
Attribute::LdapMaxQueryableAttrs,
Attribute::LdapAllowUnixPwBind,
Attribute::FernetPrivateKeyStr,
Attribute::Es256PrivateKeyDer,


@ -10,7 +10,7 @@ impl Plugin for ValueDeny {
"plugin_value_deny"
}
#[instrument(level = "debug", name = "base_pre_create_transform", skip_all)]
#[instrument(level = "debug", name = "denied_names_pre_create_transform", skip_all)]
#[allow(clippy::cognitive_complexity)]
fn pre_create_transform(
qs: &mut QueryServerWriteTransaction,
@ -19,9 +19,25 @@ impl Plugin for ValueDeny {
) -> Result<(), OperationError> {
let denied_names = qs.denied_names();
if denied_names.is_empty() {
// Nothing to check.
return Ok(());
}
let mut pass = true;
for entry in cand {
// If the entry doesn't have a uuid, it's invalid anyway and will fail schema.
if let Some(e_uuid) = entry.get_uuid() {
// SAFETY - Thanks to JpWarren blowing his nipper clean off, we need to
// assert that the break glass and system accounts are NOT subject to
// this process.
if e_uuid < DYNAMIC_RANGE_MINIMUM_UUID {
// These entries are exempt
continue;
}
}
if let Some(name) = entry.get_ava_single_iname(Attribute::Name) {
if denied_names.contains(name) {
pass = false;
@ -37,27 +53,24 @@ impl Plugin for ValueDeny {
}
}
#[instrument(level = "debug", name = "base_pre_modify", skip_all)]
fn pre_modify(
qs: &mut QueryServerWriteTransaction,
_pre_cand: &[Arc<EntrySealedCommitted>],
pre_cand: &[Arc<EntrySealedCommitted>],
cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>,
_me: &ModifyEvent,
) -> Result<(), OperationError> {
Self::modify(qs, cand)
Self::modify(qs, pre_cand, cand)
}
#[instrument(level = "debug", name = "base_pre_modify", skip_all)]
fn pre_batch_modify(
qs: &mut QueryServerWriteTransaction,
_pre_cand: &[Arc<EntrySealedCommitted>],
pre_cand: &[Arc<EntrySealedCommitted>],
cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>,
_me: &BatchModifyEvent,
) -> Result<(), OperationError> {
Self::modify(qs, cand)
Self::modify(qs, pre_cand, cand)
}
#[instrument(level = "debug", name = "base::verify", skip_all)]
fn verify(qs: &mut QueryServerReadTransaction) -> Vec<Result<(), ConsistencyError>> {
let denied_names = qs.denied_names().clone();
@ -68,7 +81,15 @@ impl Plugin for ValueDeny {
match qs.internal_search(filt) {
Ok(entries) => {
for entry in entries {
results.push(Err(ConsistencyError::DeniedName(entry.get_uuid())));
let e_uuid = entry.get_uuid();
// SAFETY - Thanks to JpWarren blowing his nipper clean off, we need to
// assert that the break glass accounts are NOT subject to this process.
if e_uuid < DYNAMIC_RANGE_MINIMUM_UUID {
// These entries are exempt
continue;
}
results.push(Err(ConsistencyError::DeniedName(e_uuid)));
}
}
Err(err) => {
@ -83,17 +104,37 @@ impl Plugin for ValueDeny {
}
impl ValueDeny {
#[instrument(level = "debug", name = "denied_names_modify", skip_all)]
fn modify(
qs: &mut QueryServerWriteTransaction,
cand: &mut Vec<Entry<EntryInvalid, EntryCommitted>>,
pre_cand: &[Arc<EntrySealedCommitted>],
cand: &mut [EntryInvalidCommitted],
) -> Result<(), OperationError> {
let denied_names = qs.denied_names();
if denied_names.is_empty() {
// Nothing to check.
return Ok(());
}
let mut pass = true;
for entry in cand {
if let Some(name) = entry.get_ava_single_iname(Attribute::Name) {
if denied_names.contains(name) {
for (pre_entry, post_entry) in pre_cand.iter().zip(cand.iter()) {
// If the entry doesn't have a uuid, it's invalid anyway and will fail schema.
let e_uuid = pre_entry.get_uuid();
// SAFETY - Thanks to JpWarren blowing his nipper clean off, we need to
// assert that the break glass accounts are NOT subject to this process.
if e_uuid < DYNAMIC_RANGE_MINIMUM_UUID {
// These entries are exempt
continue;
}
let pre_name = pre_entry.get_ava_single_iname(Attribute::Name);
let post_name = post_entry.get_ava_single_iname(Attribute::Name);
if let Some(name) = post_name {
// Only if the name is changing, and is denied.
if pre_name != post_name && denied_names.contains(name) {
pass = false;
error!(?name, "name denied by system configuration");
}
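The heart of the reworked modify check is that a denied name now only blocks a change when the name is actually changing onto a denied value; an entry that already carries a denied name can still be modified, including renaming away from it. A small sketch of that predicate (hypothetical helper; the real logic is the zip over pre_cand and cand above, after the break-glass uuid exemption):

use std::collections::BTreeSet;

// Sketch of the decision only; uuid-based exemptions are handled before this point.
fn name_change_denied(
    pre_name: Option<&str>,
    post_name: Option<&str>,
    denied_names: &BTreeSet<String>,
) -> bool {
    match post_name {
        // Deny only when the name differs from its previous value and is on the deny list.
        Some(name) => pre_name != post_name && denied_names.contains(name),
        None => false,
    }
}

// With "tobias" denied:
//   Some("ellie")  -> Some("tobias") : denied
//   Some("tobias") -> Some("tobias") : allowed, the name is unchanged
//   Some("tobias") -> Some("ellie")  : allowed, renaming away from a denied name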
@ -117,10 +158,10 @@ mod tests {
let me_inv_m = ModifyEvent::new_internal_invalid(
filter!(f_eq(Attribute::Uuid, PVUUID_SYSTEM_CONFIG.clone())),
ModifyList::new_list(vec![Modify::Present(
Attribute::DeniedName,
Value::new_iname("tobias"),
)]),
ModifyList::new_list(vec![
Modify::Present(Attribute::DeniedName, Value::new_iname("tobias")),
Modify::Present(Attribute::DeniedName, Value::new_iname("ellie")),
]),
);
assert!(server_txn.modify(&me_inv_m).is_ok());
@ -148,30 +189,103 @@ mod tests {
#[qs_test]
async fn test_valuedeny_modify(server: &QueryServer) {
setup_name_deny(server).await;
// Create an entry that has a name which will become denied to test how it
// interacts.
let mut server_txn = server.write(duration_from_epoch_now()).await.unwrap();
let t_uuid = Uuid::new_v4();
let e_uuid = Uuid::new_v4();
assert!(server_txn
.internal_create(vec![entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("newname")),
(Attribute::Uuid, Value::Uuid(t_uuid)),
(Attribute::Description, Value::new_utf8s("Tobias")),
(Attribute::DisplayName, Value::new_utf8s("Tobias"))
(Attribute::Name, Value::new_iname("ellie")),
(Attribute::Uuid, Value::Uuid(e_uuid)),
(Attribute::Description, Value::new_utf8s("Ellie Meow")),
(Attribute::DisplayName, Value::new_utf8s("Ellie Meow"))
),])
.is_ok());
// Now mod it
assert!(server_txn.commit().is_ok());
setup_name_deny(server).await;
let mut server_txn = server.write(duration_from_epoch_now()).await.unwrap();
// Attempt to mod ellie.
// Can mod a different attribute
assert!(server_txn
.internal_modify_uuid(
t_uuid,
e_uuid,
&ModifyList::new_purge_and_set(Attribute::DisplayName, Value::new_utf8s("tobias"))
)
.is_ok());
// Can't mod to another invalid name.
assert!(server_txn
.internal_modify_uuid(
e_uuid,
&ModifyList::new_purge_and_set(Attribute::Name, Value::new_iname("tobias"))
)
.is_err());
// Can mod to a valid name.
assert!(server_txn
.internal_modify_uuid(
e_uuid,
&ModifyList::new_purge_and_set(
Attribute::Name,
Value::new_iname("miss_meowington")
)
)
.is_ok());
// Now mod from the valid name to an invalid one.
assert!(server_txn
.internal_modify_uuid(
e_uuid,
&ModifyList::new_purge_and_set(Attribute::Name, Value::new_iname("tobias"))
)
.is_err());
assert!(server_txn.commit().is_ok());
}
#[qs_test]
async fn test_valuedeny_jpwarren_special(server: &QueryServer) {
// Assert that our break glass accounts are exempt from this processing.
let mut server_txn = server.write(duration_from_epoch_now()).await.unwrap();
let me_inv_m = ModifyEvent::new_internal_invalid(
filter!(f_eq(Attribute::Uuid, PVUUID_SYSTEM_CONFIG.clone())),
ModifyList::new_list(vec![
Modify::Present(Attribute::DeniedName, Value::new_iname("admin")),
Modify::Present(Attribute::DeniedName, Value::new_iname("idm_admin")),
]),
);
assert!(server_txn.modify(&me_inv_m).is_ok());
assert!(server_txn.commit().is_ok());
let mut server_txn = server.write(duration_from_epoch_now()).await.unwrap();
assert!(server_txn
.internal_modify_uuid(
UUID_IDM_ADMIN,
&ModifyList::new_purge_and_set(
Attribute::DisplayName,
Value::new_utf8s("Idm Admin")
)
)
.is_ok());
assert!(server_txn
.internal_modify_uuid(
UUID_ADMIN,
&ModifyList::new_purge_and_set(Attribute::DisplayName, Value::new_utf8s("Admin"))
)
.is_ok());
assert!(server_txn.commit().is_ok());
}
#[qs_test]


@ -158,7 +158,7 @@ impl QueryServer {
// If we are new enough to support patches, and we are lower than the target patch level
// then a reload will be applied after we raise the patch level.
if domain_target_level >= DOMAIN_LEVEL_7 && domain_patch_level < DOMAIN_TGT_PATCH_LEVEL {
if domain_patch_level < DOMAIN_TGT_PATCH_LEVEL {
write_txn
.internal_modify_uuid(
UUID_DOMAIN_INFO,
@ -294,346 +294,6 @@ impl QueryServerWriteTransaction<'_> {
}
}
/// Migration domain level 6 to 7
#[instrument(level = "info", skip_all)]
pub(crate) fn migrate_domain_6_to_7(&mut self) -> Result<(), OperationError> {
if !cfg!(test) && DOMAIN_MAX_LEVEL < DOMAIN_LEVEL_7 {
error!("Unable to raise domain level from 6 to 7.");
return Err(OperationError::MG0004DomainLevelInDevelopment);
}
// ============== Apply constraints ===============
// Due to changes in gidnumber allocation, in the *extremely* unlikely
// case that a user's ID was generated outside the valid range, we re-request
// the creation of their gid number to proceed.
let filter = filter!(f_and!([
f_or!([
f_eq(Attribute::Class, EntryClass::PosixAccount.into()),
f_eq(Attribute::Class, EntryClass::PosixGroup.into())
]),
// This logic gets a bit messy but it would be:
// If ! (
// (GID_REGULAR_USER_MIN < value < GID_REGULAR_USER_MAX) ||
// (GID_UNUSED_A_MIN < value < GID_UNUSED_A_MAX) ||
// (GID_UNUSED_B_MIN < value < GID_UNUSED_B_MAX) ||
// (GID_UNUSED_C_MIN < value < GID_UNUSED_D_MAX)
// )
f_andnot(f_or!([
f_and!([
// The gid value must be less than GID_REGULAR_USER_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than USER_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MIN)
))
]),
// If both of these conditions are true we get:
// C_MIN < value < D_MAX, which the outer and-not inverts.
f_and!([
// The gid value must be less than GID_UNUSED_D_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_D_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than C_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_C_MIN)
))
]),
]))
]));
let results = self.internal_search(filter).map_err(|err| {
error!(?err, "migrate_domain_6_to_7 -> Error");
err
})?;
if !results.is_empty() {
error!("Unable to proceed. Not all entries meet gid/uid constraints.");
for entry in results {
error!(gid_invalid = ?entry.get_display_id());
}
return Err(OperationError::MG0005GidConstraintsNotMet);
}
// =========== Apply changes ==============
// For each oauth2 client, if it is missing a landing page then we clone the origin
// into landing. This is because previously we implied the landing to be origin if
// unset, but now landing is the primary url and implies an origin.
let filter = filter!(f_and!([
f_eq(Attribute::Class, EntryClass::OAuth2ResourceServer.into()),
f_pres(Attribute::OAuth2RsOrigin),
f_andnot(f_pres(Attribute::OAuth2RsOriginLanding)),
]));
let pre_candidates = self.internal_search(filter).map_err(|err| {
error!(?err, "migrate_domain_6_to_7 internal search failure");
err
})?;
let modset: Vec<_> = pre_candidates
.into_iter()
.filter_map(|ent| {
ent.get_ava_single_url(Attribute::OAuth2RsOrigin)
.map(|origin_url| {
// Copy the origin url to the landing.
let modlist = vec![Modify::Present(
Attribute::OAuth2RsOriginLanding,
Value::Url(origin_url.clone()),
)];
(ent.get_uuid(), ModifyList::new_list(modlist))
})
})
.collect();
// If there is nothing, we don't need to do anything.
if !modset.is_empty() {
self.internal_batch_modify(modset.into_iter())?;
}
// Do this before the schema change, since the domain info entry may still have the
// cookie key at this point.
//
// Domain info should have the attribute private cookie key removed.
let modlist = ModifyList::new_list(vec![
Modify::Purged(Attribute::PrivateCookieKey),
Modify::Purged(Attribute::Es256PrivateKeyDer),
Modify::Purged(Attribute::FernetPrivateKeyStr),
]);
self.internal_modify_uuid(UUID_DOMAIN_INFO, &modlist)?;
let filter = filter!(f_or!([
f_eq(Attribute::Class, EntryClass::ServiceAccount.into()),
f_eq(Attribute::Class, EntryClass::SyncAccount.into())
]));
let modlist = ModifyList::new_list(vec![Modify::Purged(Attribute::JwsEs256PrivateKey)]);
self.internal_modify(&filter, &modlist)?;
// Now update schema
let idm_schema_classes = [
SCHEMA_ATTR_PATCH_LEVEL_DL7.clone().into(),
SCHEMA_ATTR_DOMAIN_DEVELOPMENT_TAINT_DL7.clone().into(),
SCHEMA_ATTR_REFERS_DL7.clone().into(),
SCHEMA_ATTR_CERTIFICATE_DL7.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_ORIGIN_DL7.clone().into(),
SCHEMA_ATTR_OAUTH2_STRICT_REDIRECT_URI_DL7.clone().into(),
SCHEMA_ATTR_MAIL_DL7.clone().into(),
SCHEMA_ATTR_LEGALNAME_DL7.clone().into(),
SCHEMA_ATTR_DISPLAYNAME_DL7.clone().into(),
SCHEMA_CLASS_DOMAIN_INFO_DL7.clone().into(),
SCHEMA_CLASS_SERVICE_ACCOUNT_DL7.clone().into(),
SCHEMA_CLASS_SYNC_ACCOUNT_DL7.clone().into(),
SCHEMA_CLASS_CLIENT_CERTIFICATE_DL7.clone().into(),
SCHEMA_CLASS_OAUTH2_RS_DL7.clone().into(),
];
idm_schema_classes
.into_iter()
.try_for_each(|entry| self.internal_migrate_or_create(entry))
.map_err(|err| {
error!(?err, "migrate_domain_6_to_7 -> Error");
err
})?;
self.reload()?;
// Update access controls
let idm_data = [
BUILTIN_GROUP_PEOPLE_SELF_NAME_WRITE_DL7
.clone()
.try_into()?,
IDM_PEOPLE_SELF_MAIL_WRITE_DL7.clone().try_into()?,
BUILTIN_GROUP_CLIENT_CERTIFICATE_ADMINS_DL7
.clone()
.try_into()?,
IDM_HIGH_PRIVILEGE_DL7.clone().try_into()?,
];
idm_data
.into_iter()
.try_for_each(|entry| {
self.internal_migrate_or_create_ignore_attrs(entry, &[Attribute::Member])
})
.map_err(|err| {
error!(?err, "migrate_domain_6_to_7 -> Error");
err
})?;
let idm_data = [
IDM_ACP_SELF_WRITE_DL7.clone().into(),
IDM_ACP_SELF_NAME_WRITE_DL7.clone().into(),
IDM_ACP_HP_CLIENT_CERTIFICATE_MANAGER_DL7.clone().into(),
IDM_ACP_OAUTH2_MANAGE_DL7.clone().into(),
];
idm_data
.into_iter()
.try_for_each(|entry| self.internal_migrate_or_create(entry))
.map_err(|err| {
error!(?err, "migrate_domain_6_to_7 -> Error");
err
})?;
Ok(())
}
/// Patch Application - This triggers a one-shot fixup task for issue #2756
/// to correct the content of dyngroups after the dyngroups are now loaded.
#[instrument(level = "info", skip_all)]
pub(crate) fn migrate_domain_patch_level_1(&mut self) -> Result<(), OperationError> {
admin_warn!("applying domain patch 1.");
debug_assert!(*self.phase >= ServerPhase::SchemaReady);
let filter = filter!(f_eq(Attribute::Class, EntryClass::DynGroup.into()));
let modlist = modlist!([m_pres(Attribute::Class, &EntryClass::DynGroup.into())]);
self.internal_modify(&filter, &modlist).map(|()| {
info!("forced dyngroups to re-calculate memberships");
})
}
/// Migration domain level 7 to 8
#[instrument(level = "info", skip_all)]
pub(crate) fn migrate_domain_7_to_8(&mut self) -> Result<(), OperationError> {
if !cfg!(test) && DOMAIN_MAX_LEVEL < DOMAIN_LEVEL_8 {
error!("Unable to raise domain level from 7 to 8.");
return Err(OperationError::MG0004DomainLevelInDevelopment);
}
// ============== Apply constraints ===============
let filter = filter!(f_and!([
f_eq(Attribute::Class, EntryClass::Account.into()),
f_pres(Attribute::PrimaryCredential),
]));
let results = self.internal_search(filter)?;
let affected_entries = results
.into_iter()
.filter_map(|entry| {
if entry
.get_ava_single_credential(Attribute::PrimaryCredential)
.map(|cred| cred.has_securitykey())
.unwrap_or_default()
{
Some(entry.get_display_id())
} else {
None
}
})
.collect::<Vec<_>>();
if !affected_entries.is_empty() {
error!("Unable to proceed. Some accounts still use legacy security keys, which need to be removed.");
for sk_present in affected_entries {
error!(%sk_present);
}
return Err(OperationError::MG0006SKConstraintsNotMet);
}
// Check oauth2 strict uri
let filter = filter!(f_and!([
f_eq(Attribute::Class, EntryClass::OAuth2ResourceServer.into()),
f_andnot(f_pres(Attribute::OAuth2StrictRedirectUri)),
]));
let results = self.internal_search(filter)?;
let affected_entries = results
.into_iter()
.map(|entry| entry.get_display_id())
.collect::<Vec<_>>();
if !affected_entries.is_empty() {
error!("Unable to proceed. Not all oauth2 clients have strict redirect verification enabled.");
for missing_oauth2_strict_redirect_uri in affected_entries {
error!(%missing_oauth2_strict_redirect_uri);
}
return Err(OperationError::MG0007Oauth2StrictConstraintsNotMet);
}
// =========== Apply changes ==============
let idm_schema_classes = [
SCHEMA_ATTR_LINKED_GROUP_DL8.clone().into(),
SCHEMA_ATTR_APPLICATION_PASSWORD_DL8.clone().into(),
SCHEMA_CLASS_APPLICATION_DL8.clone().into(),
SCHEMA_CLASS_PERSON_DL8.clone().into(),
SCHEMA_CLASS_DOMAIN_INFO_DL8.clone().into(),
SCHEMA_ATTR_ALLOW_PRIMARY_CRED_FALLBACK_DL8.clone().into(),
SCHEMA_CLASS_ACCOUNT_POLICY_DL8.clone().into(),
];
idm_schema_classes
.into_iter()
.try_for_each(|entry| self.internal_migrate_or_create(entry))
.map_err(|err| {
error!(?err, "migrate_domain_6_to_7 -> Error");
err
})?;
self.reload()?;
// Update access controls.
let idm_data = [
BUILTIN_GROUP_APPLICATION_ADMINS.clone().try_into()?,
IDM_ACP_SELF_READ_DL8.clone().into(),
IDM_ACP_SELF_WRITE_DL8.clone().into(),
IDM_ACP_APPLICATION_MANAGE_DL8.clone().into(),
IDM_ACP_APPLICATION_ENTRY_MANAGER_DL8.clone().into(),
// Add the new types for mail server
BUILTIN_GROUP_MAIL_SERVICE_ADMINS_DL8.clone().try_into()?,
BUILTIN_IDM_MAIL_SERVERS_DL8.clone().try_into()?,
IDM_ACP_MAIL_SERVERS_DL8.clone().into(),
IDM_ACP_DOMAIN_ADMIN_DL8.clone().into(),
IDM_ACP_GROUP_ACCOUNT_POLICY_MANAGE_DL8.clone().into(),
];
idm_data
.into_iter()
.try_for_each(|entry| self.internal_migrate_or_create(entry))
.map_err(|err| {
error!(?err, "migrate_domain_7_to_8 -> Error");
err
})?;
Ok(())
}
/// Migration domain level 8 to 9 (1.5.0)
#[instrument(level = "info", skip_all)]
pub(crate) fn migrate_domain_8_to_9(&mut self) -> Result<(), OperationError> {
@ -760,7 +420,37 @@ impl QueryServerWriteTransaction<'_> {
#[instrument(level = "info", skip_all)]
pub(crate) fn migrate_domain_9_to_10(&mut self) -> Result<(), OperationError> {
if !cfg!(test) && DOMAIN_TGT_LEVEL < DOMAIN_LEVEL_9 {
error!("Unable to raise domain level from 8 to 9.");
error!("Unable to raise domain level from 9 to 10.");
return Err(OperationError::MG0004DomainLevelInDevelopment);
}
// =========== Apply changes ==============
// Now update schema
let idm_schema_changes = [
SCHEMA_ATTR_DENIED_NAME_DL10.clone().into(),
SCHEMA_CLASS_DOMAIN_INFO_DL10.clone().into(),
SCHEMA_ATTR_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES.clone().into(),
];
idm_schema_changes
.into_iter()
.try_for_each(|entry| self.internal_migrate_or_create(entry))
.map_err(|err| {
error!(?err, "migrate_domain_9_to_10 -> Error");
err
})?;
self.reload()?;
Ok(())
}
/// Migration domain level 10 to 11 (1.7.0)
#[instrument(level = "info", skip_all)]
pub(crate) fn migrate_domain_10_to_11(&mut self) -> Result<(), OperationError> {
if !cfg!(test) && DOMAIN_TGT_LEVEL < DOMAIN_LEVEL_10 {
error!("Unable to raise domain level from 10 to 11.");
return Err(OperationError::MG0004DomainLevelInDevelopment);
}
@ -817,7 +507,7 @@ impl QueryServerWriteTransaction<'_> {
//
// DO NOT MODIFY THIS DEFINITION
let idm_schema: Vec<EntryInitNew> = vec![
SCHEMA_ATTR_MAIL.clone().into(),
// SCHEMA_ATTR_MAIL.clone().into(),
SCHEMA_ATTR_ACCOUNT_EXPIRE.clone().into(),
SCHEMA_ATTR_ACCOUNT_VALID_FROM.clone().into(),
SCHEMA_ATTR_API_TOKEN_SESSION.clone().into(),
@ -827,7 +517,7 @@ impl QueryServerWriteTransaction<'_> {
SCHEMA_ATTR_BADLIST_PASSWORD.clone().into(),
SCHEMA_ATTR_CREDENTIAL_UPDATE_INTENT_TOKEN.clone().into(),
SCHEMA_ATTR_ATTESTED_PASSKEYS.clone().into(),
SCHEMA_ATTR_DISPLAYNAME.clone().into(),
// SCHEMA_ATTR_DISPLAYNAME.clone().into(),
SCHEMA_ATTR_DOMAIN_DISPLAY_NAME.clone().into(),
SCHEMA_ATTR_DOMAIN_LDAP_BASEDN.clone().into(),
SCHEMA_ATTR_DOMAIN_NAME.clone().into(),
@ -842,7 +532,7 @@ impl QueryServerWriteTransaction<'_> {
SCHEMA_ATTR_GIDNUMBER.clone().into(),
SCHEMA_ATTR_GRANT_UI_HINT.clone().into(),
SCHEMA_ATTR_JWS_ES256_PRIVATE_KEY.clone().into(),
SCHEMA_ATTR_LEGALNAME.clone().into(),
// SCHEMA_ATTR_LEGALNAME.clone().into(),
SCHEMA_ATTR_LOGINSHELL.clone().into(),
SCHEMA_ATTR_NAME_HISTORY.clone().into(),
SCHEMA_ATTR_NSUNIQUEID.clone().into(),
@ -856,7 +546,7 @@ impl QueryServerWriteTransaction<'_> {
SCHEMA_ATTR_OAUTH2_RS_IMPLICIT_SCOPES.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_NAME.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_ORIGIN_LANDING.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_ORIGIN.clone().into(),
// SCHEMA_ATTR_OAUTH2_RS_ORIGIN.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_SCOPE_MAP.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_SUP_SCOPE_MAP.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_TOKEN_KEY.clone().into(),
@ -891,6 +581,17 @@ impl QueryServerWriteTransaction<'_> {
// DL7
SCHEMA_ATTR_PATCH_LEVEL_DL7.clone().into(),
SCHEMA_ATTR_DOMAIN_DEVELOPMENT_TAINT_DL7.clone().into(),
SCHEMA_ATTR_REFERS_DL7.clone().into(),
SCHEMA_ATTR_CERTIFICATE_DL7.clone().into(),
SCHEMA_ATTR_OAUTH2_RS_ORIGIN_DL7.clone().into(),
SCHEMA_ATTR_OAUTH2_STRICT_REDIRECT_URI_DL7.clone().into(),
SCHEMA_ATTR_MAIL_DL7.clone().into(),
SCHEMA_ATTR_LEGALNAME_DL7.clone().into(),
SCHEMA_ATTR_DISPLAYNAME_DL7.clone().into(),
// DL8
SCHEMA_ATTR_LINKED_GROUP_DL8.clone().into(),
SCHEMA_ATTR_APPLICATION_PASSWORD_DL8.clone().into(),
SCHEMA_ATTR_ALLOW_PRIMARY_CRED_FALLBACK_DL8.clone().into(),
];
let r = idm_schema
@ -917,14 +618,14 @@ impl QueryServerWriteTransaction<'_> {
// DL4
SCHEMA_CLASS_OAUTH2_RS_PUBLIC_DL4.clone().into(),
// DL5
SCHEMA_CLASS_PERSON_DL5.clone().into(),
// SCHEMA_CLASS_PERSON_DL5.clone().into(),
SCHEMA_CLASS_ACCOUNT_DL5.clone().into(),
SCHEMA_CLASS_OAUTH2_RS_DL5.clone().into(),
// SCHEMA_CLASS_OAUTH2_RS_DL5.clone().into(),
SCHEMA_CLASS_OAUTH2_RS_BASIC_DL5.clone().into(),
// DL6
SCHEMA_CLASS_ACCOUNT_POLICY_DL6.clone().into(),
SCHEMA_CLASS_SERVICE_ACCOUNT_DL6.clone().into(),
SCHEMA_CLASS_SYNC_ACCOUNT_DL6.clone().into(),
// SCHEMA_CLASS_ACCOUNT_POLICY_DL6.clone().into(),
// SCHEMA_CLASS_SERVICE_ACCOUNT_DL6.clone().into(),
// SCHEMA_CLASS_SYNC_ACCOUNT_DL6.clone().into(),
SCHEMA_CLASS_GROUP_DL6.clone().into(),
SCHEMA_CLASS_KEY_PROVIDER_DL6.clone().into(),
SCHEMA_CLASS_KEY_PROVIDER_INTERNAL_DL6.clone().into(),
@ -932,7 +633,18 @@ impl QueryServerWriteTransaction<'_> {
SCHEMA_CLASS_KEY_OBJECT_JWT_ES256_DL6.clone().into(),
SCHEMA_CLASS_KEY_OBJECT_JWE_A128GCM_DL6.clone().into(),
SCHEMA_CLASS_KEY_OBJECT_INTERNAL_DL6.clone().into(),
SCHEMA_CLASS_DOMAIN_INFO_DL6.clone().into(),
// SCHEMA_CLASS_DOMAIN_INFO_DL6.clone().into(),
// DL7
// SCHEMA_CLASS_DOMAIN_INFO_DL7.clone().into(),
SCHEMA_CLASS_SERVICE_ACCOUNT_DL7.clone().into(),
SCHEMA_CLASS_SYNC_ACCOUNT_DL7.clone().into(),
SCHEMA_CLASS_CLIENT_CERTIFICATE_DL7.clone().into(),
SCHEMA_CLASS_OAUTH2_RS_DL7.clone().into(),
// DL8
SCHEMA_CLASS_ACCOUNT_POLICY_DL8.clone().into(),
SCHEMA_CLASS_APPLICATION_DL8.clone().into(),
SCHEMA_CLASS_PERSON_DL8.clone().into(),
SCHEMA_CLASS_DOMAIN_INFO_DL8.clone().into(),
];
let r: Result<(), _> = idm_schema_classes_dl1
@ -1023,10 +735,10 @@ impl QueryServerWriteTransaction<'_> {
IDM_ACP_RADIUS_SERVERS_V1.clone(),
IDM_ACP_RADIUS_SECRET_MANAGE_V1.clone(),
IDM_ACP_PEOPLE_SELF_WRITE_MAIL_V1.clone(),
IDM_ACP_SELF_READ_V1.clone(),
IDM_ACP_SELF_WRITE_V1.clone(),
// IDM_ACP_SELF_READ_V1.clone(),
// IDM_ACP_SELF_WRITE_V1.clone(),
IDM_ACP_ACCOUNT_SELF_WRITE_V1.clone(),
IDM_ACP_SELF_NAME_WRITE_V1.clone(),
// IDM_ACP_SELF_NAME_WRITE_V1.clone(),
IDM_ACP_ALL_ACCOUNTS_POSIX_READ_V1.clone(),
IDM_ACP_SYSTEM_CONFIG_ACCOUNT_POLICY_MANAGE_V1.clone(),
IDM_ACP_GROUP_UNIX_MANAGE_V1.clone(),
@ -1048,13 +760,26 @@ impl QueryServerWriteTransaction<'_> {
IDM_ACP_SERVICE_ACCOUNT_MANAGE_V1.clone(),
// DL4
// DL5
IDM_ACP_OAUTH2_MANAGE_DL5.clone(),
// IDM_ACP_OAUTH2_MANAGE_DL5.clone(),
// DL6
IDM_ACP_GROUP_ACCOUNT_POLICY_MANAGE_DL6.clone(),
// IDM_ACP_GROUP_ACCOUNT_POLICY_MANAGE_DL6.clone(),
IDM_ACP_PEOPLE_CREATE_DL6.clone(),
IDM_ACP_GROUP_MANAGE_DL6.clone(),
IDM_ACP_ACCOUNT_MAIL_READ_DL6.clone(),
IDM_ACP_DOMAIN_ADMIN_DL6.clone(),
// IDM_ACP_DOMAIN_ADMIN_DL6.clone(),
// DL7
// IDM_ACP_SELF_WRITE_DL7.clone(),
IDM_ACP_SELF_NAME_WRITE_DL7.clone(),
IDM_ACP_HP_CLIENT_CERTIFICATE_MANAGER_DL7.clone(),
IDM_ACP_OAUTH2_MANAGE_DL7.clone(),
// DL8
IDM_ACP_SELF_READ_DL8.clone(),
IDM_ACP_SELF_WRITE_DL8.clone(),
IDM_ACP_APPLICATION_MANAGE_DL8.clone(),
IDM_ACP_APPLICATION_ENTRY_MANAGER_DL8.clone(),
IDM_ACP_MAIL_SERVERS_DL8.clone(),
IDM_ACP_DOMAIN_ADMIN_DL8.clone(),
IDM_ACP_GROUP_ACCOUNT_POLICY_MANAGE_DL8.clone(),
];
let res: Result<(), _> = idm_entries
@ -1084,19 +809,6 @@ impl QueryServerReadTransaction<'_> {
let mut report_items = Vec::with_capacity(1);
if current_level <= DOMAIN_LEVEL_6 && upgrade_level >= DOMAIN_LEVEL_7 {
let item = self
.domain_upgrade_check_6_to_7_gidnumber()
.map_err(|err| {
error!(
?err,
"Failed to perform domain upgrade check 6 to 7 - gidnumber"
);
err
})?;
report_items.push(item);
}
if current_level <= DOMAIN_LEVEL_7 && upgrade_level >= DOMAIN_LEVEL_8 {
let item = self
.domain_upgrade_check_7_to_8_security_keys()
@ -1130,94 +842,6 @@ impl QueryServerReadTransaction<'_> {
})
}
pub(crate) fn domain_upgrade_check_6_to_7_gidnumber(
&mut self,
) -> Result<ProtoDomainUpgradeCheckItem, OperationError> {
let filter = filter!(f_and!([
f_or!([
f_eq(Attribute::Class, EntryClass::PosixAccount.into()),
f_eq(Attribute::Class, EntryClass::PosixGroup.into())
]),
// This logic gets a bit messy but it would be:
// If ! (
// (GID_REGULAR_USER_MIN < value < GID_REGULAR_USER_MAX) ||
// (GID_UNUSED_A_MIN < value < GID_UNUSED_A_MAX) ||
// (GID_UNUSED_B_MIN < value < GID_UNUSED_B_MAX) ||
// (GID_UNUSED_C_MIN < value < GID_UNUSED_D_MAX)
// )
f_andnot(f_or!([
f_and!([
// The gid value must be less than GID_REGULAR_USER_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than USER_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_REGULAR_USER_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_A_MIN)
))
]),
f_and!([
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MAX)
),
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_B_MIN)
))
]),
// If both of these conditions are true we get:
// C_MIN < value < D_MAX, which the outer and-not inverts.
f_and!([
// The gid value must be less than GID_UNUSED_D_MAX
f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_D_MAX)
),
// This bit of mental gymnastics is "greater than".
// The gid value must not be less than C_MIN
f_andnot(f_lt(
Attribute::GidNumber,
PartialValue::Uint32(crate::plugins::gidnumber::GID_UNUSED_C_MIN)
))
]),
]))
]));
let results = self.internal_search(filter)?;
let affected_entries = results
.into_iter()
.map(|entry| entry.get_display_id())
.collect::<Vec<_>>();
let status = if affected_entries.is_empty() {
ProtoDomainUpgradeCheckStatus::Pass6To7Gidnumber
} else {
ProtoDomainUpgradeCheckStatus::Fail6To7Gidnumber
};
Ok(ProtoDomainUpgradeCheckItem {
status,
from_level: DOMAIN_LEVEL_6,
to_level: DOMAIN_LEVEL_7,
affected_entries,
})
}
pub(crate) fn domain_upgrade_check_7_to_8_security_keys(
&mut self,
) -> Result<ProtoDomainUpgradeCheckItem, OperationError> {
@ -1289,7 +913,7 @@ impl QueryServerReadTransaction<'_> {
#[cfg(test)]
mod tests {
use super::{ProtoDomainUpgradeCheckItem, ProtoDomainUpgradeCheckStatus};
// use super::{ProtoDomainUpgradeCheckItem, ProtoDomainUpgradeCheckStatus};
use crate::prelude::*;
#[qs_test]
@ -1318,9 +942,8 @@ mod tests {
}
}
#[qs_test(domain_level=DOMAIN_LEVEL_6)]
async fn test_migrations_dl6_dl7(server: &QueryServer) {
// Assert our instance was setup to version 6
#[qs_test(domain_level=DOMAIN_LEVEL_8)]
async fn test_migrations_dl8_dl9(server: &QueryServer) {
let mut write_txn = server.write(duration_from_epoch_now()).await.unwrap();
let db_domain_version = write_txn
@ -1329,164 +952,95 @@ mod tests {
.get_ava_single_uint32(Attribute::Version)
.expect("Attribute Version not present");
assert_eq!(db_domain_version, DOMAIN_LEVEL_6);
// Create an oauth2 client that doesn't have a landing url set.
let oauth2_client_uuid = Uuid::new_v4();
let ea: Entry<EntryInit, EntryNew> = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Uuid, Value::Uuid(oauth2_client_uuid)),
(
Attribute::Class,
EntryClass::OAuth2ResourceServer.to_value()
),
(
Attribute::Class,
EntryClass::OAuth2ResourceServerPublic.to_value()
),
(Attribute::Name, Value::new_iname("test_resource_server")),
(
Attribute::DisplayName,
Value::new_utf8s("test_resource_server")
),
(
Attribute::OAuth2RsOrigin,
Value::new_url_s("https://demo.example.com").unwrap()
)
);
write_txn
.internal_create(vec![ea])
.expect("Unable to create oauth2 client");
// Set the version to 7.
write_txn
.internal_apply_domain_migration(DOMAIN_LEVEL_7)
.expect("Unable to set domain level to version 7");
// post migration verification.
let domain_entry = write_txn
.internal_search_uuid(UUID_DOMAIN_INFO)
.expect("Unable to access domain entry");
assert!(!domain_entry.attribute_pres(Attribute::PrivateCookieKey));
let oauth2_entry = write_txn
.internal_search_uuid(oauth2_client_uuid)
.expect("Unable to access oauth2 client entry");
let origin = oauth2_entry
.get_ava_single_url(Attribute::OAuth2RsOrigin)
.expect("Unable to access oauth2 client origin");
// The origin should have been cloned to the landing.
let landing = oauth2_entry
.get_ava_single_url(Attribute::OAuth2RsOriginLanding)
.expect("Unable to access oauth2 client landing");
assert_eq!(origin, landing);
write_txn.commit().expect("Unable to commit");
}
#[qs_test(domain_level=DOMAIN_LEVEL_7)]
async fn test_migrations_dl7_dl8(server: &QueryServer) {
// Assert our instance was setup to version 7
let mut write_txn = server.write(duration_from_epoch_now()).await.unwrap();
let db_domain_version = write_txn
.internal_search_uuid(UUID_DOMAIN_INFO)
.expect("unable to access domain entry")
.get_ava_single_uint32(Attribute::Version)
.expect("Attribute Version not present");
assert_eq!(db_domain_version, DOMAIN_LEVEL_7);
// Create an oauth2 client that doesn't have a landing url set.
let oauth2_client_uuid = Uuid::new_v4();
let ea: Entry<EntryInit, EntryNew> = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::Account.to_value()),
(Attribute::Uuid, Value::Uuid(oauth2_client_uuid)),
(
Attribute::Class,
EntryClass::OAuth2ResourceServer.to_value()
),
(
Attribute::Class,
EntryClass::OAuth2ResourceServerPublic.to_value()
),
(Attribute::Name, Value::new_iname("test_resource_server")),
(
Attribute::DisplayName,
Value::new_utf8s("test_resource_server")
),
(
Attribute::OAuth2RsOriginLanding,
Value::new_url_s("https://demo.example.com/oauth2").unwrap()
),
(
Attribute::OAuth2RsOrigin,
Value::new_url_s("https://demo.example.com").unwrap()
)
);
write_txn
.internal_create(vec![ea])
.expect("Unable to create oauth2 client");
assert_eq!(db_domain_version, DOMAIN_LEVEL_8);
write_txn.commit().expect("Unable to commit");
// pre migration verification.
// == pre migration verification. ==
// check we currently would fail a migration.
let mut read_txn = server.read().await.unwrap();
match read_txn.domain_upgrade_check_7_to_8_oauth2_strict_redirect_uri() {
Ok(ProtoDomainUpgradeCheckItem {
status: ProtoDomainUpgradeCheckStatus::Fail7To8Oauth2StrictRedirectUri,
..
}) => {
trace!("Failed as expected, very good.");
}
other => {
error!(?other);
unreachable!();
}
};
drop(read_txn);
// Okay, fix the problem.
// let mut read_txn = server.read().await.unwrap();
// drop(read_txn);
let mut write_txn = server.write(duration_from_epoch_now()).await.unwrap();
write_txn
.internal_modify_uuid(
oauth2_client_uuid,
&ModifyList::new_purge_and_set(
Attribute::OAuth2StrictRedirectUri,
Value::Bool(true),
),
)
.expect("Unable to enforce strict mode.");
// Fix any issues
// Set the version to 8.
// == Increase the version ==
write_txn
.internal_apply_domain_migration(DOMAIN_LEVEL_8)
.expect("Unable to set domain level to version 8");
.internal_apply_domain_migration(DOMAIN_LEVEL_9)
.expect("Unable to set domain level to version 9");
// post migration verification.
write_txn.commit().expect("Unable to commit");
}
#[qs_test(domain_level=DOMAIN_LEVEL_8)]
async fn test_migrations_dl8_dl9(_server: &QueryServer) {}
#[qs_test(domain_level=DOMAIN_LEVEL_9)]
async fn test_migrations_dl9_dl10(_server: &QueryServer) {}
async fn test_migrations_dl9_dl10(server: &QueryServer) {
let mut write_txn = server.write(duration_from_epoch_now()).await.unwrap();
let db_domain_version = write_txn
.internal_search_uuid(UUID_DOMAIN_INFO)
.expect("unable to access domain entry")
.get_ava_single_uint32(Attribute::Version)
.expect("Attribute Version not present");
assert_eq!(db_domain_version, DOMAIN_LEVEL_9);
write_txn.commit().expect("Unable to commit");
// == pre migration verification. ==
// check we currently would fail a migration.
// let mut read_txn = server.read().await.unwrap();
// drop(read_txn);
let mut write_txn = server.write(duration_from_epoch_now()).await.unwrap();
// Fix any issues
// == Increase the version ==
write_txn
.internal_apply_domain_migration(DOMAIN_LEVEL_10)
.expect("Unable to set domain level to version 10");
// post migration verification.
write_txn.commit().expect("Unable to commit");
}
#[qs_test(domain_level=DOMAIN_LEVEL_10)]
async fn test_migrations_dl10_dl11(server: &QueryServer) {
let mut write_txn = server.write(duration_from_epoch_now()).await.unwrap();
let db_domain_version = write_txn
.internal_search_uuid(UUID_DOMAIN_INFO)
.expect("unable to access domain entry")
.get_ava_single_uint32(Attribute::Version)
.expect("Attribute Version not present");
assert_eq!(db_domain_version, DOMAIN_LEVEL_10);
write_txn.commit().expect("Unable to commit");
// == pre migration verification. ==
// check we currently would fail a migration.
// let mut read_txn = server.read().await.unwrap();
// drop(read_txn);
let mut write_txn = server.write(duration_from_epoch_now()).await.unwrap();
// Fix any issues
// == Increase the version ==
write_txn
.internal_apply_domain_migration(DOMAIN_LEVEL_11)
.expect("Unable to set domain level to version 11");
// post migration verification.
write_txn.commit().expect("Unable to commit");
}
}


@ -35,6 +35,7 @@ use concread::arcache::{ARCacheBuilder, ARCacheReadTxn};
use concread::cowcell::*;
use hashbrown::{HashMap, HashSet};
use kanidm_proto::internal::{DomainInfo as ProtoDomainInfo, ImageValue, UiHint};
use kanidm_proto::scim_v1::client::ScimFilter;
use kanidm_proto::scim_v1::server::ScimOAuth2ClaimMap;
use kanidm_proto::scim_v1::server::ScimOAuth2ScopeMap;
use kanidm_proto::scim_v1::server::ScimReference;
@ -492,7 +493,7 @@ pub trait QueryServerTransaction<'a> {
f_intent_valid: Filter<FilterValid>,
event: &Identity,
) -> Result<Vec<Arc<EntrySealedCommitted>>, OperationError> {
let se = SearchEvent::new_impersonate(event, f_valid, f_intent_valid);
let se = SearchEvent::new_impersonate(event, f_valid, f_intent_valid, None, false);
self.search(&se)
}
@ -502,8 +503,10 @@ pub trait QueryServerTransaction<'a> {
f_valid: Filter<FilterValid>,
f_intent_valid: Filter<FilterValid>,
event: &Identity,
attrs: Option<BTreeSet<Attribute>>,
acp: bool,
) -> Result<Vec<Entry<EntryReduced, EntryCommitted>>, OperationError> {
let se = SearchEvent::new_impersonate(event, f_valid, f_intent_valid);
let se = SearchEvent::new_impersonate(event, f_valid, f_intent_valid, attrs, acp);
self.search_ext(&se)
}
@ -529,6 +532,8 @@ pub trait QueryServerTransaction<'a> {
filter: Filter<FilterInvalid>,
filter_intent: Filter<FilterInvalid>,
event: &Identity,
attrs: Option<BTreeSet<Attribute>>,
acp: bool,
) -> Result<Vec<Entry<EntryReduced, EntryCommitted>>, OperationError> {
let f_valid = filter
.validate(self.get_schema())
@ -536,7 +541,7 @@ pub trait QueryServerTransaction<'a> {
let f_intent_valid = filter_intent
.validate(self.get_schema())
.map_err(OperationError::SchemaViolation)?;
self.impersonate_search_ext_valid(f_valid, f_intent_valid, event)
self.impersonate_search_ext_valid(f_valid, f_intent_valid, event, attrs, acp)
}
/// Get a single entry by its UUID. This is used heavily for internal
@ -609,7 +614,7 @@ pub trait QueryServerTransaction<'a> {
let filter_intent = filter_all!(f_eq(Attribute::Uuid, PartialValue::Uuid(uuid)));
let filter = filter!(f_eq(Attribute::Uuid, PartialValue::Uuid(uuid)));
let mut vs = self.impersonate_search_ext(filter, filter_intent, event)?;
let mut vs = self.impersonate_search_ext(filter, filter_intent, event, None, false)?;
match vs.pop() {
Some(entry) if vs.is_empty() => Ok(entry),
_ => {
@ -934,6 +939,64 @@ pub trait QueryServerTransaction<'a> {
}
}
fn resolve_scim_json_get(
&mut self,
attr: &Attribute,
value: &JsonValue,
) -> Result<PartialValue, OperationError> {
let schema = self.get_schema();
// Lookup the attr
let Some(schema_a) = schema.get_attributes().get(attr) else {
// No attribute of this name exists - fail fast, there is no point to
// proceed, as nothing can be satisfied.
return Err(OperationError::InvalidAttributeName(attr.to_string()));
};
match schema_a.syntax {
SyntaxType::Utf8String => {
let JsonValue::String(value) = value else {
return Err(OperationError::InvalidAttribute(attr.to_string()));
};
Ok(PartialValue::Utf8(value.to_string()))
}
SyntaxType::Utf8StringInsensitive => {
let JsonValue::String(value) = value else {
return Err(OperationError::InvalidAttribute(attr.to_string()));
};
Ok(PartialValue::new_iutf8(value))
}
SyntaxType::Utf8StringIname => {
let JsonValue::String(value) = value else {
return Err(OperationError::InvalidAttribute(attr.to_string()));
};
Ok(PartialValue::new_iname(value))
}
SyntaxType::Uuid => {
let JsonValue::String(value) = value else {
return Err(OperationError::InvalidAttribute(attr.to_string()));
};
let un = self.name_to_uuid(value).unwrap_or(UUID_DOES_NOT_EXIST);
Ok(PartialValue::Uuid(un))
}
SyntaxType::ReferenceUuid
| SyntaxType::OauthScopeMap
| SyntaxType::Session
| SyntaxType::ApiToken
| SyntaxType::Oauth2Session
| SyntaxType::ApplicationPassword => {
let JsonValue::String(value) = value else {
return Err(OperationError::InvalidAttribute(attr.to_string()));
};
let un = self.name_to_uuid(value).unwrap_or(UUID_DOES_NOT_EXIST);
Ok(PartialValue::Refer(un))
}
_ => return Err(OperationError::InvalidAttribute(attr.to_string())),
}
}
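One design note on the reference arms above: when a reference-typed attribute is queried by a name that does not resolve, the value degrades to UUID_DOES_NOT_EXIST rather than returning an error, so the resulting filter should simply match nothing. A minimal sketch of that fallback (the lookup closure and sentinel are hypothetical stand-ins for name_to_uuid and the real constant):

use uuid::Uuid;

// Stand-in for the real UUID_DOES_NOT_EXIST sentinel.
const DOES_NOT_EXIST_SKETCH: Uuid = Uuid::nil();

fn resolve_reference_sketch(value: &str, lookup: impl Fn(&str) -> Option<Uuid>) -> Uuid {
    // Unknown names degrade to the sentinel, so the caller's query matches no entries
    // instead of failing outright.
    lookup(value).unwrap_or(DOES_NOT_EXIST_SKETCH)
}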
fn resolve_scim_json_put(
&mut self,
attr: &Attribute,
@ -1238,13 +1301,6 @@ pub trait QueryServerTransaction<'a> {
}
fn get_domain_key_object_handle(&self) -> Result<Arc<KeyObject>, OperationError> {
#[cfg(test)]
if self.get_domain_version() < DOMAIN_LEVEL_6 {
// We must be in tests, and this is a DL5 to 6 test. For this we'll just make
// an ephemeral provider.
return Ok(crate::server::keys::KeyObjectInternal::new_test());
};
self.get_key_providers()
.get_key_object_handle(UUID_DOMAIN_INFO)
.ok_or(OperationError::KP0031KeyObjectNotFound)
@ -1562,6 +1618,40 @@ impl QueryServerReadTransaction<'_> {
}
}
}
#[instrument(level = "debug", skip_all)]
pub fn scim_search_ext(
&mut self,
ident: Identity,
filter: ScimFilter,
query: ScimEntryGetQuery,
) -> Result<Vec<ScimEntryKanidm>, OperationError> {
let filter_intent = Filter::from_scim_ro(&ident, &filter, self)?;
let f_intent_valid = filter_intent
.validate(self.get_schema())
.map_err(OperationError::SchemaViolation)?;
let f_valid = f_intent_valid.clone().into_ignore_hidden();
let r_attrs = query
.attributes
.map(|attr_set| attr_set.into_iter().collect());
let se = SearchEvent {
ident,
filter: f_valid,
filter_orig: f_intent_valid,
attrs: r_attrs,
effective_access_check: query.ext_access_check,
};
let vs = self.search_ext(&se)?;
vs.into_iter()
.map(|entry| entry.to_scim_kanidm(self))
.collect()
}
}
impl<'a> QueryServerTransaction<'a> for QueryServerWriteTransaction<'a> {
@ -2335,7 +2425,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
debug!(domain_previous_patch_level = ?previous_patch_level, domain_target_patch_level = ?domain_info_patch_level);
// We have to check for DL0 since that's the initialisation level.
if previous_version <= DOMAIN_LEVEL_5 && previous_version != DOMAIN_LEVEL_0 {
if previous_version < DOMAIN_MIN_REMIGRATION_LEVEL && previous_version != DOMAIN_LEVEL_0 {
error!("UNABLE TO PROCEED. You are attempting a Skip update which is NOT SUPPORTED. You must upgrade one-version of Kanidm at a time.");
error!("For more see: https://kanidm.github.io/kanidm/stable/support.html#upgrade-policy and https://kanidm.github.io/kanidm/stable/server_updates.html");
error!(domain_previous_version = ?previous_version, domain_target_version = ?domain_info_version);
@ -2343,21 +2433,8 @@ impl<'a> QueryServerWriteTransaction<'a> {
return Err(OperationError::MG0008SkipUpgradeAttempted);
}
if previous_version <= DOMAIN_LEVEL_6 && domain_info_version >= DOMAIN_LEVEL_7 {
self.migrate_domain_6_to_7()?;
}
// Similar to the older system info migration handler, these allow "one shot" fixes
// to be issued and run by bumping the patch level.
if previous_patch_level < PATCH_LEVEL_1 && domain_info_patch_level >= PATCH_LEVEL_1 {
self.migrate_domain_patch_level_1()?;
}
if previous_version <= DOMAIN_LEVEL_7 && domain_info_version >= DOMAIN_LEVEL_8 {
self.migrate_domain_7_to_8()?;
}
if previous_version <= DOMAIN_LEVEL_8 && domain_info_version >= DOMAIN_LEVEL_9 {
// 1.4 -> 1.5
self.migrate_domain_8_to_9()?;
}
@ -2366,9 +2443,15 @@ impl<'a> QueryServerWriteTransaction<'a> {
}
if previous_version <= DOMAIN_LEVEL_9 && domain_info_version >= DOMAIN_LEVEL_10 {
// 1.5 -> 1.6
self.migrate_domain_9_to_10()?;
}
if previous_version <= DOMAIN_LEVEL_10 && domain_info_version >= DOMAIN_LEVEL_11 {
// 1.6 -> 1.7
self.migrate_domain_10_to_11()?;
}
// This is here to catch when we increase domain levels but didn't create the migration
// hooks. If this fails it probably means you need to add another migration hook
// in the above.
@ -2390,7 +2473,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
let display_name = domain_entry
.get_ava_single_utf8(Attribute::DomainDisplayName)
.map(str::to_string)
.ok_or(OperationError::InvalidEntryState)?;
.unwrap_or_else(|| format!("Kanidm {}", domain_name));
let domain_ldap_allow_unix_pw_bind = domain_entry
.get_ava_single_bool(Attribute::LdapAllowUnixPwBind)
@ -2639,7 +2722,9 @@ impl<'a> QueryServerWriteTransaction<'a> {
#[cfg(test)]
mod tests {
use crate::prelude::*;
use kanidm_proto::scim_v1::client::ScimFilter;
use kanidm_proto::scim_v1::server::ScimReference;
use kanidm_proto::scim_v1::JsonValue;
use kanidm_proto::scim_v1::ScimEntryGetQuery;
#[qs_test]
@ -3091,4 +3176,44 @@ mod tests {
assert!(ext_access_check.modify_present.check(&Attribute::Name));
assert!(ext_access_check.modify_remove.check(&Attribute::Name));
}
#[qs_test]
async fn test_scim_basic_search_ext_query(server: &QueryServer) {
let mut server_txn = server.write(duration_from_epoch_now()).await.unwrap();
let group_uuid = Uuid::new_v4();
let e1 = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::Group.to_value()),
(Attribute::Name, Value::new_iname("testgroup")),
(Attribute::Uuid, Value::Uuid(group_uuid))
);
assert!(server_txn.internal_create(vec![e1]).is_ok());
assert!(server_txn.commit().is_ok());
// Now read that entry.
let mut server_txn = server.read().await.unwrap();
let idm_admin_entry = server_txn.internal_search_uuid(UUID_IDM_ADMIN).unwrap();
let idm_admin_ident = Identity::from_impersonate_entry_readwrite(idm_admin_entry);
let filter = ScimFilter::And(
Box::new(ScimFilter::Equal(
Attribute::Class.into(),
EntryClass::Group.into(),
)),
Box::new(ScimFilter::Equal(
Attribute::Uuid.into(),
JsonValue::String(group_uuid.to_string()),
)),
);
let base: Vec<ScimEntryKanidm> = server_txn
.scim_search_ext(idm_admin_ident, filter, ScimEntryGetQuery::default())
.unwrap();
assert_eq!(base.len(), 1);
assert_eq!(base[0].header.id, group_uuid);
}
}


@ -279,6 +279,7 @@ impl ValueSetT for ValueSetOauthScopeMap {
match value {
Value::OauthScopeMap(u, m) => {
match self.map.entry(u) {
// We are going to assume that a vacant entry will not be set to empty.
BTreeEntry::Vacant(e) => {
e.insert(m);
Ok(true)
@ -289,7 +290,12 @@ impl ValueSetT for ValueSetOauthScopeMap {
// associated map state. So by always replacing on a present, we are true to
// the intent of the api.
BTreeEntry::Occupied(mut e) => {
e.insert(m);
if m.is_empty() {
e.remove();
} else {
e.insert(m);
}
Ok(true)
}
}


@ -49,7 +49,6 @@ url = { workspace = true, features = ["serde"] }
kanidm_build_profiles = { workspace = true }
[dev-dependencies]
assert_cmd = "2.0.16"
compact_jwt = { workspace = true }
escargot = "0.5.13"
# used for webdriver testing
@ -59,11 +58,14 @@ oauth2_ext = { workspace = true, default-features = false, features = [
"reqwest",
] }
openssl = { workspace = true }
petgraph = { version = "0.7.1", features = ["serde", "serde-1"] }
petgraph = { version = "0.7.1", features = ["serde"] }
serde_json = { workspace = true }
time = { workspace = true }
tokio-openssl = { workspace = true }
kanidm_lib_crypto = { workspace = true }
uuid = { workspace = true }
webauthn-authenticator-rs = { workspace = true }
jsonschema = "0.28.3"
jsonschema = "0.29.0"
[package.metadata.cargo-machete]
ignored = ["escargot", "futures", "kanidm_build_profiles"]


@ -3,14 +3,13 @@
//! - @yaleman
//!
use std::collections::{BTreeMap, BTreeSet};
// use kanidm_client::KanidmClient;
use kanidmd_lib::constants::entries::Attribute;
use kanidmd_lib::constants::groups::{idm_builtin_admin_groups, idm_builtin_non_admin_groups};
use kanidmd_lib::prelude::{builtin_accounts, EntryInitNew};
use petgraph::graphmap::{AllEdges, GraphMap, NodeTrait};
use petgraph::Directed;
use serde::{Deserialize, Serialize};
use std::collections::{BTreeMap, BTreeSet};
use uuid::Uuid;
#[derive(Clone, Deserialize, Serialize)]


@ -26,6 +26,19 @@ async fn test_idm_domain_set_ldap_basedn(rsclient: KanidmClient) {
.expect("Failed to set idm_domain_set_ldap_basedn");
}
#[kanidmd_testkit::test]
async fn test_idm_domain_set_ldap_max_queryable_attrs(rsclient: KanidmClient) {
rsclient
.auth_simple_password(ADMIN_TEST_USER, ADMIN_TEST_PASSWORD)
.await
.expect("Failed to login as admin");
rsclient
.idm_domain_set_ldap_max_queryable_attrs(30)
.await
.expect("Failed to set idm_domain_set_ldap_max_queryable_attrs");
}
#[kanidmd_testkit::test]
async fn test_idm_domain_set_display_name(rsclient: KanidmClient) {
rsclient

Some files were not shown because too many files have changed in this diff.