Domain Display Name (#872)

James Hodgkinson 2022-07-07 13:03:08 +10:00 committed by GitHub
parent 9cf4e180dc
commit d2ea936b16
67 changed files with 1394 additions and 807 deletions


@@ -35,6 +35,9 @@ jobs:
          toolchain: stable
      - uses: actions-rs/cargo@v1
        with:
          command: install
          args: mdbook-template
      - name: Make all the books
        run: |


@@ -1,435 +0,0 @@
Access Profiles
---------------
Access profiles are a way of expressing which persons are allowed to perform which actions
on any database record (object) in the system.
As a result, there are specific requirements to what these can control and how they are
expressed.
Access profiles define an action of allow or deny: denies are enforced before allows, and
will override an allow even when both are applicable. They should only be created by system
access profiles, because we have certain requirements to deny certain changes.
Access profiles are stored as entries and are dynamically loaded into a structure that is
more efficient for use at runtime. Schema and its transactions are a similar implementation.
Search Requirements
-------------------
A search access profile must be able to limit:
1. the content of a search request and its scoping.
2. the returned set of data from the objects visible.
An example is that user Alice should only be able to search for objects where the class
is person and the object is a memberOf the "visible" group. Alice should only be able to
see those users' displayNames (not their legalName for example), and their public email.
Worded a bit differently: you need permission over the scope of entries, you need to be able
to read the attribute to filter on it, and you need to be able to read the attribute to receive
it in the result entry.
Threat: If we search for '(&(name=william)(secretdata=x))', we should not allow this to
proceed because you don't have the rights to read secret data, so you should not be allowed
to filter on it. How does this work with two overlapping ACPs? For example one that allows read
of name and description to class = group, and one that allows name to user. We don't want to
say '(&(name=x)(description=foo))' and have it allowed, because we don't know the target class
of the filter. Do we "unmatch" all users because they have no access to the filter components? (Could
be done by inverting and putting in an AndNot of the non-matchable overlaps). Or do we just
filter out description from the users returned (but that implies they DID match, which is a disclosure).
More concrete:
search {
action: allow
targetscope: Eq("class", "group")
targetattr: name
targetattr: description
}
search {
action: allow
targetscope: Eq("class", "user")
targetattr: name
}
SearchRequest {
...
filter: And: {
Pres("name"),
Pres("description"),
}
}
A potential defense is:
acp class group: Pres(name) and Pres(desc) both in target attr, allow
acp class user: Pres(name) allow, Pres(desc) deny. Invert and Append
So the filter now is:
And: {
AndNot: {
Eq("class", "user")
},
And: {
Pres("name"),
Pres("description"),
},
}
This would now only allow access to the name/desc of group.
If we extend this to a third, this would work. But a more complex example:
search {
action: allow
targetscope: Eq("class", "group")
targetattr: name
targetattr: description
}
search {
action: allow
targetscope: Eq("class", "user")
targetattr: name
}
search {
action: allow
targetscope: And(Eq("class", "user"), Eq("name", "william"))
targetattr: description
}
Now we have a single user where we can read desc. So the compiled filter above would be:
And: {
AndNot: {
Eq("class", "user")
},
And: {
Pres("name"),
Pres("description"),
},
}
This would now be invalid: first, because we would see that class=user is excluded, william
would be excluded also. We also may not even have "class=user" in the second ACP, so we can't
use subset filter matching to merge the two.
As a result, I think the only possible valid solution is to perform the initial filter, then determine
on the candidates if we *could* have valid access to filter on all required attributes. IE
this means even with an index look up, we still are required to perform some filter application
on the candidates.
I think this will mean on a possible candidate, we have to apply all ACPs, then create a union of
the resulting targetattrs, and then compare that set against the set of attributes in the filter.
This will be slow on large candidate sets (potentially), but could be sped up with parallelism, caching
or other methods. However, in the same step, we can also apply the step of extracting only the allowed
read target attrs, so this is a valuable exercise.
Delete Requirements
-------------------
A delete profile must contain the content and scope of a delete.
An example is that user Alice should only be able to delete objects where the memberOf is
"purgeable", and where they are not marked as "protected".
Create Requirements
-------------------
A create profile defines a filtering limit on what content can be created and its requirements.
A create profile defines a limit on what attributes can be created in addition to the filtering
requirements.
An example is user Alice should only be able to create objects where the class is group, and can
only name the group - they can not add members to the group.
A content requirement could be something such as: the value an attribute can contain must conform to a
regex. IE, you can create a group of any name, except where the name contains "admin" somewhere
in it. Arguably, this is partially possible with filtering.
For example, we want to be able to limit the classes that someone *could* create on something
because classes often are used as a security type.
Modify Requirements
-------------------
A modify profile defines a filter limit of what can be modified in the directory.
A modify profile defines a limit of what attributes can be altered in the modification.
A modify profile defines a limit on the modlist actions: For example you may only be allowed to
ensure presence of a value. (Modify allowing purge, not-present, and presence).
Content requirements (see create requirements) are out of scope at the moment.
An example is Alice should only be able to modify a user's password if that user is a member of the
students group.
Note: modify does not imply *read* of the attribute. Care should be taken that we don't disclose
the current value in any error messages if the operation fails.
Targeting Requirements
----------------------
The target of an access profile should be a filter defining the objects that this applies to.
The filter limit for the profiles of what they are acting on requires a single special operation,
which is the concept of "targeting self". For example, we could define a rule that says "members
of group X are allowed self-write of mobile phone number".
An extension to the filter code could allow an extra filter enum of "Self", that would allow this
to operate correctly, and would consume the entry in the event as the target of "Self". This would
be best implemented as a compilation of self -> eq(uuid, self.uuid).
Implementation Details
----------------------
CHANGE: Receiver should be a group, and should be single value/multivalue? Can *only* be a group.
Example profiles:
search {
action: allow
receiver: Eq("memberof", "admins")
targetscope: Pres("class")
targetattr: legalName
targetattr: displayName
description: Allow admins to read all users names
}
search {
action: allow
receiver: Self
targetscope: Self
targetattr: homeAddress
description: Allow everyone to read only their own homeAddress
}
delete {
action: allow
receiver: Or(Eq("memberof", "admins"), Eq("memberof", "servicedesk"))
targetscope: Eq("memberof", "tempaccount")
description: Allow admins or servicedesk to delete any member of "temp accounts".
}
// This difference in targetscope behaviour could be justification to change the keyword here
// to prevent confusion.
create {
action: allow
receiver: Eq("name", "alice")
targetscope: And(Eq("class", "person"), Eq("location", "AU"))
createattr: location
createattr: legalName
createattr: mail
createclass: person
createclass: object
description: Allow alice to make new persons, only with class person+object, and only set
the attributes mail, location and legalName. The created object must conform to targetscope
}
modify {
action: allow
receiver: Eq("name", "claire")
targetscope: And(Eq("class", "group"), Eq("name", "admins"))
presentattr: member
description: Allow claire to promote people as members of the admins group.
}
modify {
action: allow
receiver: Eq("name", "claire")
targetscope: And(Eq("class", "person"), Eq("memberof", "students"))
presentattr: sshkeys
presentattr: class
targetclass: unixuser
description: Allow claire to modify persons in the students group, and to grant them the
class of unixuser (only this class can be granted!). Subsequently, she may then give
the sshkeys values as a modification.
}
modify {
action: allow
receiver: Eq("name", "alice")
targetscope: Eq("memberof", "students")
removedattr: sshkeys
description: Allow alice to purge or remove sshkeys from members of the students group,
but not add new ones
}
modify {
action: allow
receiver: Eq("name", "alice")
targetscope: Eq("memberof", "students")
removedattr: sshkeys
presentattr: sshkeys
description: Allow alice full control over the ssh keys attribute on members of students.
}
// This may not be valid: Perhaps if <*>attr: is on modify/create, then targetclass must
// be set, else class is considered empty.
//
// This profile could in fact be an invalid example, because presentattr: class, but not
// targetclass, so nothing could be granted.
modify {
action: allow
receiver: Eq("name", "alice")
targetscope: Eq("memberof", "students")
presentattr: class
description: Allow alice to grant any class to members of students.
}
Formalised Schema
-----------------
A complete schema would be:
attributes:
* acp_allow single value, bool
* acp_enable single value, bool
* acp_receiver single value, filter
* acp_targetscope single value, filter
* acp_search_attr multi value, utf8 case insense
* acp_create_class multi value, utf8 case insense
* acp_create_attr multi value, utf8 case insense
* acp_modify_removedattr multi value, utf8 case insense
* acp_modify_presentattr multi value, utf8 case insense
* acp_modify_class multi value, utf8 case insense
classes:
* access_control_profile MUST [acp_receiver, acp_targetscope] MAY [description] MAY acp_allow
* access_control_search MUST [acp_search_attr]
* access_control_delete
* access_control_modify MAY [acp_modify_removedattr, acp_modify_presentattr, acp_modify_class]
* access_control_create MAY [acp_create_class, acp_create_attr]
Important: empty sets really mean empty sets! The ACP code will assert that both
access_control_profile *and* one of the search/delete/modify/create classes exists on an ACP. An
important factor of this design is now the ability to *compose* multiple ACPs on a single entry,
allowing a create/delete/modify to exist! However, each one must still list their respective actions
to allow proper granularity.
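As a hedged sketch (names and shapes are illustrative guesses, not the server's real types), the composed classes above might load into runtime structures like:

```rust
// Illustrative only: a guess at runtime structures for the schema above.
#[derive(Debug, Clone)]
pub enum Filter {
    Eq(String, String),
    Pres(String),
    And(Vec<Filter>),
    Or(Vec<Filter>),
    AndNot(Box<Filter>),
    SelfUuid, // the "Self" keyword, compiled later to Eq("uuid", ...)
}

// access_control_profile: MUST acp_receiver and acp_targetscope.
#[derive(Debug, Clone)]
pub struct AccessControlProfile {
    pub receiver: Filter,
    pub targetscope: Filter,
    pub allow: bool, // acp_allow
}

// One entry can compose several action classes over the same base profile,
// e.g. search + modify, each carrying its own attribute grants.
#[derive(Debug, Clone)]
pub struct AccessControlSearch {
    pub acp: AccessControlProfile,
    pub attrs: Vec<String>, // acp_search_attr
}
```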
Search Application
------------------
The set of access controls is checked, and the set where receiver matches the current identified
user is collected. These are then added to the user's requested search as:
And(<User Search Request>, Or(<Set of Search Profile Filters>))
In this manner, the search security is easily applied: if the target fails to conform to one of the
required search profile filters, the outer And condition is nullified and no results are returned.
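As a sketch (re-using the hypothetical `Filter` enum from the earlier example), the application is a single filter construction:

```rust
// Sketch: wrap the user's search with the Or of every applicable
// search-profile targetscope, per the And(...) form above.
fn apply_search_access(user_filter: Filter, matched_scopes: Vec<Filter>) -> Filter {
    Filter::And(vec![user_filter, Filter::Or(matched_scopes)])
}
```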
Once complete, in the translation of the entry -> proto_entry, each access control and its allowed
set of attrs has to be checked to determine what of that entry can be displayed. Consider there are
three entries, A, B, C. An ACI that allows read of "name" on A, B exists, and a read of "mail" on
B, C. The correct behaviour is then:
A: name
B: name, mail
C: mail
So this means that the entry -> proto entry part is likely the most expensive part of the access
control operation, but also one of the most important. It may be possible to compile to some kind
of faster method, but initially a simple version is needed.
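A hedged sketch of that reduction step (hypothetical types from the earlier example; `entry_matches` stands in for real filter evaluation): for each entry, union the attrs of every ACP whose targetscope matches it, then strip everything else.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Placeholder: evaluate an ACP targetscope against an entry.
fn entry_matches(_scope: &Filter, _entry: &BTreeMap<String, String>) -> bool {
    true // assumption: real filter evaluation happens here
}

// Reduce an entry to only the attributes some matching search ACP allows.
fn reduce_entry(
    entry: &BTreeMap<String, String>,
    profiles: &[AccessControlSearch],
) -> BTreeMap<String, String> {
    // Union of allowed attrs across every profile that matches this entry.
    let allowed: BTreeSet<&str> = profiles
        .iter()
        .filter(|p| entry_matches(&p.acp.targetscope, entry))
        .flat_map(|p| p.attrs.iter().map(String::as_str))
        .collect();
    entry
        .iter()
        .filter(|(k, _)| allowed.contains(k.as_str()))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}
```

With the A, B, C example above, an entry only retains "name" or "mail" when an ACP whose scope matches that entry grants it, which reproduces the per-entry behaviour described.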
Delete Application
------------------
Delete is similar to search, however there is the risk that the user may say something like:
Pres("class").
Now, were we to approach this like search, this would then mean "everything the identified user
is allowed to delete, is deleted". A consideration here is that Pres("class") would delete "all"
objects in the directory, but with the access control present, it would limit the delete to the
set of allowed deletes.
In a sense, this is a correct behaviour - they were allowed to delete everything they asked to
delete. However, in another sense it's not valid: the request was broad and they were not allowed
access to delete everything they requested.
The possible abuse here is that you could then use deletes to determine existence of entries in
the database that you do not have access to. This however requires someone to HAVE a delete
privilege, which is itself a very high level of access, so this risk may be minimal.
So the choices are:
* Treat it like search and allow the user to delete "what they are allowed to delete"
* Deny the request, because their delete was too broad, and they should specify better
  what they want to delete.
Option 2 seems more correct because the delete request is an explicit request, not a request where
you want partial results - imagine someone wants to delete users A and B at the same time, but only
has access to A. They want this request to fail so they KNOW B was not deleted, rather than
succeed and have B still exist with a partial delete status.
However, the issue is Option 2 means that you could have And(Eq(attr, accessible), Eq(attr, denied)),
and denial of that would indicate presence of the denied attr. So option 1 makes sense in terms
of preventing a security risk of info disclosure.
This is also a concern for modification, where the modification attempt may or may not
fail depending on the entries and if you can/can't see them.
BETTER IDEA. You can only delete/modify within the scope of the read you have. If you can't
read it (based on the read rules of search), you can't delete it. This is in addition to the filter
rules of the delete applying as well. So doing a delete of Pres(class), will only delete
in your READ SCOPE and will never disclose if you have no access.
Create Application
------------------
Create seems like the easiest to apply. Ensure that only the attributes in createattr are in the
createevent, ensure the classes only contain the set in createclass, then finally apply
filter_no_index to the entry to test it. If all of this passes, the create is allowed.
A key point is that there is no union of create ACIs - the WHOLE ACI must pass, not parts of
multiple. This means if one control "allows creating group with member" and another "allows creating
user with name", creating a group with name is not allowed - despite your ability to create
an entry with "name", its classes don't match. This way, the admin of the service can define
create controls with really specific intent as to how they'll be used, without risk of two
controls causing unintended effects (users that are also groups, or allowing values that
were not intended).
An important consideration is how to handle overlapping ACIs. If two ACIs *could* match the create,
should we enforce that both conditions are upheld? Or does a single upheld ACI allow the create?
In some cases it may not be possible to satisfy both, and that would block creates. The intent
of the access profile is that "something like this CAN be created", so I believe that provided
a single control passes, the create should be allowed.
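A minimal sketch of that rule (hypothetical types, eliding the targetscope/filter check): the create passes only if a single profile grants every class and attribute of the new entry.

```rust
// Sketch: one ACI must wholly permit the create; grants never mix.
struct AccessControlCreate {
    classes: Vec<String>, // acp_create_class
    attrs: Vec<String>,   // acp_create_attr
}

fn create_allowed(
    entry_classes: &[String],
    entry_attrs: &[String],
    profiles: &[AccessControlCreate],
) -> bool {
    profiles.iter().any(|p| {
        entry_classes.iter().all(|c| p.classes.contains(c))
            && entry_attrs.iter().all(|a| p.attrs.contains(a))
    })
}
```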
Modify Application
------------------
Modify is similar to the above; however, we specifically filter on the modlist action of present,
removed or purged with the action. Otherwise, the rules of create stand: provided all requirements
of the modify are upheld, it is allowed provided at least a single profile allows the change.
A key difference is that if the profile lists multiple presentattr types, a modify conforms so long
as each presentattr it asserts is listed in the profile. IE we say "presentattr: name, email", but we
only attempt to modify "email".
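A sketch of that presentattr rule (hypothetical names): every attribute the modlist asserts presence of must be in the profile, but the modify need not touch them all.

```rust
// Sketch: a modify conforms to a profile when its present-actions are a
// subset of the profile's presentattr grants.
fn modify_conforms(modlist_present: &[String], profile_present: &[String]) -> bool {
    modlist_present.iter().all(|a| profile_present.contains(a))
}
```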
Considerations
--------------
* When should access controls be applied? During an operation, we only schema validate after
pre plugins, so likely it has to be "at that point", to ensure schema validity of the entries
we want to assert changes to.
* Self filter keyword should compile to eq("uuid", "...."). When do we do this and how?
* memberof could take name or uuid, we need to be able to resolve this correctly, but this is likely
a memberof issue we need to address, ie memberofuuid vs memberof attr.
* Content controls in create and modify will be important to get right to avoid the security issues
of ldap access controls. Given that class has special importance, it's only right to give it extra
consideration in these controls.
* In the future when recyclebin is added, a re-animation access profile should be created allowing
revival of entries given certain conditions of the entry we are attempting to revive.


@@ -2,41 +2,41 @@
Architectural Overview
----------------------
Kanidm has a number of components and layers that make it up. As this project
is continually evolving, if you have questions or notice discrepancies
with this document please contact William (Firstyear) at any time.
Tools
-----
Kanidm Tools are a set of command line clients that are intended to help
administrators deploy, interact with, and support a Kanidm server installation.
These tools may also be used for servers or machines to authenticate and
identify users. This is the "human interaction" part of the server from a
CLI perspective.
Clients
-------
The `kanidm` client is a reference implementation of the client library, that
others may consume or interact with to communicate with a Kanidm server instance.
The tools above use this client library for all of its actions. This library
is intended to encapsulate some high level logic as an abstraction over the REST API.
Proto
-----
The `kanidm` proto is a set of structures that are used by the REST and raw APIs
for HTTP communication. These are intended to be a reference implementation of the
on-the-wire protocol, but importantly these are also how the server represents its
communication. This makes this the authoritative source of protocol layouts
with regard to REST or raw communication.
Kanidmd (main server)
---------------------
Kanidmd is intended to have minimal (thin) client tools, where the server itself
contains most logic for operations, transformations, and routing of requests to
their relevant datatypes. As a result, the `kanidmd` section is the largest component
of the project as it implements nearly everything required for IDM functionality to exist.
Search
======
@@ -44,29 +44,33 @@ Search
Search is the "hard worker" of the server, intended to be a fast path with minimal overhead
so that clients can acquire data as quickly as possible. The server follows the below pattern.
![Search flow diagram](diagrams/search-flow.png)
(1) All incoming requests are from a client on the left. These are either REST
requests, or a structured protocol request via the raw interface. It's
interesting to note the raw request is almost identical to the queryserver
event types - whereas for REST requests we have to generate request messages that can
become events.
The frontend uses a webserver with a thread-pool to process and decode
network I/O operations concurrently. This then sends asynchronous messages
to a worker (actor) pool for handling.
(2) These search messages in the actors are transformed into "events" - a self
contained structure containing all relevant data related to the operation at hand.
This may be the event origin (a user or internal), the requested filter (query),
and perhaps even a list of attributes requested. These events are designed
to ensure correctness. When a search message is transformed to a search event, it
is checked by the schema to ensure that the request is valid and can be
satisfied securely.
As these workers are in a thread pool, it's important that these are concurrent and
do not lock or block - this concurrency is key to high performance and safety.
It's also worth noting that this is the level where read transactions are created
and committed - all operations are transactionally protected from an early stage
to guarantee consistency of the operations.
(3) When the event is known to be consistent, it is then handed to the queryserver - the query server
begins a process of steps on the event to apply it and determine the results for the request.
This process involves further validation of the query, association of metadata to the query
for the backend, and then submission of the high-level query to the backend.
@@ -105,12 +109,12 @@ worth paying attention to.
.. image:: diagrams/write-flow.png
   :width: 800
(1), (2) Like search, all client operations come from the REST or raw APIs, and are transformed or
generated into messages. These messages are sent to a single write worker. There is only a single
write worker due to the use of copy-on-write structures in the server, limiting us to a single writer,
but allowing search transactions to proceed in parallel without blocking.
(3) From the worker, the relevant event is created. This may be a "Create", "Modify" or "Delete" event.
The query server handles these slightly differently. In the create path, we take the set of entries
you wish to create as our candidate set. In modify or delete, we perform an impersonation search,
and use the set of entries within your read bounds to generate the candidate set. This candidate
@@ -119,18 +123,18 @@ set will now be used for the remainder of the writing operation.
It is at this point we assert access controls over the candidate set and the changes you wish
to make. If you are not within rights to perform these operations the event returns an error.
(4) The entries are now sent to the pre-operation plugins for the relevant operation type. This allows
transformation of the candidate entries beyond the scope of your access controls, and to maintain
some elements of data consistency. For example one plugin prevents creation of system protected types
where another ensures that uuid exists on every entry.
(5) These transformed entries are now returned to the query server.
(6) The backend is sent the list of entries for writing. Indexes are generated (7) as required based
on the new or modified entries, and the entries themselves are written (8) into the core db tables. This
operation returns a result (9) to the backend, which is then filtered up to the query server (10).
(11) Provided all operations to this point have been successful, we now apply post write plugins which
may enforce or generate different properties in the transaction. This is similar to the pre plugins,
but allows different operations. For example, a post plugin ensures uuid reference types are
consistent and valid across the set of changes in the database. The most critical is memberof,
@@ -139,7 +143,7 @@ rbac operations. These are done as post plugins because at this point internal s
yield and see the modified entries that we have just added to the indexes and datatables, which
is important for consistency (and simplicity) especially when you consider batched operations.
(12) Finally the result is returned up (13) through (14) the layers (15) to the client to
inform them of the success (or failure) of the operation.


@@ -2,13 +2,12 @@
Default IDM Layout
------------------
It's important we have a good default IDM entry layout, as this will serve as
examples and guidance for many users. We also need to consider that the defaults
may be ignored, but many users will consume them by default.
Additionally, we also need to think carefully about the roles and interactions with the
default entries, and how people will deploy and interact with software like this. This document
is to discuss the roles and their requirements, rather than the absolute details of the implementation.
Privileged Groups
-----------------


@@ -0,0 +1,6 @@
# Domain Display Name
A human-facing string to use in places like web page titles, TOTP issuer codes, the OAuth authorisation server name, etc.
On system creation, or if it hasn't been set, it'll default to `format!("Kanidm {}", domain_name)`, so you'll see `Kanidm idm.example.com` if your domain is `idm.example.com`.
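A minimal sketch of that fallback rule (the function is illustrative, not the server's actual API):

```rust
// Sketch: derive the effective display name, defaulting from the domain.
fn effective_display_name(configured: Option<&str>, domain_name: &str) -> String {
    match configured {
        Some(name) => name.to_string(),
        // Default derived from the domain name, per the rule above.
        None => format!("Kanidm {}", domain_name),
    }
}

fn main() {
    assert_eq!(
        effective_display_name(None, "idm.example.com"),
        "Kanidm idm.example.com"
    );
}
```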


@@ -8,26 +8,24 @@ search term (filter) faster.
World without indexing
----------------------
Almost all databases are built on top of a key-value storage engine of some nature.
In our case we are using (feb 2019) sqlite and hopefully SLED in the future.
Our entries, which contain sets of avas, are serialised into a byte format
(feb 2019, json but soon cbor) and stored in a table of "id: entry". For example:
| ID   | data                                                                        |
|------|-----------------------------------------------------------------------------|
| 01   | `{ 'Entry': { 'name': ['name'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 02   | `{ 'Entry': { 'name': ['beth'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 03   | `{ 'Entry': { 'name': ['alan'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 04   | `{ 'Entry': { 'name': ['john'], 'class': ['person'], 'uuid': ['...'] } }`  |
| 05   | `{ 'Entry': { 'name': ['kris'], 'class': ['person'], 'uuid': ['...'] } }`  |
The ID column is *private* to the backend implementation and is never revealed to the higher
level components. However the ID is very important to indexing :)
If we wanted to find `Eq(name, john)` here, what do we need to do? A full table scan is where we
perform:
    data = sqlite.do(SELECT * from id2entry);
@@ -45,38 +43,39 @@ How does indexing work?
Indexing is a pre-computed lookup table of what you *might* search in a specific format. Let's say
in our example we have an equality index on "name" as an attribute. Now in our backend we define
an extra table called "index_eq_name". Its contents would look like:
| index | idl (ID List) |
|-------|---------------|
| alan  | [03, ]        |
| beth  | [02, ]        |
| john  | [04, ]        |
| kris  | [05, ]        |
| name  | [01, ]        |
So when we perform our search for Eq(name, john) again, we see name is indexed. We then perform:
```sql
SELECT * from index_eq_name where index=john;
```
This would give us the idl (ID list) of [04,]. This is the "ID's of every entry where name equals
john".
We can now take this back to our id2entry table and perform:
```sql
data = sqlite.do(SELECT * from id2entry where ID = 04)
```
The key-value engine only gives us the entry for john, and we have a match! If id2entry
had 1 million entries, a full table scan would be 1 million loads and compares - with the
index, it was 2 loads and one compare. That's 30000x faster (potentially ;) )!
To improve on this, if we had a query like Or(Eq(name, john), Eq(name, kris)) we can use our
indexes to speed this up.
We would query index_eq_name again, and we would perform the search for both john, and kris.
Because this is an OR we then union the two idl's, and we would have:
    [04, 05,]
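A minimal sketch of that union, assuming plain sorted `u64` lists (the real backend uses a compressed idl type):

```rust
// Sketch: union two sorted, de-duplicated ID lists, as an Or term requires.
fn idl_union(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut out = Vec::with_capacity(a.len() + b.len());
    let (mut i, mut j) = (0, 0);
    while i < a.len() && j < b.len() {
        match a[i].cmp(&b[j]) {
            std::cmp::Ordering::Less => { out.push(a[i]); i += 1; }
            std::cmp::Ordering::Greater => { out.push(b[j]); j += 1; }
            std::cmp::Ordering::Equal => { out.push(a[i]); i += 1; j += 1; }
        }
    }
    out.extend_from_slice(&a[i..]);
    out.extend_from_slice(&b[j..]);
    out // idl_union(&[4], &[5]) == [4, 5]
}
```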
@@ -95,12 +94,12 @@ Filter Optimisation
-------------------
Filter optimisation begins to play an important role when we have indexes. If we indexed
something like `Pres(class)`, then the idl for that search is the set of all database
entries. Similarly, if our database of 1 million entries has 250,000 `class=person`, then
`Eq(class, person)` will have an idl containing 250,000 ids. Even with idl compression, this
is still a lot of data!
There tend to be two types of searches against a directory like Kanidm.
* Broad searches
* Targeted single entry searches
@@ -108,45 +107,53 @@ There tend to be two types of searches against a directory like kanidm.
For broad searches, filter optimising does little - we just have to load those large idls, and
use them. (Yes, loading the large idl and using it is still better than a full table scan though!)
However, for targeted searches, filter optimisation really helps.
Imagine a query like:
```
And(Eq(class, person), Eq(name, claire))
```
In this case with our database of 250,000 persons, our idl's would have:
```
And( idl[250,000 ids], idl(1 id))
```
Which means the result will always be the *single* id in the idl or *no* value
because it wasn't present.
We add a single concept to the server called the "filter test threshold". This is the
state in which a partially-resolved candidate set is shortcut out of index resolution, and
we then apply the filter in the manner of a full table scan to the partial set, because
that will be faster than the remaining index loading and testing.
When we have this test threshold, there exist two possibilities for this filter.
```
And( idl[250,000 ids], idl(1 id))
```
We load the 250,000 id idl and then perform the intersection with the idl of 1 value, and
result in 1 or 0.
```
And( idl(1 id), idl[250,000 ids])
```
We load the single idl value for name, and then as we are below the test-threshold we
shortcut out and apply the filter to entry ID 1 - yielding a match or no match.
Notice in the second case, by promoting the "smaller" idl, we were able to save the work of the
idl load and intersection, as our first equality of "name" was more targeted.
Filter optimisation is about re-arranging these filters in the server using our insight into the
data to provide faster searches and avoid indexes that are costly unless they are needed.
In this case, we would *demote* any filter where Eq(class, ...) to the *end* of the And,
because it is highly likely to be less targeted than the other Eq types. Another example
would be promotion of Eq filters to the front of an And over a Sub term, where Sub indexes
tend to be larger and have longer IDLs.
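A hedged sketch of that re-arrangement, with made-up cost estimates (the real server derives these from index data, not constants):

```rust
// Sketch: order And terms so cheap, selective idls load first; demote
// class equality and presence terms which tend to match huge idls.
#[derive(Debug)]
enum Term {
    Eq(String, String),
    Pres(String),
    Sub(String, String),
}

fn estimated_idl_len(t: &Term) -> usize {
    match t {
        Term::Eq(attr, _) if attr == "class" => usize::MAX, // e.g. 250,000 ids
        Term::Eq(_, _) => 1,        // eq indexes are usually tight
        Term::Sub(_, _) => 100_000, // sub indexes tend to be longer
        Term::Pres(_) => usize::MAX, // pres can be the whole database
    }
}

fn optimise_and(terms: &mut [Term]) {
    terms.sort_by_key(estimated_idl_len);
}
```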
Implementation Details and Notes
--------------------------------
@@ -173,16 +180,16 @@ to remember this is a *set* of possible index emissions, where we could have mul
returned. This will be important with claims for credentials so that the claims can be indexed
correctly.
We also require a special name to uuid, and uuid to name index. These are to accelerate
the name2uuid and uuid2name functions which are common in resolving on search. These will
be named in the tables as:
* idx_name2uuid
* idx_uuid2name
They will be structured as string, string for both - where the uuid and name column matches
the correct direction, and is the primary key. We could use a single table, but if
we change to sled we need to split this, so we pre-empt this change and duplicate the data here.
Indexing States
===============
@@ -201,25 +208,24 @@ would just remove all the index tables before hand.
* Write operation index metadata
At the start of a write transaction, the schema passes us a map of the current attribute
index states so that on filter application or modification we are aware of what attrs are
indexed. It is assumed that `name2uuid` and `uuid2name` are always indexed.
* Search Index Metadata
When filters are resolved they are tagged by their indexed state to allow optimisation
to occur. We then process each filter element and their tag to determine the indexes
needed to build a candidate set. Once we reach threshold we return the partial candidate set,
and begin the `id2entry` process and the `entry_match_no_index` routine.
`And` and `Or` terms have flags if they are partial or fully indexed, meaning we could have a
shortcut where if the outermost term is a fully indexed term, then we can avoid the
`entry_match_no_index` call.
* Create
This is one of the simplest steps. On create we iterate over the entries' avas and,
referencing the index metadata of the transaction, create the indexes as needed from
the values (before dbv conversion).
* Delete
@@ -230,18 +236,17 @@ removal of all the other attributes.
* Modify
This is the truly scary and difficult situation. The simple method would be to "delete" all
indexes based on the pre-entry state, and then to create again. However the current design
of Entry and modification doesn't work like this, as we only get the Entry to add.
Most likely we will need to change modify to take the set of (pre, post) candidates as a pair
*OR* we have the entry store its own pre-post internally. Given we already need to store the pre
/post entries in the txn, it's likely better to have a pairing of these, and that allows us to
then index replication metadata later as the entry will contain its own changelog internally.
Given the pair, we then assert that they are the same entry (id). We can then use the
index metadata to generate an indexing diff between them, containing a set of index items
to remove (due to removal of the attr or value), and what to add (due to addition).
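A hedged sketch of the diff generation over the (pre, post) pair, with the per-attribute value sets simplified to strings:

```rust
use std::collections::BTreeSet;

// Sketch: for one indexed attribute, emit index keys to remove and to add.
// Real index entries also encode the attribute name and index type (eq/sub/pres).
fn index_diff<'a>(
    pre: &'a BTreeSet<String>,
    post: &'a BTreeSet<String>,
) -> (Vec<&'a str>, Vec<&'a str>) {
    let remove = pre.difference(post).map(String::as_str).collect();
    let add = post.difference(pre).map(String::as_str).collect();
    (remove, add)
}
```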
The major transformation cases for testing are:


@@ -2,11 +2,11 @@
Schema References
-----------------
On top of normal schema, it is sometimes necessary for objects to be able to refer
to each other. The classic example is groups containing members, with memberof as
a reverse lookup of these relationships. In order to improve the clarity and
performance of these types, instead of having them simply as free-utf8 fields that
require upkeep, we should have a dedicated reference type in the schema.
Benefits
--------


@@ -1,7 +1,14 @@
#!/bin/sh
# you can set the hostname if you want, but it'll default to localhost
if [ -z "$CERT_HOSTNAME" ]; then
    CERT_HOSTNAME="localhost"
fi
# also where the files are stored
if [ -z "$KANI_TMP" ]; then
    KANI_TMP=/tmp/kanidm/
fi
ALTNAME_FILE="${KANI_TMP}altnames.cnf"
CACERT="${KANI_TMP}ca.pem"
@@ -65,19 +72,21 @@ DEVEOF
openssl req -x509 -new -newkey rsa:4096 -sha256 \
    -keyout "${CAKEY}" \
    -out "${CACERT}" \
    -days +31 \
    -subj "/C=AU/ST=Queensland/L=Brisbane/O=INSECURE/CN=insecure.ca.localhost" -nodes
# generate the ca private key
openssl genrsa -out "${KEYFILE}" 4096
# generate the certificate signing request
openssl req -sha256 \
    -config "${ALTNAME_FILE}" \
    -new -extensions v3_req \
    -key "${KEYFILE}" \
    -subj "/C=AU/ST=Queensland/L=Brisbane/O=INSECURE/CN=${CERT_HOSTNAME}" \
    -nodes \
    -out "${CSRFILE}"
# sign the cert
openssl x509 -req -days 31 \
    -extfile "${ALTNAME_FILE}" \
@@ -95,4 +104,5 @@ openssl dhparam -in "${CAFILE}" -out "${DHFILE}" 2048
echo "Certificate chain is at: ${CHAINFILE}"
echo "Private key is at: ${KEYFILE}"
echo ""
echo "**Remember** the default action is to store the files in /tmp/ so they'll be deleted on reboot! Set the KANI_TMP environment variable before running this script if you want to change that. You'll need to update server config elsewhere if you do, however."


@@ -1,7 +1,8 @@
[book]
authors = [
    "James Hodgkinson",
    "William Brown",
    "Carla Schroder",
]
language = "en"
multilingual = false
@@ -11,3 +12,5 @@ title = "Kanidm Administration"
[output.html]
edit-url-template = "https://github.com/kanidm/kanidm/edit/master/kanidm_book/{path}"
git-repository-url = "https://github.com/kanidm/kanidm"
[preprocessor.template]

View file

@@ -1,4 +1,4 @@
# Kanidm
- [Introduction to Kanidm](intro.md)
- [Installing the Server](installing_the_server.md)
@@ -18,9 +18,12 @@
# For Developers
- [Developer Guide](DEVELOPER_README.md)
- [Design Documents]()
  - [Access Profiles](developers/designs/access_profiles_and_security.md)
- [Python Module](developers/python.md)
- [RADIUS Integration](developers/radius.md)
# Integrations
- [Oauth2](integrations/oauth2.md)


@@ -156,7 +156,7 @@ Second, change `domain` and `origin` in `server.toml`.
Third, trigger the database domain rename process.
    docker run --rm -i -t -v kanidmd:/data \
        kanidm/server:latest /sbin/kanidmd domain rename -c /data/server.toml
Finally, you can now start your instance again.


@@ -0,0 +1 @@
# Designs


@@ -0,0 +1,477 @@
Access Profiles
===============
Access Profiles (ACPs) are a way of expressing the set of actions which accounts are
permitted to perform on database records (`object`) in the system.
As a result, there are specific requirements to what these can control and how they are
expressed.
Access profiles define an action of `allow` or `deny`: `deny` has priority over `allow`
and will override an `allow` even when both apply. They should only be created by system access profiles
because certain changes must be denied.
Access profiles are stored as entries and are dynamically loaded into a structure that is
more efficient for use at runtime. `Schema` and its transactions are a similar implementation.
Search Requirements
-------------------
A search access profile must be able to limit:
1. the content of a search request and its scope.
2. the set of data returned from the objects visible.
An example:
> Alice should only be able to search for objects where the class is `person`
> and the object is a memberOf the group called "visible".
>
> Alice should only be able to see the attribute `displayName` for those
> users (not their `legalName`), and their public `email`.
Worded a bit differently: you need permission over the scope of entries, you need to be able
to read the attribute to filter on it, and you need to be able to read the attribute to receive
it in the result entry.
If Alice searches for `(&(name=william)(secretdata=x))`, we should not allow this to
proceed because Alice doesn't have the rights to read secret data, so they should not be allowed
to filter on it. How does this work with two overlapping ACPs? For example: one that allows read
of name and description to class = group, and one that allows name to user. We don't want
`(&(name=x)(description=foo))` to be allowed, because we don't know the target class
of the filter. Do we "unmatch" all users because they have no access to the filter components? (Could
be done by inverting and putting in an AndNot of the non-matchable overlaps.) Or do we just
filter out description from the users returned (but that implies they DID match, which is a disclosure).
More concrete:
```yaml
search {
action: allow
targetscope: Eq("class", "group")
targetattr: name
targetattr: description
}
search {
action: allow
targetscope: Eq("class", "user")
targetattr: name
}
SearchRequest {
...
filter: And: {
Pres("name"),
Pres("description"),
}
}
```
A potential defense is:
```yaml
acp class group: Pres(name) and Pres(desc) both in target attr, allow
acp class user: Pres(name) allow, Pres(desc) deny. Invert and Append
```
So the filter now is:
```yaml
And: {
AndNot: {
Eq("class", "user")
},
And: {
Pres("name"),
Pres("description"),
},
}
```
This would now only allow access to the `name` and `description` of the class `group`.
If we extend this to a third, this would work. A more complex example:
```yaml
search {
action: allow
targetscope: Eq("class", "group")
targetattr: name
targetattr: description
}
search {
action: allow
targetscope: Eq("class", "user")
targetattr: name
}
search {
action: allow
targetscope: And(Eq("class", "user"), Eq("name", "william"))
targetattr: description
}
```
Now we have a single user where we can read `description`. So the compiled filter above would be:
```yaml
And: {
AndNot: {
Eq("class", "user")
},
And: {
Pres("name"),
Pres("description"),
},
}
```
This would now be invalid: first, because we would see that `class=user` is excluded, `william`
would be excluded also. We also may not even have "class=user" in the second ACP, so we can't
use subset filter matching to merge the two.
As a result, I think the only possible valid solution is to perform the initial filter, then determine
on the candidates if we *could* have valid access to filter on all required attributes. IE
this means even with an index look up, we still are required to perform some filter application
on the candidates.
I think this will mean on a possible candidate, we have to apply all ACP, then create a union of
the resulting targetattrs, and then compared that set into the set of attributes in the filter.
This will be slow on large candidate sets (potentially), but could be sped up with parallelism, caching
or other methods. However, in the same step, we can also apply the step of extracting only the allowed
read target attrs, so this is a valuable exercise.
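As a sketch of that per-candidate step (with simplified, hypothetical types - the real server operates over its own entry and filter structures):

```rust
use std::collections::{BTreeMap, HashSet};

// A candidate entry, reduced to a flat attribute map for illustration.
type Entry = BTreeMap<&'static str, &'static str>;

// A simplified ACP: a targetscope test plus the attributes it allows.
struct Acp {
    target_scope: Box<dyn Fn(&Entry) -> bool>,
    target_attr: HashSet<&'static str>,
}

// Union the targetattr of every ACP whose targetscope matches the candidate.
fn allowed_attrs(entry: &Entry, acps: &[Acp]) -> HashSet<&'static str> {
    let mut allowed = HashSet::new();
    for acp in acps.iter().filter(|a| (a.target_scope)(entry)) {
        allowed.extend(acp.target_attr.iter().copied());
    }
    allowed
}

fn main() {
    let group = Entry::from([("class", "group"), ("name", "staff")]);
    let acps = vec![Acp {
        target_scope: Box::new(|e: &Entry| e.get("class") == Some(&"group")),
        target_attr: HashSet::from(["name", "description"]),
    }];
    // The candidate survives only if the filter's attributes are a subset
    // of the union; the same union then drives attribute extraction.
    let filter_attrs = HashSet::from(["name", "description"]);
    assert!(filter_attrs.is_subset(&allowed_attrs(&group, &acps)));
}
```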
Delete Requirements
-------------------
A `delete` profile must define the `content` and `scope` of a delete.
An example:
> Alice should only be able to delete objects where the `memberOf` is
> `purgeable`, and where they are not marked as `protected`.
Create Requirements
-------------------
A `create` profile defines the following limits to what objects can be created, through the combination of filters and attributes.
An example:
> Alice should only be able to create objects where the `class` is `group`, and can
> only name the group, but they cannot add members to the group.
An example of a content requirement could be something like "the value of an attribute must pass a regular expression filter", as sketched below.
This could limit a user to creating a group of any name, except where the group's name contains "admin".
This is a contrived example which is also achievable with filtering, but more complex requirements are possible.
For example, we want to be able to limit the classes that someone *could* create on an object,
because classes are often used in security rules.
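As a sketch of such a rule, assuming the `regex` crate were used (the real implementation may choose a different mechanism entirely):

```rust
use regex::Regex;

// Reject group names that contain "admin" anywhere, case-insensitively.
// The pattern and rule are illustrative, not a real builtin control.
fn group_name_permitted(name: &str) -> bool {
    let deny = Regex::new(r"(?i)admin").expect("static pattern is valid");
    !deny.is_match(name)
}

fn main() {
    assert!(group_name_permitted("staff"));
    assert!(!group_name_permitted("cloud-admins"));
}
```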
Modify Requirements
-------------------
A `modify` profile defines the following limits:
- a filter for which objects can be modified,
- a set of attributes which can be modified.
A `modify` profile also defines a limit on the `modlist` actions.
For example: you may only be allowed to ensure `presence` of a value (modify supports the `present`, `removed` and `purged` actions).
Content requirements (see [Create Requirements](#create-requirements)) are out of scope at the moment.
An example:
> Alice should only be able to modify a user's password if that user is a member of the
> students group.
**Note:** `modify` does not imply `read` of the attribute. Care should be taken that we don't disclose
the current value in any error messages if the operation fails.
Targeting Requirements
-----------------------
The `target` of an access profile should be a filter defining the objects that this applies to.
The filters limiting what a profile acts on require a single special operation,
which is the concept of "targeting self".
For example: we could define a rule that says "members of group X are allowed self-write to the `mobilePhoneNumber` attribute".
An extension to the filter code could allow an extra filter enum of `self`, that would allow this
to operate correctly, and would consume the entry in the event as the target of "Self". This would
be best implemented as a compilation of `self -> eq(uuid, self.uuid)`.
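A minimal sketch of that compilation, with a hypothetical filter enum:

```rust
// Stand-in for the server's filter type; only the variants needed here.
#[derive(Debug, PartialEq)]
enum Filter {
    SelfUuid,
    Eq(&'static str, String),
}

// Resolve `self` against the identity in the event: self -> eq(uuid, self.uuid).
fn resolve_self(f: Filter, self_uuid: &str) -> Filter {
    match f {
        Filter::SelfUuid => Filter::Eq("uuid", self_uuid.to_string()),
        other => other,
    }
}

fn main() {
    let compiled = resolve_self(Filter::SelfUuid, "00000000-0000-0000-0000-000000000000");
    assert_eq!(
        compiled,
        Filter::Eq("uuid", "00000000-0000-0000-0000-000000000000".to_string())
    );
}
```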
Implementation Details
----------------------
CHANGE: Receiver should be a group, and should be single value/multivalue? Can *only* be a group.
Example profiles:
```yaml
search {
action: allow
receiver: Eq("memberof", "admins")
targetscope: Pres("class")
targetattr: legalName
targetattr: displayName
description: Allow admins to read all users names
}
search {
action: allow
receiver: Self
targetscope: Self
targetattr: homeAddress
description: Allow everyone to read only their own homeAddress
}
delete {
action: allow
receiver: Or(Eq("memberof", "admins"), Eq("memberof", "servicedesk"))
targetscope: Eq("memberof", "tempaccount")
description: Allow admins or servicedesk to delete any member of "temp accounts".
}
// This difference in targetscope behaviour could be justification to change the keyword here
// to prevent confusion.
create {
action: allow
receiver: Eq("name", "alice")
targetscope: And(Eq("class", "person"), Eq("location", "AU"))
createattr: location
createattr: legalName
createattr: mail
createclass: person
createclass: object
description: Allow alice to make new persons, only with class person+object, and only set
the attributes mail, location and legalName. The created object must conform to targetscope
}
modify {
action: allow
receiver: Eq("name", "claire")
targetscope: And(Eq("class", "group"), Eq("name", "admins"))
presentattr: member
description: Allow claire to promote people as members of the admins group.
}
modify {
action: allow
receiver: Eq("name", "claire")
targetscope: And(Eq("class", "person"), Eq("memberof", "students"))
presentattr: sshkeys
presentattr: class
targetclass: unixuser
description: Allow claire to modify persons in the students group, and to grant them the
class of unixuser (only this class can be granted!). Subsequently, she may then give
the sshkeys values as a modification.
}
modify {
action: allow
receiver: Eq("name", "alice")
targetscope: Eq("memberof", "students")
removedattr: sshkeys
description: Allow alice to purge or remove sshkeys from members of the students group,
but not add new ones
}
modify {
action: allow
receiver: Eq("name", "alice")
targetscope: Eq("memberof", "students")
removedattr: sshkeys
presentattr: sshkeys
description: Allow alice full control over the ssh keys attribute on members of students.
}
// This may not be valid: perhaps if <*>attr: is on modify/create, then targetclass must
// be set, else class is considered empty.
//
// This profile could in fact be an invalid example, because presentattr: class, but not
// targetclass, so nothing could be granted.
modify {
action: allow
receiver: Eq("name", "alice")
targetscope: Eq("memberof", "students")
presentattr: class
description: Allow alice to grant any class to members of students.
}
```
Formalised Schema
-----------------
A complete schema would be:
### Attributes
| Name | Single/Multi | Type | Description |
| --- | --- | --- | --- |
| acp_allow | single value | bool | This ACP is an allow (rather than deny) profile. |
| acp_enable | single value | bool | This ACP is enabled. |
| acp_receiver | single value | filter | Filter matching the entities this ACP applies to. |
| acp_targetscope | single value | filter | Filter matching the entries this ACP controls access to. |
| acp_search_attr | multi value | utf8 case insense | A list of attributes that can be searched. |
| acp_create_class | multi value | utf8 case insense | Object classes in which an object can be created. |
| acp_create_attr | multi value | utf8 case insense | Attribute entries that can be created. |
| acp_modify_removedattr | multi value | utf8 case insense | Attributes that may be removed or purged by a modify. |
| acp_modify_presentattr | multi value | utf8 case insense | Attributes that may be asserted present by a modify. |
| acp_modify_class | multi value | utf8 case insense | Classes that may be granted to an entry by a modify. |
### Classes
| Name | Must Have | May Have |
| --- | --- | --- |
| access_control_profile | `[acp_receiver, acp_targetscope]` | `[description, acp_allow]` |
| access_control_search | `[acp_search_attr]` | |
| access_control_delete | | |
| access_control_modify | | `[acp_modify_removedattr, acp_modify_presentattr, acp_modify_class]` |
| access_control_create | | `[acp_create_class, acp_create_attr]` |
**Important**: empty sets really mean empty sets!
The ACP code will assert that both `access_control_profile` *and* one of the `search/delete/modify/create`
classes exist on an ACP. An important factor of this design is the ability to *compose*
multiple ACPs into a single entry, allowing a combined `create/delete/modify` to exist! However, each one must
still list its respective actions to allow proper granularity.
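As a sketch of such a composed entry, in the JSON-constant style used elsewhere in the tree (the name, receiver and targetscope here are placeholders, not real builtin values):

```rust
// A single entry carrying the profile class plus two action classes. Each
// action class still lists its own attributes, so search and modify remain
// separately granular.
pub const JSON_EXAMPLE_COMPOSED_ACP: &str = r#"{
    "attrs": {
        "class": [
            "object",
            "access_control_profile",
            "access_control_search",
            "access_control_modify"
        ],
        "name": ["example_composed_acp"],
        "acp_receiver": [
            "{\"eq\": [\"memberof\", \"example_admins\"]}"
        ],
        "acp_targetscope": [
            "{\"eq\": [\"class\", \"group\"]}"
        ],
        "acp_search_attr": ["name", "description"],
        "acp_modify_presentattr": ["description"]
    }
}"#;
```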
"Search" Application
------------------
The set of access controls is checked, and the subset where the receiver matches the currently identified
user is collected. These are then added to the user's requested search as:
```
And(<User Search Request>, Or(<Set of Search Profile Filters>))
```
In this manner, the search security is easily applied: if the target entries do not conform to one of the
required search profile filters, the outer `And` condition is not satisfied and no results are returned.
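A sketch of that wrapping, with a hypothetical filter type:

```rust
// Stand-in filter type; the server's real filter enum is richer than this.
#[derive(Debug)]
enum Filter {
    Eq(&'static str, &'static str),
    And(Vec<Filter>),
    Or(Vec<Filter>),
}

// Wrap the user's request so only entries inside at least one applicable
// search profile's targetscope can ever match.
fn apply_search_access(request: Filter, profile_scopes: Vec<Filter>) -> Filter {
    Filter::And(vec![request, Filter::Or(profile_scopes)])
}

fn main() {
    let secured = apply_search_access(
        Filter::Eq("name", "william"),
        vec![Filter::Eq("class", "group"), Filter::Eq("class", "user")],
    );
    // And([Eq("name", "william"), Or([Eq("class", "group"), Eq("class", "user")])])
    println!("{:?}", secured);
}
```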
Once complete, in the translation of the entry -> proto_entry, each access control and its allowed
set of attrs has to be checked to determine what of that entry can be displayed. Consider there are
three entries: A, B, C. One ACI allows read of "name" on A and B, and another allows read of "mail" on
B and C. The correct behaviour is then:
```
A: name
B: name, mail
C: mail
```
So this means that the `entry -> proto entry` part is likely the most expensive part of the access
control operation, but also one of the most important. It may be possible to compile to some kind
of faster method, but initially a simple version is needed.
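A sketch of that reduction for entry B of the example above, with plain maps standing in for entries:

```rust
use std::collections::{BTreeMap, HashSet};

// Keep only the attributes that the union of matching ACPs allows to be read.
fn reduce_entry(
    entry: &BTreeMap<&str, &str>,
    allowed: &HashSet<&str>,
) -> BTreeMap<String, String> {
    entry
        .iter()
        .filter(|(attr, _)| allowed.contains(*attr))
        .map(|(attr, value)| (attr.to_string(), value.to_string()))
        .collect()
}

fn main() {
    let b = BTreeMap::from([("name", "b"), ("mail", "b@example.com"), ("secret", "x")]);
    // Both ACIs match B, so the union of allowed reads is {name, mail}.
    let allowed_b = HashSet::from(["name", "mail"]);
    let reduced = reduce_entry(&b, &allowed_b);
    assert!(reduced.contains_key("name") && reduced.contains_key("mail"));
    assert!(!reduced.contains_key("secret"));
}
```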
"Delete" Application
------------------
Delete is similar to search, however there is the risk that the user may say something like:
```
Pres("class")
```
Were we to approach this like search, the result would be "everything the identified user
is allowed to delete is deleted". A consideration here is that `Pres("class")` on its own would match "all"
objects in the directory, but with the access control applied, the deletion would be limited to the
set of allowed deletes.
In one sense this is correct behaviour - they were allowed to delete everything they asked to
delete. In another sense it's not valid: the request was broad, and they were not allowed
to delete everything they requested.
The possible abuse vector here is that an attacker could then use delete requests to enumerate the
existence of entries in the database that they do not have access to. This requires someone to have
the delete privilege, which is in itself a very high level of access, so this risk may be minimal.
So the choices are:
1. Treat it like search and allow the user to delete what they are allowed to delete,
but ignore other objects
2. Deny the request because their delete was too broad, and they must specify a valid deletion request.
Option #2 seems more correct because the `delete` request is an explicit request, not a request where
you want partial results. Imagine someone wants to delete users A and B at the same time, but only
has access to A. They want this request to fail so they KNOW B was not deleted, rather than have it
"succeed" while B still exists with a partial delete status.
However, a possible issue is that with Option #2, a delete request of
`And(Eq(attr, allowed_attribute), Eq(attr, denied))` that is rejected may indicate the presence of the
`denied` attribute. So option #1 may help in preventing a security risk of information disclosure.
<!-- TODO
@yaleman: not always, it could indicate that the attribute doesn't exist so it's an invalid filter, but
that would depend if the response was "invalid" in both cases, or "invalid" / "refused"
-->
This is also a concern for modification, where the modification attempt may or may not
fail depending on the entries and if you can/can't see them.
**IDEA:** You can only `delete`/`modify` within the read scope you have. If you can't
read it (based on the read rules of `search`), you can't `delete` it. This is in addition to the filter
rules of the `delete` applying as well. So performing a `delete` of `Pres(class)` will only delete
within your `read` scope and will never disclose if you are denied access.
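A sketch of the idea, under the assumption that the read scope is itself expressible as a filter:

```rust
// Stand-in filter type for illustration only.
#[derive(Debug)]
enum Filter {
    Pres(&'static str),
    Eq(&'static str, &'static str),
    And(Vec<Filter>),
}

// A delete request is bounded by the requester's read scope AND the delete
// scope, so a broad request can neither touch nor disclose unreadable entries.
fn constrain_delete(request: Filter, read_scope: Filter, delete_scope: Filter) -> Filter {
    Filter::And(vec![read_scope, delete_scope, request])
}

fn main() {
    let secured = constrain_delete(
        Filter::Pres("class"),
        Filter::Eq("memberof", "servicedesk_visible"),
        Filter::Eq("memberof", "tempaccount"),
    );
    println!("{:?}", secured);
}
```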
<!-- TODO
@yaleman: This goes back to the commentary on Option #2 and feels icky like SQL's `DELETE FROM <table>` just deleting everything. It's more complex from the client - you have to search for a set of things to delete - then delete them.
Explicitly listing the objects you want to delete feels.... way less bad. This applies to modifies too.  😁
-->
"Create" Application
------------------
Create seems like the easiest to apply. Ensure that only the attributes in `createattr` are in the
`createevent`, ensure the classes only contain the set in `createclass`, then finally apply
`filter_no_index` to the entry. If all of this passes, the create is allowed.
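A sketch of that whole-ACI check, with simplified hypothetical types (`filter_no_index` is stubbed as a plain scope test):

```rust
use std::collections::HashSet;

// A simplified create profile: allowed attributes, allowed classes, and a
// targetscope test standing in for filter_no_index.
struct CreateAcp {
    create_attr: HashSet<&'static str>,
    create_class: HashSet<&'static str>,
    target_scope: fn(&HashSet<&'static str>) -> bool,
}

// The WHOLE profile must pass: attrs and classes are subsets, scope matches.
fn create_allowed(
    acp: &CreateAcp,
    attrs: &HashSet<&'static str>,
    classes: &HashSet<&'static str>,
) -> bool {
    attrs.is_subset(&acp.create_attr)
        && classes.is_subset(&acp.create_class)
        && (acp.target_scope)(classes)
}

fn main() {
    let group_acp = CreateAcp {
        create_attr: HashSet::from(["name", "member"]),
        create_class: HashSet::from(["object", "group"]),
        target_scope: |classes| classes.contains("group"),
    };
    let attrs = HashSet::from(["name"]);
    // Allowed: a named group. Denied: a named user, despite `name` being allowed.
    assert!(create_allowed(&group_acp, &attrs, &HashSet::from(["object", "group"])));
    assert!(!create_allowed(&group_acp, &attrs, &HashSet::from(["object", "user"])));
}
```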
A key point is that there is no union of `create` ACIs - the WHOLE ACI must pass, not parts of
multiple. This means that if one control "allows creating group with member" and another "allows creating
user with name", creating a group with `name` is not allowed - despite your ability to create
an entry with `name`, the classes don't match. This way, the administrator of the service can define
create controls with specific intent for how they will be used, without the risk of two
controls combining to cause unintended effects (`users` that are also `groups`, or allowing invalid values).
An important consideration is how to handle overlapping ACIs. If two ACIs *could* match the create,
should we enforce that both conditions are upheld? Or does a single upheld ACI allow the create?
In some cases it may not be possible to satisfy both, and that would block creates. The intent
of the access profile is that "something like this CAN" be created, so I believe that provided
a single control passes, the create should be allowed.
"Modify" Application
------------------
Modify is similar to Create, however we specifically filter on the `modlist` actions of `present`,
`removed` or `purged`. The rules of create still apply: provided all requirements
of the modify are permitted by at least one profile, the change is allowed.
A key difference is that if the modify ACP lists multiple `presentattr` types, the modify request
is still valid if it only modifies one of them. IE the ACP may say `presentattr: name, email`, while we
only attempt to modify `email`.
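A sketch of those subset checks, mirroring the shape of the `is_subset` tests in the access module (simplified to plain sets):

```rust
use std::collections::HashSet;

// The requested present and removed attribute sets must each be subsets of
// what the matching profiles allow; requesting fewer attributes is fine.
fn modify_allowed(
    requested_pres: &HashSet<&str>,
    requested_rem: &HashSet<&str>,
    allowed_pres: &HashSet<&str>,
    allowed_rem: &HashSet<&str>,
) -> bool {
    requested_pres.is_subset(allowed_pres) && requested_rem.is_subset(allowed_rem)
}

fn main() {
    let allowed_pres = HashSet::from(["name", "email"]);
    let allowed_rem = HashSet::from(["email"]);
    // Only asserting presence of `email` is valid, even though `name` is
    // also listed in the profile.
    let ok = modify_allowed(
        &HashSet::from(["email"]),
        &HashSet::new(),
        &allowed_pres,
        &allowed_rem,
    );
    assert!(ok);
}
```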
Considerations
--------------
* When should access controls be applied? During an operation, we only validate schema after
pre* Plugin application, so likely it has to be "at that point", to ensure schema-based
validity of the entries that are allowed to be changed.
* Self filter keyword should compile to `eq("uuid", "....")`. When do we do this and how?
* `memberof` could take `name` or `uuid`, we need to be able to resolve this correctly, but this is
likely an issue in `memberof` which needs to be addressed, ie `memberof uuid` vs `memberof attr`.
* Content controls in `create` and `modify` will be important to get right to avoid the security issues
of LDAP access controls. Given that `class` has special importance, it's only right to give it extra
consideration in these controls.
* In the future when `recyclebin` is added, a `re-animation` access profile should be created allowing
revival of entries given certain conditions of the entry we are attempting to revive. A service-desk user
should not be able to revive a deleted high-privilege user.


@ -9,11 +9,12 @@ The intent of the Kanidm project is to:
* Enable integrations to systems and services so they can authenticate accounts.
* Make system, network, application and web authentication easy and accessible.
> ![Kanidm Alert](/images/kani-alert.png) **NOTICE:**
> This is a pre-release project. While all effort has been made to ensure no data loss
> or security flaws, you should still be careful when using this in your environment.

> {{#template
> templates/kani-warning.md
> title=NOTICE
> text=This is a pre-release project. While all effort has been made to ensure no data loss or security flaws, you should still be careful when using this in your environment.
> }}

## Library documentation


@ -0,0 +1,9 @@
<table>
<tr>
<td rowspan=2><img src="/images/kani-warning.png" alt="Kani Warning" /></td>
<td><strong>[[#title]]</strong></td>
</tr>
<tr>
<td>[[#text]]</td>
</tr>
</table>


@ -1883,6 +1883,18 @@ impl KanidmClient {
r.and_then(|mut v| v.pop().ok_or(ClientError::EmptyResponse))
}
/// Sets the domain display name using a PUT request
pub async fn idm_domain_set_display_name(
&self,
new_display_name: &str,
) -> Result<(), ClientError> {
self.perform_put_request(
"/v1/domain/_attr/domain_display_name",
vec![new_display_name.to_string()],
)
.await
}
pub async fn idm_domain_get_ssid(&self) -> Result<String, ClientError> {
self.perform_get_request("/v1/domain/_attr/domain_ssid")
.await


@ -4,12 +4,15 @@ use serde::{Deserialize, Serialize};
use std::fmt;
use std::str::FromStr;
/// This is used in user-facing CLIs to set the formatting for output,
/// and defaults to text.
#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum ConsoleOutputMode {
Text,
JSON,
}
impl Default for ConsoleOutputMode {
fn default() -> Self {
ConsoleOutputMode::Text
@ -18,7 +21,18 @@ impl Default for ConsoleOutputMode {
impl FromStr for ConsoleOutputMode {
type Err = &'static str;
/// This can be safely unwrap'd because it'll always return a default of text
/// ```
/// use kanidm_proto::messages::ConsoleOutputMode;
///
/// let mode: ConsoleOutputMode = "🦀".into();
/// assert_eq!(ConsoleOutputMode::Text, mode);
/// let mode: ConsoleOutputMode = "".into();
/// assert_eq!(ConsoleOutputMode::Text, mode);
///
/// let mode: ConsoleOutputMode = "json".into();
/// assert_eq!(ConsoleOutputMode::JSON, mode);
/// ```
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"json" => Ok(ConsoleOutputMode::JSON),
@ -70,7 +84,7 @@ impl From<String> for ConsoleOutputMode {
}
}
#[derive(Debug, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum MessageStatus {
Failure,
@ -111,6 +125,24 @@ impl Default for AccountChangeMessage {
}
/// This outputs in either JSON or Text depending on the output_mode setting
/// ```
/// use std::fmt::format;
/// use kanidm_proto::messages::*;
/// let mut msg = AccountChangeMessage::default();
/// msg.action=String::from("cake_eating");
/// msg.src_user=String::from("Kani");
/// msg.dest_user=String::from("Krabby");
/// msg.result=String::from("It was amazing");
/// assert_eq!(msg.status, MessageStatus::Success);
///
/// let expected_result = "success - cake_eating for user Krabby: It was amazing";
/// assert_eq!(format!("{}", msg), expected_result);
///
/// msg.output_mode = ConsoleOutputMode::JSON;
/// let expected_result = "{\"action\":\"cake_eating\",\"result\":\"It was amazing\",\"status\":\"success\",\"src_user\":\"Kani\",\"dest_user\":\"Krabby\"}";
/// assert_eq!(format!("{}", msg), expected_result);
///
/// ```
impl fmt::Display for AccountChangeMessage {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.output_mode {
@ -127,3 +159,55 @@ impl fmt::Display for AccountChangeMessage {
}
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct BasicMessage {
#[serde(skip_serializing)]
pub output_mode: ConsoleOutputMode,
pub action: String,
pub result: String,
pub status: MessageStatus,
}
impl Default for BasicMessage {
fn default() -> Self {
BasicMessage {
output_mode: ConsoleOutputMode::Text,
action: String::from(""),
result: String::from(""),
status: MessageStatus::Success,
}
}
}
/// This outputs in either JSON or Text depending on the output_mode setting
/// ```
/// use std::fmt::format;
/// use kanidm_proto::messages::*;
/// let mut msg = BasicMessage::default();
/// msg.action=String::from("cake_eating");
/// msg.result=String::from("It was amazing");
/// assert_eq!(msg.status, MessageStatus::Success);
///
/// let expected_result = "success - cake_eating: It was amazing";
/// assert_eq!(format!("{}", msg), expected_result);
///
/// msg.output_mode = ConsoleOutputMode::JSON;
/// let expected_result = "{\"action\":\"cake_eating\",\"result\":\"It was amazing\",\"status\":\"success\"}";
/// assert_eq!(format!("{}", msg), expected_result);
///
/// ```
impl fmt::Display for BasicMessage {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.output_mode {
ConsoleOutputMode::JSON => write!(
f,
"{}",
serde_json::to_string(self).unwrap_or(format!("{:?}", self)) // if it fails to JSON serialize, just debug-dump it
),
ConsoleOutputMode::Text => {
write!(f, "{} - {}: {}", self.status, self.action, self.result,)
}
}
}
}


@ -21,6 +21,7 @@ pub enum SchemaError {
MissingMustAttribute(Vec<String>),
InvalidAttribute(String),
InvalidAttributeSyntax(String),
AttributeNotValidForClass(String),
EmptyFilter,
Corrupted,
PhantomAttribute(String),
@ -776,6 +777,7 @@ impl fmt::Display for TotpAlgo {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TotpSecret {
pub accountname: String,
/// User-facing name of the system, issuer of the TOTP
pub issuer: String,
pub secret: Vec<u8>,
pub algo: TotpAlgo,


@ -9,7 +9,7 @@ use kanidm_client::ClientError::Http as ClientErrorHttp;
use kanidm_client::KanidmClient;
use kanidm_proto::messages::{AccountChangeMessage, ConsoleOutputMode, MessageStatus};
use kanidm_proto::v1::OperationError::{InvalidAttribute, PasswordQuality};
use kanidm_proto::v1::{CUIntentToken, CURegState, CUSessionToken, CUStatus, TotpSecret};
use qrcode::{render::unicode, QrCode};
use std::fmt::{self, Debug};
use std::str::FromStr;
@ -661,7 +661,7 @@ impl FromStr for CUAction {
async fn totp_enroll_prompt(session_token: &CUSessionToken, client: &KanidmClient) {
// First, submit the server side gen.
let totp_secret: TotpSecret = match client
.idm_account_credential_update_init_totp(session_token)
.await
{


@ -3,12 +3,27 @@ use crate::DomainOpt;
impl DomainOpt {
pub fn debug(&self) -> bool {
match self {
DomainOpt::SetDomainDisplayName(copt) => copt.copt.debug,
DomainOpt::Show(copt) | DomainOpt::ResetTokenKey(copt) => copt.debug,
}
}
pub async fn exec(&self) {
match self {
DomainOpt::SetDomainDisplayName(opt) => {
eprintln!(
"Attempting to set the domain's display name to: {:?}",
opt.new_display_name
);
let client = opt.copt.to_client().await;
match client
.idm_domain_set_display_name(&opt.new_display_name)
.await
{
Ok(_) => println!("Success"),
Err(e) => eprintln!("{:?}", e),
}
}
DomainOpt::Show(copt) => {
let client = copt.to_client().await;
match client.idm_domain_get().await {


@ -474,10 +474,21 @@ pub enum Oauth2Opt {
DisableLegacyCrypto(Named),
}
#[derive(Args, Debug)]
pub struct OptSetDomainDisplayName{
#[clap(flatten)]
copt: CommonOpt,
#[clap(name = "new_display_name")]
new_display_name: String,
}
#[derive(Debug, Subcommand)]
pub enum DomainOpt {
#[clap(name = "set_domain_display_name")]
/// Set the domain display name
SetDomainDisplayName(OptSetDomainDisplayName),
#[clap(name = "show")]
/// Show information about this system's domain
Show(CommonOpt),
#[clap(name = "reset_token_key")]
/// Reset this domain token signing key. This will cause all user sessions to be


@ -85,7 +85,7 @@ fn main() {
Shell::Zsh,
&mut UnixdStatusOpt::command(),
"kanidm_unixd_status",
comp_dir,
)
.ok();
}


@ -16,6 +16,7 @@ extern crate libnss;
extern crate lazy_static;
use kanidm_unix_common::client_sync::call_daemon_blocking;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse, NssGroup, NssUser};
@ -29,7 +30,7 @@ libnss_passwd_hooks!(kanidm, KanidmPasswd);
impl PasswdHooks for KanidmPasswd {
fn get_all_entries() -> Response<Vec<Passwd>> {
let cfg =
match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH) {
Ok(c) => c,
Err(_) => {
return Response::Unavail;
@ -47,7 +48,7 @@ impl PasswdHooks for KanidmPasswd {
fn get_entry_by_uid(uid: libc::uid_t) -> Response<Passwd> {
let cfg =
match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH) {
Ok(c) => c,
Err(_) => {
return Response::Unavail;
@ -67,7 +68,7 @@ impl PasswdHooks for KanidmPasswd {
fn get_entry_by_name(name: String) -> Response<Passwd> {
let cfg =
match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH) {
Ok(c) => c,
Err(_) => {
return Response::Unavail;
@ -92,7 +93,7 @@ libnss_group_hooks!(kanidm, KanidmGroup);
impl GroupHooks for KanidmGroup {
fn get_all_entries() -> Response<Vec<Group>> {
let cfg =
match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH) {
Ok(c) => c,
Err(_) => {
return Response::Unavail;
@ -110,7 +111,7 @@ impl GroupHooks for KanidmGroup {
fn get_entry_by_gid(gid: libc::gid_t) -> Response<Group> {
let cfg =
match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH) {
Ok(c) => c,
Err(_) => {
return Response::Unavail;
@ -130,7 +131,7 @@ impl GroupHooks for KanidmGroup {
fn get_entry_by_name(name: String) -> Response<Group> {
let cfg =
match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH) {
Ok(c) => c,
Err(_) => {
return Response::Unavail;


@ -23,6 +23,7 @@ use std::convert::TryFrom;
use std::ffi::CStr;
// use std::os::raw::c_char;
use kanidm_unix_common::client_sync::call_daemon_blocking;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
@ -56,7 +57,7 @@ impl TryFrom<&Vec<&CStr>> for Options {
fn get_cfg() -> Result<KanidmUnixdConfig, PamResultCode> {
KanidmUnixdConfig::new()
.read_options_from_optional_config(DEFAULT_CONFIG_PATH)
.map_err(|_| PamResultCode::PAM_SERVICE_ERR)
}


@ -18,6 +18,7 @@ use clap::Parser;
use futures::executor::block_on;
use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
@ -33,7 +34,7 @@ async fn main() {
debug!("Starting cache invalidate tool ..."); debug!("Starting cache invalidate tool ...");
let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config("/etc/kanidm/unixd") let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH)
{ {
Ok(c) => c, Ok(c) => c,
Err(_e) => { Err(_e) => {


@ -18,6 +18,7 @@ use clap::Parser;
use futures::executor::block_on;
use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
@ -33,7 +34,7 @@ async fn main() {
debug!("Starting cache invalidate tool ..."); debug!("Starting cache invalidate tool ...");
let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config("/etc/kanidm/unixd") let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH)
{ {
Ok(c) => c, Ok(c) => c,
Err(_e) => { Err(_e) => {


@ -1,5 +1,6 @@
use crate::unix_config::{HomeAttr, UidAttr};
pub const DEFAULT_CONFIG_PATH: &str = "/etc/kanidm/unixd";
pub const DEFAULT_SOCK_PATH: &str = "/var/run/kanidm-unixd/sock";
pub const DEFAULT_TASK_SOCK_PATH: &str = "/var/run/kanidm-unixd/task_sock";
pub const DEFAULT_DB_PATH: &str = "/var/cache/kanidm-unixd/kanidm.cache.db";


@ -39,6 +39,7 @@ use tokio_util::codec::{Decoder, Encoder};
use kanidm_client::KanidmClientBuilder;
use kanidm_unix_common::cache::CacheLayer;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse, TaskRequest, TaskResponse};
@ -422,7 +423,7 @@ async fn main() {
}
}
let unixd_path = Path::new(DEFAULT_CONFIG_PATH);
let unixd_path_str = match unixd_path.to_str() {
Some(cps) => cps,
None => {


@ -20,6 +20,7 @@ use std::path::PathBuf;
// use futures::executor::block_on;
use kanidm_unix_common::client_sync::call_daemon_blocking;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
@ -34,7 +35,7 @@ fn main() {
trace!("Starting cache status tool ..."); trace!("Starting cache status tool ...");
let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config("/etc/kanidm/unixd") let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH)
{ {
Ok(c) => c, Ok(c) => c,
Err(_e) => { Err(_e) => {


@ -19,6 +19,7 @@ use std::path::PathBuf;
use futures::executor::block_on;
use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
@ -34,7 +35,7 @@ async fn main() {
debug!("Starting authorized keys tool ..."); debug!("Starting authorized keys tool ...");
let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config("/etc/kanidm/unixd") let cfg = match KanidmUnixdConfig::new().read_options_from_optional_config(DEFAULT_CONFIG_PATH)
{ {
Ok(c) => c, Ok(c) => c,
Err(e) => { Err(e) => {


@ -32,6 +32,7 @@ use tokio::time;
use tokio_util::codec::Framed;
use tokio_util::codec::{Decoder, Encoder};
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{HomeDirectoryInfo, TaskRequest, TaskResponse};
@ -214,7 +215,7 @@ async fn main() {
tracing_subscriber::fmt::init();
let unixd_path = Path::new(DEFAULT_CONFIG_PATH);
let unixd_path_str = match unixd_path.to_str() {
Some(cps) => cps,
None => {


@ -7,6 +7,7 @@ use clap::Parser;
use futures::executor::block_on;
use kanidm_unix_common::client::call_daemon;
use kanidm_unix_common::constants::DEFAULT_CONFIG_PATH;
use kanidm_unix_common::unix_config::KanidmUnixdConfig;
use kanidm_unix_common::unix_proto::{ClientRequest, ClientResponse};
@ -29,7 +30,7 @@ async fn main() {
debug!("Starting pam auth tester tool ..."); debug!("Starting pam auth tester tool ...");
let cfg = KanidmUnixdConfig::new() let cfg = KanidmUnixdConfig::new()
.read_options_from_optional_config("/etc/kanidm/unixd") .read_options_from_optional_config(DEFAULT_CONFIG_PATH)
.expect("Failed to parse /etc/kanidm/unixd"); .expect("Failed to parse /etc/kanidm/unixd");
let password = rpassword::prompt_password("Enter unix password: ").unwrap(); let password = rpassword::prompt_password("Enter unix password: ").unwrap();


@ -83,10 +83,6 @@ impl KanidmdOpt {
match self {
KanidmdOpt::Server(sopt)
| KanidmdOpt::ConfigTest(sopt)
| KanidmdOpt::Verify(sopt)
| KanidmdOpt::Reindex(sopt)
| KanidmdOpt::Vacuum(sopt)
| KanidmdOpt::DomainChange(sopt)
| KanidmdOpt::DbScan {
commands: DbScanOpt::ListIndexes(sopt),
}
@ -96,8 +92,12 @@ impl KanidmdOpt {
| KanidmdOpt::DbScan {
commands: DbScanOpt::ListIndexAnalysis(sopt),
} => &sopt,
KanidmdOpt::Database {
commands: DbCommands::Backup(bopt),
} => &bopt.commonopts,
KanidmdOpt::Database {
commands: DbCommands::Restore(ropt),
} => &ropt.commonopts,
KanidmdOpt::RecoverAccount(ropt) => &ropt.commonopts,
KanidmdOpt::DbScan {
commands: DbScanOpt::ListIndex(dopt),
@ -106,6 +106,18 @@ impl KanidmdOpt {
KanidmdOpt::DbScan {
commands: DbScanOpt::GetId2Entry(dopt),
} => &dopt.commonopts,
KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::DomainChange(sopt),
} => &sopt,
KanidmdOpt::Database {
commands: DbCommands::Verify(sopt),
}
| KanidmdOpt::Database {
commands: DbCommands::Reindex(sopt),
} => &sopt,
KanidmdOpt::Database {
commands: DbCommands::Vacuum(copt),
} => &copt,
}
}
}
@ -340,7 +352,9 @@ async fn main() {
eprintln!("stopped 🛑 "); eprintln!("stopped 🛑 ");
} }
} }
KanidmdOpt::Database {
commands: DbCommands::Backup(bopt),
} => {
eprintln!("Running in backup mode ...");
let p = match bopt.path.to_str() {
Some(p) => p,
@ -351,7 +365,9 @@ async fn main() {
};
backup_server_core(&config, p);
}
KanidmdOpt::Database {
commands: DbCommands::Restore(ropt),
} => {
eprintln!("Running in restore mode ...");
let p = match ropt.path.to_str() {
Some(p) => p,
@ -362,7 +378,9 @@ async fn main() {
};
restore_server_core(&config, p);
}
KanidmdOpt::Database {
commands: DbCommands::Verify(_vopt),
} => {
eprintln!("Running in db verification mode ...");
verify_server_core(&config);
}
@ -370,18 +388,12 @@ async fn main() {
eprintln!("Running account recovery ..."); eprintln!("Running account recovery ...");
recover_account_core(&config, &raopt.name); recover_account_core(&config, &raopt.name);
} }
KanidmdOpt::Reindex(_copt) => { KanidmdOpt::Database {
commands: DbCommands::Reindex(_copt),
} => {
eprintln!("Running in reindex mode ..."); eprintln!("Running in reindex mode ...");
reindex_server_core(&config); reindex_server_core(&config);
} }
KanidmdOpt::Vacuum(_copt) => {
eprintln!("Running in vacuum mode ...");
vacuum_server_core(&config);
}
KanidmdOpt::DomainChange(_dopt) => {
eprintln!("Running in domain name change mode ... this may take a long time ...");
domain_rename_core(&config);
}
KanidmdOpt::DbScan {
commands: DbScanOpt::ListIndexes(_),
} => {
@ -412,5 +424,17 @@ async fn main() {
eprintln!("👀 db scan - get id2 entry - {}", dopt.id); eprintln!("👀 db scan - get id2 entry - {}", dopt.id);
dbscan_get_id2entry_core(&config, dopt.id); dbscan_get_id2entry_core(&config, dopt.id);
} }
KanidmdOpt::DomainSettings {
commands: DomainSettingsCmds::DomainChange(_dopt),
} => {
eprintln!("Running in domain name change mode ... this may take a long time ...");
domain_rename_core(&config);
}
KanidmdOpt::Database {
commands: DbCommands::Vacuum(_copt),
} => {
eprintln!("Running in vacuum mode ...");
vacuum_server_core(&config);
}
}
}


@ -38,6 +38,33 @@ struct RecoverAccountOpt {
commonopts: CommonOpt,
}
#[derive(Debug, Subcommand)]
enum DomainSettingsCmds {
#[clap(name = "rename")]
/// Change the IDM domain name
DomainChange(CommonOpt),
}
#[derive(Debug, Subcommand)]
enum DbCommands {
#[clap(name = "vacuum")]
/// Vacuum the database to reclaim space or change db_fs_type/page_size (offline)
Vacuum(CommonOpt),
#[clap(name = "backup")]
/// Backup the database content (offline)
Backup(BackupOpt),
#[clap(name = "restore")]
/// Restore the database content (offline)
Restore(RestoreOpt),
#[clap(name = "verify")]
/// Verify database and entity consistency.
Verify(CommonOpt),
#[clap(name = "reindex")]
/// Reindex the database (offline)
Reindex(CommonOpt),
}
#[derive(Debug, Args)]
struct DbScanListIndex {
/// The name of the index to list
@ -102,33 +129,27 @@ enum KanidmdOpt {
#[clap(name = "configtest")]
/// Test the IDM Server configuration, without starting network listeners.
ConfigTest(CommonOpt),
#[clap(name = "backup")]
/// Backup the database content (offline)
Backup(BackupOpt),
#[clap(name = "restore")]
/// Restore the database content (offline)
Restore(RestoreOpt),
#[clap(name = "verify")]
/// Verify database and entity consistency.
Verify(CommonOpt),
#[clap(name = "recover_account")]
/// Recover an account's password
RecoverAccount(RecoverAccountOpt),
// #[clap(name = "reset_server_id")]
// ResetServerId(CommonOpt),
#[clap(name = "reindex")]
/// Reindex the database (offline)
Reindex(CommonOpt),
#[clap(name = "vacuum")]
/// Vacuum the database to reclaim space or change db_fs_type/page_size (offline)
Vacuum(CommonOpt),
#[clap(name = "domain_name_change")]
/// Change the IDM domain name
DomainChange(CommonOpt),
#[clap(name = "db_scan")]
/// Inspect the internal content of the database datastructures.
DbScan {
#[clap(subcommand)]
commands: DbScanOpt,
},
/// Database maintenance, backups, restoration etc.
#[clap(name = "database")]
Database {
#[clap(subcommand)]
commands: DbCommands,
},
/// Change domain settings
#[clap(name = "domain")]
DomainSettings {
#[clap(subcommand)]
commands: DomainSettingsCmds,
},
}


@ -899,15 +899,27 @@ pub trait AccessControlsTransaction<'a> {
// is already checked above.
if !requested_pres.is_subset(&allowed_pres) {
security_access!("requested_pres is not a subset of allowed");
security_access!(
"requested_pres: {:?} !⊆ allowed: {:?}",
requested_pres,
allowed_pres
);
false
} else if !requested_rem.is_subset(&allowed_rem) {
security_access!("requested_rem is not a subset of allowed");
security_access!(
"requested_rem: {:?} !⊆ allowed: {:?}",
requested_rem,
allowed_rem
);
false
} else if !requested_classes.is_subset(&allowed_classes) {
security_access!("requested_classes is not a subset of allowed");
security_access!(
"requested_classes: {:?} !⊆ allowed: {:?}",
requested_classes,
allowed_classes
);
false
} else {
security_access!("passed pres, rem, classes check.");


@ -1304,6 +1304,20 @@ impl QueryServerReadV1 {
res
}
#[instrument(
level = "trace",
name = "domain_display_name",
skip(self, eventid)
fields(uuid = ?eventid)
)]
pub async fn get_domain_display_name(&self, eventid: Uuid) -> String {
let idms_prox_read = self.idms.proxy_read_async().await;
let res = spanned!("actors::v1_read::handle<DomainDisplayName>", {
idms_prox_read.qs_read.get_domain_display_name().to_string()
});
res
}
#[instrument(
level = "trace",
name = "auth_valid",


@ -71,7 +71,7 @@ fn from_vec_dbval1(attr_val: Vec<DbValueV1>) -> Result<DbValueSetV2, OperationEr
}
})
.collect();
vs.map(DbValueSetV2::Utf8)
}
Some(DbValueV1::Iutf8(_)) => {
let vs: Result<Vec<_>, _> = viter


@ -222,7 +222,11 @@ pub trait IdlSqliteTransaction {
) -> Result<Option<IDLBitRange>, OperationError> {
spanned!("be::idl_sqlite::get_idl", {
if !(self.exists_idx(attr, itype)?) {
filter_error!(
"IdlSqliteTransaction: Index {:?} {:?} not found",
itype,
attr
);
return Ok(None);
}
// The table exists - lets now get the actual index itself.


@ -1450,18 +1450,21 @@ impl<'a> BackendWriteTransaction<'a> {
}
}
/// This generates a new domain UUID and stores it into the database,
/// returning the new UUID
fn reset_db_d_uuid(&self) -> Result<Uuid, OperationError> {
let nsid = Uuid::new_v4();
self.get_idlayer().write_db_d_uuid(nsid)?;
Ok(nsid)
}
/// This pulls the domain UUID from the database
pub fn get_db_d_uuid(&self) -> Uuid {
#[allow(clippy::expect_used)]
match self
.get_idlayer()
.get_db_d_uuid()
.expect("DBLayer Error retrieving Domain UUID!!!")
{
Some(d_uuid) => d_uuid,
None => self.reset_db_d_uuid().expect("Failed to regenerate D_UUID"),


@ -100,6 +100,7 @@ pub struct Configuration {
impl fmt::Display for Configuration {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "address: {}, ", self.address)
.and_then(|_| write!(f, "domain: {}, ", self.domain))
.and_then(|_| match &self.ldapaddress {
Some(la) => write!(f, "ldap address: {}, ", la),
None => write!(f, "ldap address: disabled, "),


@ -970,20 +970,23 @@ pub const JSON_IDM_ACP_DOMAIN_ADMIN_PRIV_V1: &str = r#"{
"{\"and\": [{\"eq\": [\"uuid\",\"00000000-0000-0000-0000-ffffff000025\"]}, {\"andnot\": {\"or\": [{\"eq\": [\"class\", \"tombstone\"]}, {\"eq\": [\"class\", \"recycled\"]}]}}]}" "{\"and\": [{\"eq\": [\"uuid\",\"00000000-0000-0000-0000-ffffff000025\"]}, {\"andnot\": {\"or\": [{\"eq\": [\"class\", \"tombstone\"]}, {\"eq\": [\"class\", \"recycled\"]}]}}]}"
], ],
"acp_search_attr": [ "acp_search_attr": [
"name", "domain_display_name",
"uuid",
"domain_name", "domain_name",
"domain_ssid", "domain_ssid",
"domain_uuid", "domain_uuid",
"es256_private_key_der",
"fernet_private_key_str", "fernet_private_key_str",
"es256_private_key_der" "name",
"uuid"
], ],
"acp_modify_removedattr": [ "acp_modify_removedattr": [
"domain_display_name",
"domain_ssid", "domain_ssid",
"fernet_private_key_str", "es256_private_key_der",
"es256_private_key_der" "fernet_private_key_str"
], ],
"acp_modify_presentattr": [ "acp_modify_presentattr": [
"domain_display_name",
"domain_ssid" "domain_ssid"
] ]
} }


@ -1,3 +1,6 @@
///! Constant Entries for the IDM
/// Builtin System Admin account.
pub const JSON_ADMIN_V1: &str = r#"{ pub const JSON_ADMIN_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["account", "memberof", "object"], "class": ["account", "memberof", "object"],
@ -8,6 +11,7 @@ pub const JSON_ADMIN_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin IDM Admin account.
pub const JSON_IDM_ADMIN_V1: &str = r#"{ pub const JSON_IDM_ADMIN_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["account", "memberof", "object"], "class": ["account", "memberof", "object"],
@ -18,6 +22,7 @@ pub const JSON_IDM_ADMIN_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin IDM Administrators Group.
pub const JSON_IDM_ADMINS_V1: &str = r#"{ pub const JSON_IDM_ADMINS_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -28,6 +33,7 @@ pub const JSON_IDM_ADMINS_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin System Administrators Group.
pub const JSON_SYSTEM_ADMINS_V1: &str = r#"{ pub const JSON_SYSTEM_ADMINS_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -38,8 +44,8 @@ pub const JSON_SYSTEM_ADMINS_V1: &str = r#"{
} }
}"#; }"#;
// groups
// * People read managers // * People read managers
/// Builtin IDM Group for granting elevated people (personal data) read permissions.
pub const JSON_IDM_PEOPLE_READ_PRIV_V1: &str = r#"{ pub const JSON_IDM_PEOPLE_READ_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -49,7 +55,9 @@ pub const JSON_IDM_PEOPLE_READ_PRIV_V1: &str = r#"{
"member": ["00000000-0000-0000-0000-000000000003"] "member": ["00000000-0000-0000-0000-000000000003"]
} }
}"#; }"#;
// * People write managers // * People write managers
/// Builtin IDM Group for granting elevated people (personal data) write and lifecycle management permissions.
pub const JSON_IDM_PEOPLE_MANAGE_PRIV_V1: &str = r#"{ pub const JSON_IDM_PEOPLE_MANAGE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -61,6 +69,8 @@ pub const JSON_IDM_PEOPLE_MANAGE_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
/// Builtin IDM Group for granting elevated people (personal data) write permissions.
pub const JSON_IDM_PEOPLE_WRITE_PRIV_V1: &str = r#"{ pub const JSON_IDM_PEOPLE_WRITE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -74,6 +84,7 @@ pub const JSON_IDM_PEOPLE_WRITE_PRIV_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin IDM Group for importing passwords to person accounts - intended for service account membership only.
pub const JSON_IDM_PEOPLE_ACCOUNT_PASSWORD_IMPORT_PRIV_V1: &str = r#"{ pub const JSON_IDM_PEOPLE_ACCOUNT_PASSWORD_IMPORT_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -83,6 +94,7 @@ pub const JSON_IDM_PEOPLE_ACCOUNT_PASSWORD_IMPORT_PRIV_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin IDM Group for allowing the ability to extend accounts to have the "person" flag set.
pub const JSON_IDM_PEOPLE_EXTEND_PRIV_V1: &str = r#"{ pub const JSON_IDM_PEOPLE_EXTEND_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -105,6 +117,7 @@ pub const JSON_IDM_PEOPLE_SELF_WRITE_MAIL_PRIV_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin IDM Group for granting elevated high privilege people (personal data) read permissions.
pub const JSON_IDM_HP_PEOPLE_READ_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_PEOPLE_READ_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -114,6 +127,8 @@ pub const JSON_IDM_HP_PEOPLE_READ_PRIV_V1: &str = r#"{
"member": ["00000000-0000-0000-0000-000000000029"] "member": ["00000000-0000-0000-0000-000000000029"]
} }
}"#; }"#;
/// Builtin IDM Group for granting elevated high privilege people (personal data) write permissions.
pub const JSON_IDM_HP_PEOPLE_WRITE_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_PEOPLE_WRITE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -125,6 +140,8 @@ pub const JSON_IDM_HP_PEOPLE_WRITE_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
/// Builtin IDM Group for extending high privilege accounts to be people.
pub const JSON_IDM_HP_PEOPLE_EXTEND_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_PEOPLE_EXTEND_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -139,6 +156,7 @@ pub const JSON_IDM_HP_PEOPLE_EXTEND_PRIV_V1: &str = r#"{
// * group write manager (no read, everyone has read via the anon, etc) // * group write manager (no read, everyone has read via the anon, etc)
// IDM_GROUP_CREATE_PRIV // IDM_GROUP_CREATE_PRIV
/// Builtin IDM Group for granting elevated group write and lifecycle permissions.
pub const JSON_IDM_GROUP_MANAGE_PRIV_V1: &str = r#"{ pub const JSON_IDM_GROUP_MANAGE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -217,6 +235,7 @@ pub const JSON_IDM_ACCOUNT_UNIX_EXTEND_PRIV_V1: &str = r#"{
}"#; }"#;
// * RADIUS servers // * RADIUS servers
/// Builtin IDM Group for RADIUS secret write for all non-hp accounts.
pub const JSON_IDM_RADIUS_SECRET_WRITE_PRIV_V1: &str = r#"{ pub const JSON_IDM_RADIUS_SECRET_WRITE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -227,6 +246,7 @@ pub const JSON_IDM_RADIUS_SECRET_WRITE_PRIV_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin IDM Group for RADIUS secret reading for all non-hp accounts.
pub const JSON_IDM_RADIUS_SECRET_READ_PRIV_V1: &str = r#"{ pub const JSON_IDM_RADIUS_SECRET_READ_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -237,6 +257,7 @@ pub const JSON_IDM_RADIUS_SECRET_READ_PRIV_V1: &str = r#"{
} }
}"#; }"#;
/// Builtin IDM Group for RADIUS server access delegation.
pub const JSON_IDM_RADIUS_SERVERS_V1: &str = r#"{ pub const JSON_IDM_RADIUS_SERVERS_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -258,6 +279,7 @@ pub const JSON_IDM_HP_ACCOUNT_READ_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
// * high priv account write manager // * high priv account write manager
pub const JSON_IDM_HP_ACCOUNT_MANAGE_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_ACCOUNT_MANAGE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
@ -270,6 +292,8 @@ pub const JSON_IDM_HP_ACCOUNT_MANAGE_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
/// Builtin IDM Group for granting elevated account write permissions over high privilege accounts.
pub const JSON_IDM_HP_ACCOUNT_WRITE_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_ACCOUNT_WRITE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -281,6 +305,8 @@ pub const JSON_IDM_HP_ACCOUNT_WRITE_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
/// Builtin IDM Group for granting account unix extend permissions for high privilege accounts.
pub const JSON_IDM_HP_ACCOUNT_UNIX_EXTEND_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_ACCOUNT_UNIX_EXTEND_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -290,6 +316,7 @@ pub const JSON_IDM_HP_ACCOUNT_UNIX_EXTEND_PRIV_V1: &str = r#"{
"member": ["00000000-0000-0000-0000-000000000019"] "member": ["00000000-0000-0000-0000-000000000019"]
} }
}"#; }"#;
// * Schema write manager // * Schema write manager
pub const JSON_IDM_SCHEMA_MANAGE_PRIV_V1: &str = r#"{ pub const JSON_IDM_SCHEMA_MANAGE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
@ -302,6 +329,7 @@ pub const JSON_IDM_SCHEMA_MANAGE_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
// * ACP read/write manager // * ACP read/write manager
pub const JSON_IDM_ACP_MANAGE_PRIV_V1: &str = r#"{ pub const JSON_IDM_ACP_MANAGE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
@ -312,7 +340,8 @@ pub const JSON_IDM_ACP_MANAGE_PRIV_V1: &str = r#"{
"member": ["00000000-0000-0000-0000-000000000019"] "member": ["00000000-0000-0000-0000-000000000019"]
} }
}"#; }"#;
// * HP Group Management
// Builtin IDM Group for granting elevated group write and lifecycle privileges for high privilege groups.
pub const JSON_IDM_HP_GROUP_MANAGE_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_GROUP_MANAGE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -322,6 +351,8 @@ pub const JSON_IDM_HP_GROUP_MANAGE_PRIV_V1: &str = r#"{
"member": ["00000000-0000-0000-0000-000000000019"] "member": ["00000000-0000-0000-0000-000000000019"]
} }
}"#; }"#;
/// Builtin IDM Group for granting elevated group write privileges for high privilege groups.
pub const JSON_IDM_HP_GROUP_WRITE_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_GROUP_WRITE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -333,6 +364,8 @@ pub const JSON_IDM_HP_GROUP_WRITE_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
/// Builtin IDM Group for granting unix group extension permissions for high privilege groups.
pub const JSON_IDM_HP_GROUP_UNIX_EXTEND_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_GROUP_UNIX_EXTEND_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -344,7 +377,8 @@ pub const JSON_IDM_HP_GROUP_UNIX_EXTEND_PRIV_V1: &str = r#"{
] ]
} }
}"#; }"#;
// Who can configure this domain?
/// Builtin IDM Group for granting local domain administration rights and trust administration rights.
pub const JSON_DOMAIN_ADMINS: &str = r#"{ pub const JSON_DOMAIN_ADMINS: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -356,6 +390,7 @@ pub const JSON_DOMAIN_ADMINS: &str = r#"{
] ]
} }
}"#; }"#;
pub const JSON_IDM_HP_OAUTH2_MANAGE_PRIV_V1: &str = r#"{ pub const JSON_IDM_HP_OAUTH2_MANAGE_PRIV_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
@ -368,7 +403,7 @@ pub const JSON_IDM_HP_OAUTH2_MANAGE_PRIV_V1: &str = r#"{
} }
}"#; }"#;
// This must be the last group to init to include the UUID of the other high priv groups. /// This must be the last group to init to include the UUID of the other high priv groups.
pub const JSON_IDM_HIGH_PRIVILEGE_V1: &str = r#"{ pub const JSON_IDM_HIGH_PRIVILEGE_V1: &str = r#"{
"attrs": { "attrs": {
"class": ["group", "object"], "class": ["group", "object"],
View file
@ -13,7 +13,7 @@ pub use crate::constants::system_config::*;
pub use crate::constants::uuids::*; pub use crate::constants::uuids::*;
// Increment this as we add new schema types and values!!! // Increment this as we add new schema types and values!!!
pub const SYSTEM_INDEX_VERSION: i64 = 23; pub const SYSTEM_INDEX_VERSION: i64 = 24;
// On test builds, define to 60 seconds // On test builds, define to 60 seconds
#[cfg(test)] #[cfg(test)]
pub const PURGE_FREQUENCY: u64 = 60; pub const PURGE_FREQUENCY: u64 = 60;
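The version bump above is what actually drives a reindex on upgrade. A minimal sketch of the pattern, with illustrative names rather than Kanidm's real reindex API:

// Sketch: a stored index version older than the binary's constant
// means the indexes must be rebuilt exactly once after upgrade.
const SYSTEM_INDEX_VERSION: i64 = 24;

fn needs_reindex(db_index_version: i64) -> bool {
    db_index_version < SYSTEM_INDEX_VERSION
}

fn main() {
    assert!(needs_reindex(23));  // upgraded binary, old database
    assert!(!needs_reindex(24)); // already current
}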
View file
@ -214,6 +214,38 @@ pub const JSON_SCHEMA_ATTR_DOMAIN_NAME: &str = r#"{
] ]
} }
}"#; }"#;
pub const JSON_SCHEMA_ATTR_DOMAIN_DISPLAY_NAME: &str = r#"{
"attrs": {
"class": [
"object",
"system",
"attributetype"
],
"description": [
"The user-facing display name of the Kanidm domain."
],
"index": [
"EQUALITY"
],
"unique": [
"false"
],
"multivalue": [
"false"
],
"attributename": [
"domain_display_name"
],
"syntax": [
"UTF8STRING"
],
"uuid": [
"00000000-0000-0000-0000-ffff00000098"
]
}
}"#;
pub const JSON_SCHEMA_ATTR_DOMAIN_UUID: &str = r#"{ pub const JSON_SCHEMA_ATTR_DOMAIN_UUID: &str = r#"{
"attrs": { "attrs": {
"class": [ "class": [
@ -1045,6 +1077,7 @@ pub const JSON_SCHEMA_CLASS_DOMAIN_INFO: &str = r#"
"name", "name",
"domain_uuid", "domain_uuid",
"domain_name", "domain_name",
"domain_display_name",
"fernet_private_key_str", "fernet_private_key_str",
"es256_private_key_der" "es256_private_key_der"
], ],
@ -1107,7 +1140,7 @@ pub const JSON_SCHEMA_CLASS_POSIXACCOUNT: &str = r#"
} }
"#; "#;
pub const JSON_SCHEMA_CLASS_SYSTEM_CONFIG: &str = r#" pub const JSON_SCHEMA_CLASS_SYSTEM_CONFIG: &str = &r#"
{ {
"attrs": { "attrs": {
"class": [ "class": [
View file
@ -1,4 +1,5 @@
// This is seperated because the password badlist section may become very long /// Default entries for system_config
/// This is separated because the password badlist section may become very long
pub const JSON_SYSTEM_CONFIG_V1: &str = r####"{ pub const JSON_SYSTEM_CONFIG_V1: &str = r####"{
"attrs": { "attrs": {
"class": ["object", "system_config", "system"], "class": ["object", "system_config", "system"],
View file
@ -119,7 +119,7 @@ pub const _UUID_SCHEMA_ATTR_GIDNUMBER: Uuid = uuid!("00000000-0000-0000-0000-fff
pub const _UUID_SCHEMA_CLASS_POSIXACCOUNT: Uuid = uuid!("00000000-0000-0000-0000-ffff00000057"); pub const _UUID_SCHEMA_CLASS_POSIXACCOUNT: Uuid = uuid!("00000000-0000-0000-0000-ffff00000057");
pub const _UUID_SCHEMA_CLASS_POSIXGROUP: Uuid = uuid!("00000000-0000-0000-0000-ffff00000058"); pub const _UUID_SCHEMA_CLASS_POSIXGROUP: Uuid = uuid!("00000000-0000-0000-0000-ffff00000058");
pub const _UUID_SCHEMA_ATTR_BADLIST_PASSWORD: Uuid = uuid!("00000000-0000-0000-0000-ffff00000059"); pub const _UUID_SCHEMA_ATTR_BADLIST_PASSWORD: Uuid = uuid!("00000000-0000-0000-0000-ffff00000059");
pub const _UUID_SCHEMA_CLASS_SYSTEM_CONFIG: Uuid = uuid!("00000000-0000-0000-0000-ffff00000060"); pub const UUID_SCHEMA_CLASS_SYSTEM_CONFIG: Uuid = uuid!("00000000-0000-0000-0000-ffff00000060");
pub const _UUID_SCHEMA_ATTR_LOGINSHELL: Uuid = uuid!("00000000-0000-0000-0000-ffff00000061"); pub const _UUID_SCHEMA_ATTR_LOGINSHELL: Uuid = uuid!("00000000-0000-0000-0000-ffff00000061");
pub const _UUID_SCHEMA_ATTR_UNIX_PASSWORD: Uuid = uuid!("00000000-0000-0000-0000-ffff00000062"); pub const _UUID_SCHEMA_ATTR_UNIX_PASSWORD: Uuid = uuid!("00000000-0000-0000-0000-ffff00000062");
pub const UUID_SCHEMA_ATTR_LAST_MOD_CID: Uuid = uuid!("00000000-0000-0000-0000-ffff00000063"); pub const UUID_SCHEMA_ATTR_LAST_MOD_CID: Uuid = uuid!("00000000-0000-0000-0000-ffff00000063");
@ -169,6 +169,8 @@ pub const _UUID_SCHEMA_ATTR_CREDENTIAL_UPDATE_INTENT_TOKEN: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000096"); uuid!("00000000-0000-0000-0000-ffff00000096");
pub const _UUID_SCHEMA_CLASS_OAUTH2_CONSENT_SCOPE_MAP: Uuid = pub const _UUID_SCHEMA_CLASS_OAUTH2_CONSENT_SCOPE_MAP: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000097"); uuid!("00000000-0000-0000-0000-ffff00000097");
pub const _UUID_SCHEMA_ATTR_DOMAIN_DISPLAY_NAME: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000098");
// System and domain infos // System and domain infos
// I'd like to strongly criticise william of the past for making poor choices about these allocations. // I'd like to strongly criticise william of the past for making poor choices about these allocations.
@ -212,7 +214,10 @@ pub const _UUID_IDM_ACP_HP_GROUP_MANAGE_PRIV_V1: Uuid =
uuid!("00000000-0000-0000-0000-ffffff000024"); uuid!("00000000-0000-0000-0000-ffffff000024");
// Skip 25 - see domain info. // Skip 25 - see domain info.
pub const _UUID_IDM_ACP_DOMAIN_ADMIN_PRIV_V1: Uuid = uuid!("00000000-0000-0000-0000-ffffff000026"); pub const _UUID_IDM_ACP_DOMAIN_ADMIN_PRIV_V1: Uuid = uuid!("00000000-0000-0000-0000-ffffff000026");
pub const STR_UUID_SYSTEM_CONFIG: &str = "00000000-0000-0000-0000-ffffff000027";
pub const UUID_SYSTEM_CONFIG: Uuid = uuid!("00000000-0000-0000-0000-ffffff000027"); pub const UUID_SYSTEM_CONFIG: Uuid = uuid!("00000000-0000-0000-0000-ffffff000027");
pub const _UUID_IDM_ACP_SYSTEM_CONFIG_PRIV_V1: Uuid = uuid!("00000000-0000-0000-0000-ffffff000028"); pub const _UUID_IDM_ACP_SYSTEM_CONFIG_PRIV_V1: Uuid = uuid!("00000000-0000-0000-0000-ffffff000028");
pub const _UUID_IDM_ACP_ACCOUNT_UNIX_EXTEND_PRIV_V1: Uuid = pub const _UUID_IDM_ACP_ACCOUNT_UNIX_EXTEND_PRIV_V1: Uuid =
uuid!("00000000-0000-0000-0000-ffffff000029"); uuid!("00000000-0000-0000-0000-ffffff000029");
View file
@ -316,6 +316,8 @@ impl Entry<EntryInit, EntryNew> {
} }
// str -> Proto entry // str -> Proto entry
let pe: ProtoEntry = serde_json::from_str(es).map_err(|e| { let pe: ProtoEntry = serde_json::from_str(es).map_err(|e| {
// We probably shouldn't print ES here because that would allow users
// to inject content into our logs :)
admin_error!(?e, "SerdeJson Failure"); admin_error!(?e, "SerdeJson Failure");
OperationError::SerdeJsonError OperationError::SerdeJsonError
})?; })?;
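The comment above is the whole pattern: log the parser's error, never the raw input, since a crafted payload with embedded newlines could forge log lines. A small sketch of the same shape (error type simplified to a String; assumes the serde_json crate):

fn parse_entry(es: &str) -> Result<serde_json::Value, String> {
    serde_json::from_str(es).map_err(|e| {
        // serde_json's error describes the failure position, not the payload,
        // so nothing attacker-controlled reaches the log stream.
        eprintln!("SerdeJson Failure: {:?}", e);
        "SerdeJsonError".to_string()
    })
}

fn main() {
    assert!(parse_entry("{not json\n[forged log line]").is_err());
}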
@ -420,7 +422,7 @@ impl Entry<EntryInit, EntryNew> {
) )
) )
} }
"displayname" | "description" => { "displayname" | "description" | "domain_display_name" => {
valueset::from_value_iter( valueset::from_value_iter(
vs.into_iter().map(|v| Value::new_utf8(v)) vs.into_iter().map(|v| Value::new_utf8(v))
) )
@ -569,8 +571,8 @@ impl<STATE> Entry<EntryInvalid, STATE> {
self.attrs.get("uuid").and_then(|vs| vs.to_uuid_single()) self.attrs.get("uuid").and_then(|vs| vs.to_uuid_single())
} }
/// Validate that this entry and it's attribute-value sets are conformant to the systems /// Validate that this entry and its attribute-value sets are conformant to the system's
/// schema and the releant syntaxes. /// schema and the relevant syntaxes.
pub fn validate( pub fn validate(
self, self,
schema: &dyn SchemaTransaction, schema: &dyn SchemaTransaction,
@ -618,9 +620,15 @@ impl<STATE> Entry<EntryInvalid, STATE> {
match entry_classes.as_iutf8_iter() { match entry_classes.as_iutf8_iter() {
Some(cls_iter) => cls_iter.for_each(|s| match schema_classes.get(s) { Some(cls_iter) => cls_iter.for_each(|s| match schema_classes.get(s) {
Some(x) => classes.push(x), Some(x) => classes.push(x),
None => invalid_classes.push(s.to_string()), None => {
admin_debug!("invalid class: {:?}", s);
invalid_classes.push(s.to_string())
}
}), }),
None => invalid_classes.push("corrupt class attribute".to_string()), None => {
admin_debug!("corrupt class attribute in: {:?}", entry_classes);
invalid_classes.push("corrupt class attribute".to_string())
}
}; };
if !invalid_classes.is_empty() { if !invalid_classes.is_empty() {
@ -664,23 +672,21 @@ impl<STATE> Entry<EntryInvalid, STATE> {
}); });
if !missing_must.is_empty() { if !missing_must.is_empty() {
admin_warn!("Validation error, the following required (must) attributes are missing - {:?}", missing_must);
return Err(SchemaError::MissingMustAttribute(missing_must)); return Err(SchemaError::MissingMustAttribute(missing_must));
} }
if extensible { if extensible {
// ladmin_warning!("Extensible Object In Use!");
ne.attrs.iter().try_for_each(|(attr_name, avas)| { ne.attrs.iter().try_for_each(|(attr_name, avas)| {
match schema_attributes.get(attr_name) { match schema_attributes.get(attr_name) {
Some(a_schema) => { Some(a_schema) => {
// Now, for each type we do a *full* check of the syntax // Now, for each type we do a *full* check of the syntax
// and validity of the ava. // and validity of the ava.
if a_schema.phantom { if a_schema.phantom {
/* admin_warn!(
lrequest_error!( "Rejecting attempt to add phantom attribute to extensible object: {}",
"Attempt to add phantom attribute to extensible: {}",
attr_name attr_name
); );
*/
Err(SchemaError::PhantomAttribute(attr_name.to_string())) Err(SchemaError::PhantomAttribute(attr_name.to_string()))
} else { } else {
a_schema.validate_ava(attr_name.as_str(), avas) a_schema.validate_ava(attr_name.as_str(), avas)
@ -688,9 +694,13 @@ impl<STATE> Entry<EntryInvalid, STATE> {
} }
} }
None => { None => {
// lrequest_error!("Invalid Attribute {} for extensible object", attr_name); admin_error!(
trace!(?attr_name, "extensible -> SchemaError::InvalidAttribute"); "Invalid Attribute {}, undefined in schema_attributes",
Err(SchemaError::InvalidAttribute(attr_name.to_string())) attr_name.to_string()
);
Err(SchemaError::InvalidAttribute(
attr_name.to_string()
))
} }
} }
})?; })?;
@ -699,9 +709,9 @@ impl<STATE> Entry<EntryInvalid, STATE> {
// not allowed to exist in the class, which means a phantom attribute can't // not allowed to exist in the class, which means a phantom attribute can't
// be in the may/must set, and would FAIL our normal checks anyway. // be in the may/must set, and would FAIL our normal checks anyway.
// We clone string here, but it's so we can check all // The set of "may" is a combination of may and must, since we have already
// the values in "may" ar here - so we can't avoid this look up. What we // asserted that all must requirements are fufilled. This allows us to
// could do though, is have &String based on the schemaattribute though?; // perform extended attribute checking in a single pass.
let may: Result<Map<&AttrString, &SchemaAttribute>, _> = classes let may: Result<Map<&AttrString, &SchemaAttribute>, _> = classes
.iter() .iter()
// Join our class systemmust + must + systemmay + may into one. // Join our class systemmust + must + systemmay + may into one.
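In other words, the lookup set is just the union of every class's must and may lists, built once and checked in a single pass. A toy model of that fold (names here are illustrative):

use std::collections::HashSet;

struct Class { must: Vec<&'static str>, may: Vec<&'static str> }

fn allowed_attrs(classes: &[Class]) -> HashSet<&'static str> {
    classes.iter()
        .flat_map(|c| c.must.iter().chain(c.may.iter()).copied())
        .collect()
}

fn main() {
    let cls = [Class { must: vec!["name"], may: vec!["description"] }];
    let may = allowed_attrs(&cls);
    assert!(may.contains("name") && may.contains("description"));
    assert!(!may.contains("zzzzz")); // would be rejected as AttributeNotValidForClass
}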
@ -738,9 +748,14 @@ impl<STATE> Entry<EntryInvalid, STATE> {
// .map_err(|e| lrequest_error!("Failed to validate: {}", attr_name); // .map_err(|e| lrequest_error!("Failed to validate: {}", attr_name);
} }
None => { None => {
// lrequest_error!("Invalid Attribute {} for may+must set", attr_name); admin_error!(
trace!(?attr_name, "SchemaError::InvalidAttribute"); "{} - not found in the list of valid attributes for this set of classes - valid attributes are {:?}",
Err(SchemaError::InvalidAttribute(attr_name.to_string())) attr_name.to_string(),
may.keys().collect::<Vec<_>>()
);
Err(SchemaError::AttributeNotValidForClass(
attr_name.to_string()
))
} }
} }
})?; })?;
View file
@ -53,6 +53,7 @@ pub struct CredentialUpdateSessionToken {
pub token_enc: String, pub token_enc: String,
} }
/// The current state of MFA registration
enum MfaRegState { enum MfaRegState {
None, None,
TotpInit(Totp), TotpInit(Totp),
@ -73,6 +74,7 @@ impl fmt::Debug for MfaRegState {
} }
pub(crate) struct CredentialUpdateSession { pub(crate) struct CredentialUpdateSession {
issuer: String,
// Current credentials - these are on the Account! // Current credentials - these are on the Account!
account: Account, account: Account,
// //
@ -125,6 +127,7 @@ impl fmt::Debug for MfaRegStateStatus {
#[derive(Debug)] #[derive(Debug)]
pub struct CredentialUpdateSessionStatus { pub struct CredentialUpdateSessionStatus {
spn: String, spn: String,
// The target user's display name
displayname: String, displayname: String,
// ttl: Duration, // ttl: Duration,
// //
@ -165,7 +168,7 @@ impl From<&CredentialUpdateSession> for CredentialUpdateSessionStatus {
mfaregstate: match &session.mfaregstate { mfaregstate: match &session.mfaregstate {
MfaRegState::None => MfaRegStateStatus::None, MfaRegState::None => MfaRegStateStatus::None,
MfaRegState::TotpInit(token) => MfaRegStateStatus::TotpCheck( MfaRegState::TotpInit(token) => MfaRegStateStatus::TotpCheck(
token.to_proto(session.account.name.as_str(), session.account.spn.as_str()), token.to_proto(session.account.name.as_str(), session.issuer.as_str()),
), ),
MfaRegState::TotpTryAgain(_) => MfaRegStateStatus::TotpTryAgain, MfaRegState::TotpTryAgain(_) => MfaRegStateStatus::TotpTryAgain,
MfaRegState::TotpInvalidSha1(_, _) => MfaRegStateStatus::TotpInvalidSha1, MfaRegState::TotpInvalidSha1(_, _) => MfaRegStateStatus::TotpInvalidSha1,
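The issuer ends up in the otpauth:// provisioning URI that to_proto produces, which is the label authenticator apps show the user - hence preferring the domain display name over the account SPN. A hand-rolled sketch of the URI shape (percent-encoding elided; Kanidm's real encoding lives in Totp::to_proto):

fn otpauth_uri(issuer: &str, account: &str, secret_b32: &str) -> String {
    format!("otpauth://totp/{}:{}?secret={}&issuer={}",
            issuer, account, secret_b32, issuer)
}

fn main() {
    let uri = otpauth_uri("Example Org", "alice", "JBSWY3DPEHPK3PXP");
    assert!(uri.contains("issuer=Example Org"));
}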
@ -293,9 +296,13 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
) -> Result<(CredentialUpdateSessionToken, CredentialUpdateSessionStatus), OperationError> { ) -> Result<(CredentialUpdateSessionToken, CredentialUpdateSessionStatus), OperationError> {
// - stash the current state of all associated credentials // - stash the current state of all associated credentials
let primary = account.primary.clone(); let primary = account.primary.clone();
// Stash the issuer for some UI elements
let issuer = self.qs_write.get_domain_display_name().to_string();
// - store account policy (if present) // - store account policy (if present)
let session = CredentialUpdateSession { let session = CredentialUpdateSession {
account, account,
issuer,
intent_token_id, intent_token_id,
primary, primary,
mfaregstate: MfaRegState::None, mfaregstate: MfaRegState::None,
@ -413,7 +420,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
pub fn exchange_intent_credential_update( pub fn exchange_intent_credential_update(
&mut self, &mut self,
token: CredentialUpdateIntentToken, token: CredentialUpdateIntentToken,
ct: Duration, current_time: Duration,
) -> Result<(CredentialUpdateSessionToken, CredentialUpdateSessionStatus), OperationError> { ) -> Result<(CredentialUpdateSessionToken, CredentialUpdateSessionStatus), OperationError> {
let CredentialUpdateIntentToken { intent_id } = token; let CredentialUpdateIntentToken { intent_id } = token;
@ -476,7 +483,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
"Rejecting Update Session - Intent Token does not exist (replication delay?)", "Rejecting Update Session - Intent Token does not exist (replication delay?)",
); );
return Err(OperationError::Wait( return Err(OperationError::Wait(
OffsetDateTime::unix_epoch() + (ct + Duration::from_secs(150)), OffsetDateTime::unix_epoch() + (current_time + Duration::from_secs(150)),
)); ));
} }
}; };
@ -501,7 +508,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
session_id, session_id,
session_ttl, session_ttl,
}) => { }) => {
if ct > *session_ttl { if current_time > *session_ttl {
// The former session has expired, continue. // The former session has expired, continue.
security_info!( security_info!(
?entry, ?entry,
@ -522,8 +529,8 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
} }
Some(IntentTokenState::Valid { max_ttl }) => { Some(IntentTokenState::Valid { max_ttl }) => {
// Check the TTL // Check the TTL
if ct >= *max_ttl { if current_time >= *max_ttl {
trace!(?ct, ?max_ttl); trace!(?current_time, ?max_ttl);
security_info!(%account.uuid, "intent has expired"); security_info!(%account.uuid, "intent has expired");
return Err(OperationError::SessionExpired); return Err(OperationError::SessionExpired);
} else { } else {
@ -550,7 +557,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
// We need to pin the id from the intent token into the credential to ensure it's not re-used // We need to pin the id from the intent token into the credential to ensure it's not re-used
// Need to change this to the expiry time, so we can purge up to. // Need to change this to the expiry time, so we can purge up to.
let session_id = uuid_from_duration(ct + MAXIMUM_CRED_UPDATE_TTL, self.sid); let session_id = uuid_from_duration(current_time + MAXIMUM_CRED_UPDATE_TTL, self.sid);
let mut modlist = ModifyList::new(); let mut modlist = ModifyList::new();
@ -565,7 +572,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
IntentTokenState::InProgress { IntentTokenState::InProgress {
max_ttl, max_ttl,
session_id, session_id,
session_ttl: ct + MAXIMUM_CRED_UPDATE_TTL, session_ttl: current_time + MAXIMUM_CRED_UPDATE_TTL,
}, },
), ),
)); ));
@ -584,7 +591,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
// ========== // ==========
// Okay, good to exchange. // Okay, good to exchange.
self.create_credupdate_session(session_id, Some(intent_id), account, ct) self.create_credupdate_session(session_id, Some(intent_id), account, current_time)
} }
pub fn init_credential_update( pub fn init_credential_update(
@ -594,9 +601,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
) -> Result<(CredentialUpdateSessionToken, CredentialUpdateSessionStatus), OperationError> { ) -> Result<(CredentialUpdateSessionToken, CredentialUpdateSessionStatus), OperationError> {
spanned!("idm::server::credupdatesession<Init>", { spanned!("idm::server::credupdatesession<Init>", {
let account = self.validate_init_credential_update(event.target, &event.ident)?; let account = self.validate_init_credential_update(event.target, &event.ident)?;
// ==== AUTHORISATION CHECKED === // ==== AUTHORISATION CHECKED ===
// This is the expiry time, so that our cleanup task can "purge up to now" rather // This is the expiry time, so that our cleanup task can "purge up to now" rather
// than needing to do calculations. // than needing to do calculations.
let sessionid = uuid_from_duration(ct + MAXIMUM_CRED_UPDATE_TTL, self.sid); let sessionid = uuid_from_duration(ct + MAXIMUM_CRED_UPDATE_TTL, self.sid);
@ -1034,12 +1039,12 @@ impl<'a> IdmServerCredUpdateTransaction<'a> {
Ok(session.deref().into()) Ok(session.deref().into())
} else { } else {
// What if it's a broken authenticator app? Google authenticator // What if it's a broken authenticator app? Google authenticator
// and authy both force sha1 and ignore the algo we send. So lets // and Authy both force SHA1 and ignore the algo we send. So let's
// check that just in case. // check that just in case.
let token_sha1 = totp_token.clone().downgrade_to_legacy(); let token_sha1 = totp_token.clone().downgrade_to_legacy();
if token_sha1.verify(totp_chal, &ct) { if token_sha1.verify(totp_chal, &ct) {
// Greeeaaaaaatttt it's a broken app. Let's check the user // Greeeaaaaaatttt. It's a broken app. Let's check the user
// knows this is broken, before we proceed. // knows this is broken, before we proceed.
session.mfaregstate = session.mfaregstate =
MfaRegState::TotpInvalidSha1(totp_token.clone(), token_sha1); MfaRegState::TotpInvalidSha1(totp_token.clone(), token_sha1);
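The control flow here is: verify with the negotiated algorithm first, fall back to a legacy SHA1 re-check, and only accept the downgrade after the user explicitly confirms. A sketch of that flow with the actual HMAC verification stubbed out:

enum TotpAlgo { Sha256, Sha1 }

fn verify(algo: &TotpAlgo, chal: u32) -> bool {
    // stand-in for real HMAC-based TOTP verification
    match algo { TotpAlgo::Sha256 => chal == 111_111, TotpAlgo::Sha1 => chal == 222_222 }
}

fn check(chal: u32) -> &'static str {
    if verify(&TotpAlgo::Sha256, chal) {
        "accepted"
    } else if verify(&TotpAlgo::Sha1, chal) {
        "broken app forces sha1 - ask the user to confirm first"
    } else {
        "bad code or typo - try again"
    }
}

fn main() {
    assert_eq!(check(222_222), "broken app forces sha1 - ask the user to confirm first");
}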
View file
@ -58,26 +58,30 @@ pub(crate) struct MfaRegSession {
pub account: Account, pub account: Account,
// What state is the reg process in? // What state is the reg process in?
state: MfaRegState, state: MfaRegState,
// Human-facing name of the Domain
issuer: String,
} }
impl MfaRegSession { impl MfaRegSession {
pub fn totp_new( pub fn totp_new(
origin: IdentityId, origin: IdentityId,
account: Account, account: Account,
issuer: String,
) -> Result<(Self, MfaRegNext), OperationError> { ) -> Result<(Self, MfaRegNext), OperationError> {
// Based on the req, init our session, and then return the next step. // Based on the req, init our session, and then return the next step.
// Store the ID of the event that starts the attempt // Store the ID of the event that starts the attempt
let token = Totp::generate_secure(TOTP_DEFAULT_STEP); let token = Totp::generate_secure(TOTP_DEFAULT_STEP);
let accountname = account.name.as_str(); let accountname = account.name.as_str();
let issuer = account.spn.as_str();
let next = MfaRegNext::TotpCheck(token.to_proto(accountname, issuer)); let next = MfaRegNext::TotpCheck(token.to_proto(accountname, issuer.as_str()));
let state = MfaRegState::TotpInit(token); let state = MfaRegState::TotpInit(token);
let s = MfaRegSession { let s = MfaRegSession {
origin, origin,
account, account,
state, state,
issuer,
}; };
Ok((s, next)) Ok((s, next))
} }
@ -107,23 +111,24 @@ impl MfaRegSession {
} }
} else { } else {
// What if it's a broken authenticator app? Google authenticator // What if it's a broken authenticator app? Google authenticator
// and authy both force sha1 and ignore the algo we send. So lets // and Authy both force SHA1 and ignore the algo we send. So let's
// check that just in case. // check that just in case.
let token_sha1 = token.clone().downgrade_to_legacy(); let token_sha1 = token.clone().downgrade_to_legacy();
if token_sha1.verify(chal, ct) { if token_sha1.verify(chal, ct) {
// Greeeaaaaaatttt it's a broken app. Let's check the user // Greeeaaaaaatttt. It's a broken app. Let's check the user
// knows this is broken, before we proceed. // knows this is broken, before we proceed.
let mut nstate = MfaRegState::TotpInvalidSha1(token_sha1); let mut nstate = MfaRegState::TotpInvalidSha1(token_sha1);
mem::swap(&mut self.state, &mut nstate); mem::swap(&mut self.state, &mut nstate);
Ok((MfaRegNext::TotpInvalidSha1, None)) Ok((MfaRegNext::TotpInvalidSha1, None))
} else { } else {
// Prooobbably a bad code or typo then. Lte them try again. // Probably a bad code or typo then. Let them try again.
let accountname = self.account.name.as_str(); let accountname = self.account.name.as_str();
let issuer = self.account.spn.as_str();
Ok(( Ok((
MfaRegNext::TotpCheck(token.to_proto(accountname, issuer)), MfaRegNext::TotpCheck(
token.to_proto(accountname, self.issuer.as_str()),
),
None, None,
)) ))
} }
@ -165,6 +170,7 @@ impl MfaRegSession {
account: Account, account: Account,
label: String, label: String,
webauthn: &Webauthn<WebauthnDomainConfig>, webauthn: &Webauthn<WebauthnDomainConfig>,
issuer: String,
) -> Result<(Self, MfaRegNext), OperationError> { ) -> Result<(Self, MfaRegNext), OperationError> {
// Setup the registration. // Setup the registration.
let (chal, reg_state) = webauthn let (chal, reg_state) = webauthn
@ -179,6 +185,8 @@ impl MfaRegSession {
origin, origin,
account, account,
state, state,
// this isn't used in webauthn... yet?
issuer,
}; };
let next = MfaRegNext::WebauthnChallenge(chal); let next = MfaRegNext::WebauthnChallenge(chal);
Ok((s, next)) Ok((s, next))
View file
@ -45,6 +45,7 @@ lazy_static! {
static ref CLASS_OAUTH2_BASIC: PartialValue = static ref CLASS_OAUTH2_BASIC: PartialValue =
PartialValue::new_class("oauth2_resource_server_basic"); PartialValue::new_class("oauth2_resource_server_basic");
static ref URL_SERVICE_DOCUMENTATION: Url = static ref URL_SERVICE_DOCUMENTATION: Url =
#[allow(clippy::expect_used)]
Url::parse("https://kanidm.github.io/kanidm/master/integrations/oauth2.html") Url::parse("https://kanidm.github.io/kanidm/master/integrations/oauth2.html")
.expect("Failed to parse oauth2 service documentation url"); .expect("Failed to parse oauth2 service documentation url");
} }
View file
@ -84,7 +84,7 @@ type CredSoftLockMutex = Arc<Mutex<CredSoftLock>>;
pub struct IdmServer { pub struct IdmServer {
// There is a good reason to keep this single thread - it // There is a good reason to keep this single thread - it
// means that limits to sessions can be easily applied and checked to // means that limits to sessions can be easily applied and checked to
// variaous accounts, and we have a good idea of how to structure the // various accounts, and we have a good idea of how to structure the
// in memory caches related to locking. // in memory caches related to locking.
session_ticket: Semaphore, session_ticket: Semaphore,
sessions: BptreeMap<Uuid, AuthSessionMutex>, sessions: BptreeMap<Uuid, AuthSessionMutex>,
@ -299,11 +299,13 @@ impl IdmServer {
} }
} }
/// Perform a blocking read transaction on the database.
#[cfg(test)] #[cfg(test)]
pub fn proxy_read<'a>(&'a self) -> IdmServerProxyReadTransaction<'a> { pub fn proxy_read<'a>(&'a self) -> IdmServerProxyReadTransaction<'a> {
task::block_on(self.proxy_read_async()) task::block_on(self.proxy_read_async())
} }
/// Read from the database, in a transaction.
pub async fn proxy_read_async(&self) -> IdmServerProxyReadTransaction<'_> { pub async fn proxy_read_async(&self) -> IdmServerProxyReadTransaction<'_> {
IdmServerProxyReadTransaction { IdmServerProxyReadTransaction {
qs_read: self.qs.read_async().await, qs_read: self.qs.read_async().await,
@ -1688,13 +1690,19 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
let origin = (&wre.ident.origin).into(); let origin = (&wre.ident.origin).into();
let label = wre.label.clone(); let label = wre.label.clone();
let (session, next) = MfaRegSession::webauthn_new(origin, account, label, self.webauthn)?; let issuer = self
.qs_write
.get_domain_display_name()
.to_string();
let next = next.to_proto(sessionid); let (session, mfa_reg_next) =
MfaRegSession::webauthn_new(origin, account, label, self.webauthn, issuer)?;
let next = mfa_reg_next.to_proto(sessionid);
// Add session to tree // Add session to tree
self.mfareg_sessions.insert(sessionid, session); self.mfareg_sessions.insert(sessionid, session);
trace!(?sessionid, "Start mfa reg session"); trace!(?sessionid, "Started mfa reg session for webauthn");
Ok(next) Ok(next)
} }
@ -1793,7 +1801,13 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
let sessionid = uuid_from_duration(ct, self.sid); let sessionid = uuid_from_duration(ct, self.sid);
let origin = (&gte.ident.origin).into(); let origin = (&gte.ident.origin).into();
let (session, next) = MfaRegSession::totp_new(origin, account).map_err(|e| {
let issuer = self
.qs_write
.get_domain_display_name()
.to_string();
let (session, next) = MfaRegSession::totp_new(origin, account, issuer).map_err(|e| {
admin_error!("Unable to start totp MfaRegSession {:?}", e); admin_error!("Unable to start totp MfaRegSession {:?}", e);
e e
})?; })?;
@ -1802,7 +1816,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
// Add session to tree // Add session to tree
self.mfareg_sessions.insert(sessionid, session); self.mfareg_sessions.insert(sessionid, session);
trace!(?sessionid, "Start mfa reg session"); trace!(?sessionid, "Started totp mfa reg session");
Ok(next) Ok(next)
} }
@ -1815,7 +1829,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
let origin = (&vte.ident.origin).into(); let origin = (&vte.ident.origin).into();
let chal = vte.chal; let chal = vte.chal;
trace!(?sessionid, "Attempting to find mfareg_session"); trace!(?sessionid, "Attempting to find totp mfareg_session");
let (next, opt_cred) = self let (next, opt_cred) = self
.mfareg_sessions .mfareg_sessions
@ -1834,7 +1848,7 @@ impl<'a> IdmServerProxyWriteTransaction<'a> {
.remove(&sessionid) .remove(&sessionid)
.ok_or(OperationError::InvalidState) .ok_or(OperationError::InvalidState)
.map_err(|e| { .map_err(|e| {
admin_error!("Session within transaction vanished!"); admin_error!("Session within totp reg transaction vanished!");
e e
})?; })?;
// reg the token // reg the token
View file
@ -44,6 +44,13 @@ impl Plugin for Domain {
e.set_ava("domain_name", once(n)); e.set_ava("domain_name", once(n));
trace!("plugin_domain: Applying domain_name transform"); trace!("plugin_domain: Applying domain_name transform");
} }
// create the domain_display_name if it's missing
if !e.attribute_pres("domain_display_name") {
let domain_display_name = Value::new_utf8(format!("Kanidm {}", qs.get_domain_name()));
security_info!("plugin_domain: setting default domain_display_name to {:?}", domain_display_name);
e.set_ava("domain_display_name", once(domain_display_name));
}
if !e.attribute_pres("fernet_private_key_str") { if !e.attribute_pres("fernet_private_key_str") {
security_info!("regenerating domain token encryption key"); security_info!("regenerating domain token encryption key");
let k = fernet::Fernet::generate_key(); let k = fernet::Fernet::generate_key();
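The new branch is a classic default-value transform: only fill the attribute when absent, deriving it from the domain name. A toy model over a plain map:

use std::collections::HashMap;

fn apply_default(entry: &mut HashMap<String, String>, domain_name: &str) {
    entry.entry("domain_display_name".to_string())
        .or_insert_with(|| format!("Kanidm {}", domain_name));
}

fn main() {
    let mut e = HashMap::new();
    apply_default(&mut e, "idm.example.com");
    assert_eq!(e["domain_display_name"], "Kanidm idm.example.com");
    apply_default(&mut e, "other.example.com"); // already present: left untouched
    assert_eq!(e["domain_display_name"], "Kanidm idm.example.com");
}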
View file
@ -1,4 +1,4 @@
//! plugins allow an `Event` to be inspected and transformed during the write //! Plugins allow an `Event` to be inspected and transformed during the write
//! paths of the server. This allows richer expression of some concepts and //! paths of the server. This allows richer expression of some concepts and
//! helps to ensure that data is always in specific known states within the //! helps to ensure that data is always in specific known states within the
//! `QueryServer` //! `QueryServer`
View file
@ -19,6 +19,7 @@ lazy_static! {
let mut m = HashSet::with_capacity(8); let mut m = HashSet::with_capacity(8);
// Allow modification of some schema class types to allow local extension // Allow modification of some schema class types to allow local extension
// of schema types. // of schema types.
//
m.insert("must"); m.insert("must");
m.insert("may"); m.insert("may");
// Allow modification of some domain info types for local configuration. // Allow modification of some domain info types for local configuration.
@ -26,6 +27,7 @@ lazy_static! {
m.insert("fernet_private_key_str"); m.insert("fernet_private_key_str");
m.insert("es256_private_key_der"); m.insert("es256_private_key_der");
m.insert("badlist_password"); m.insert("badlist_password");
m.insert("domain_display_name");
m m
}; };
static ref PVCLASS_SYSTEM: PartialValue = PartialValue::new_class("system"); static ref PVCLASS_SYSTEM: PartialValue = PartialValue::new_class("system");
@ -194,10 +196,16 @@ mod tests {
], ],
"acp_search_attr": ["name", "class", "uuid", "classname", "attributename"], "acp_search_attr": ["name", "class", "uuid", "classname", "attributename"],
"acp_modify_class": ["system", "domain_info"], "acp_modify_class": ["system", "domain_info"],
"acp_modify_removedattr": ["class", "displayname", "may", "must", "domain_name", "domain_uuid", "domain_ssid", "fernet_private_key_str", "es256_private_key_der"], "acp_modify_removedattr": [
"acp_modify_presentattr": ["class", "displayname", "may", "must", "domain_name", "domain_uuid", "domain_ssid", "fernet_private_key_str", "es256_private_key_der"], "class", "displayname", "may", "must", "domain_name", "domain_display_name", "domain_uuid", "domain_ssid", "fernet_private_key_str", "es256_private_key_der"
],
"acp_modify_presentattr": [
"class", "displayname", "may", "must", "domain_name", "domain_display_name", "domain_uuid", "domain_ssid", "fernet_private_key_str", "es256_private_key_der"
],
"acp_create_class": ["object", "person", "system", "domain_info"], "acp_create_class": ["object", "person", "system", "domain_info"],
"acp_create_attr": ["name", "class", "description", "displayname", "domain_name", "domain_uuid", "domain_ssid", "uuid", "fernet_private_key_str", "es256_private_key_der"] "acp_create_attr": [
"name", "class", "description", "displayname", "domain_name", "domain_display_name", "domain_uuid", "domain_ssid", "uuid", "fernet_private_key_str", "es256_private_key_der"
]
} }
}"#; }"#;
@ -328,8 +336,9 @@ mod tests {
"name": ["domain_example.net.au"], "name": ["domain_example.net.au"],
"uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"], "uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"],
"domain_uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"], "domain_uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"],
"description": ["Demonstration of a remote domain's info being created for uuid generation"], "description": ["Demonstration of a remote domain's info being created for uuid generation in test_modify_domain"],
"domain_name": ["example.net.au"], "domain_name": ["example.net.au"],
"domain_display_name": ["example.net.au"],
"domain_ssid": ["Example_Wifi"], "domain_ssid": ["Example_Wifi"],
"fernet_private_key_str": ["ABCD"], "fernet_private_key_str": ["ABCD"],
"es256_private_key_der" : ["MTIz"] "es256_private_key_der" : ["MTIz"]
@ -367,8 +376,9 @@ mod tests {
"name": ["domain_example.net.au"], "name": ["domain_example.net.au"],
"uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"], "uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"],
"domain_uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"], "domain_uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"],
"description": ["Demonstration of a remote domain's info being created for uuid generation"], "description": ["Demonstration of a remote domain's info being created for uuid generation in test_ext_create_domain"],
"domain_name": ["example.net.au"], "domain_name": ["example.net.au"],
"domain_display_name": ["example.net.au"],
"domain_ssid": ["Example_Wifi"], "domain_ssid": ["Example_Wifi"],
"fernet_private_key_str": ["ABCD"], "fernet_private_key_str": ["ABCD"],
"es256_private_key_der" : ["MTIz"] "es256_private_key_der" : ["MTIz"]
@ -397,8 +407,9 @@ mod tests {
"name": ["domain_example.net.au"], "name": ["domain_example.net.au"],
"uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"], "uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"],
"domain_uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"], "domain_uuid": ["96fd1112-28bc-48ae-9dda-5acb4719aaba"],
"description": ["Demonstration of a remote domain's info being created for uuid generation"], "description": ["Demonstration of a remote domain's info being created for uuid generation in test_delete_domain"],
"domain_name": ["example.net.au"], "domain_name": ["example.net.au"],
"domain_display_name": ["example.net.au"],
"domain_ssid": ["Example_Wifi"], "domain_ssid": ["Example_Wifi"],
"fernet_private_key_str": ["ABCD"], "fernet_private_key_str": ["ABCD"],
"es256_private_key_der" : ["MTIz"] "es256_private_key_der" : ["MTIz"]
View file
@ -1934,7 +1934,9 @@ mod tests {
assert_eq!( assert_eq!(
e_attr_invalid_may.validate(&schema), e_attr_invalid_may.validate(&schema),
Err(SchemaError::InvalidAttribute("zzzzz".to_string())) Err(SchemaError::AttributeNotValidForClass(
"zzzzz".to_string()
))
); );
let e_attr_invalid_syn: Entry<EntryInvalid, EntryNew> = unsafe { let e_attr_invalid_syn: Entry<EntryInvalid, EntryNew> = unsafe {
View file
@ -58,6 +58,7 @@ lazy_static! {
struct DomainInfo { struct DomainInfo {
d_uuid: Uuid, d_uuid: Uuid,
d_name: String, d_name: String,
d_display: String,
} }
#[derive(Clone)] #[derive(Clone)]
@ -143,6 +144,8 @@ pub trait QueryServerTransaction<'a> {
fn get_domain_name(&self) -> &str; fn get_domain_name(&self) -> &str;
fn get_domain_display_name(&self) -> &str;
#[allow(clippy::mut_from_ref)] #[allow(clippy::mut_from_ref)]
fn get_resolve_filter_cache( fn get_resolve_filter_cache(
&self, &self,
@ -306,7 +309,7 @@ pub trait QueryServerTransaction<'a> {
.map(|v| v.unwrap_or_else(|| format!("uuid={}", uuid.as_hyphenated()))) .map(|v| v.unwrap_or_else(|| format!("uuid={}", uuid.as_hyphenated())))
} }
// From internal, generate an exists event and dispatch /// From internal, generate an "exists" event and dispatch
fn internal_exists(&self, filter: Filter<FilterInvalid>) -> Result<bool, OperationError> { fn internal_exists(&self, filter: Filter<FilterInvalid>) -> Result<bool, OperationError> {
spanned!("server::internal_exists", { spanned!("server::internal_exists", {
// Check the filter // Check the filter
@ -345,7 +348,7 @@ pub trait QueryServerTransaction<'a> {
}) })
} }
// this applys ACP to filter result entries. /// Applies ACP to filter result entries.
fn impersonate_search_ext_valid( fn impersonate_search_ext_valid(
&self, &self,
f_valid: Filter<FilterValid>, f_valid: Filter<FilterValid>,
@ -389,8 +392,8 @@ pub trait QueryServerTransaction<'a> {
}) })
} }
// Get a single entry by it's UUID. This is heavily relied on for internal /// Get a single entry by its UUID. This is used heavily for internal
// server operations, especially in login and acp checks for acp. /// server operations, especially in login and ACP checks.
fn internal_search_uuid( fn internal_search_uuid(
&self, &self,
uuid: &Uuid, uuid: &Uuid,
@ -714,6 +717,7 @@ pub trait QueryServerTransaction<'a> {
} }
} }
/// Pull the domain name from the database
fn get_db_domain_name(&self) -> Result<String, OperationError> { fn get_db_domain_name(&self) -> Result<String, OperationError> {
self.internal_search_uuid(&UUID_DOMAIN_INFO) self.internal_search_uuid(&UUID_DOMAIN_INFO)
.and_then(|e| { .and_then(|e| {
@ -822,6 +826,10 @@ impl<'a> QueryServerTransaction<'a> for QueryServerReadTransaction<'a> {
fn get_domain_name(&self) -> &str { fn get_domain_name(&self) -> &str {
&self.d_info.d_name &self.d_info.d_name
} }
fn get_domain_display_name(&self) -> &str {
&self.d_info.d_display
}
} }
impl<'a> QueryServerReadTransaction<'a> { impl<'a> QueryServerReadTransaction<'a> {
@ -900,13 +908,18 @@ impl<'a> QueryServerTransaction<'a> for QueryServerWriteTransaction<'a> {
self.d_info.d_uuid self.d_info.d_uuid
} }
/// Gets the in-memory domain_name element
fn get_domain_name(&self) -> &str { fn get_domain_name(&self) -> &str {
&self.d_info.d_name &self.d_info.d_name
} }
fn get_domain_display_name(&self) -> &str {
&self.d_info.d_display
}
} }
impl QueryServer { impl QueryServer {
pub fn new(be: Backend, schema: Schema, d_name: String) -> Self { pub fn new(be: Backend, schema: Schema, domain_name: String) -> Self {
let (s_uuid, d_uuid) = { let (s_uuid, d_uuid) = {
let wr = be.write(); let wr = be.write();
let res = (wr.get_db_s_uuid(), wr.get_db_d_uuid()); let res = (wr.get_db_s_uuid(), wr.get_db_d_uuid());
@ -918,11 +931,17 @@ impl QueryServer {
let pool_size = be.get_pool_size(); let pool_size = be.get_pool_size();
info!("Server UUID -> {:?}", s_uuid); debug!("Server UUID -> {:?}", s_uuid);
info!("Domain UUID -> {:?}", d_uuid); debug!("Domain UUID -> {:?}", d_uuid);
info!("Domain Name -> {:?}", d_name); debug!("Domain Name -> {:?}", domain_name);
let d_info = Arc::new(CowCell::new(DomainInfo { d_uuid, d_name })); let d_info = Arc::new(CowCell::new(DomainInfo {
d_uuid,
d_name: domain_name.clone(),
// we set the domain_display_name to the configuration file's domain_name
// here because the database is not started, so we cannot pull it from there.
d_display: domain_name,
}));
// log_event!(log, "Starting query worker ..."); // log_event!(log, "Starting query worker ...");
QueryServer { QueryServer {
@ -938,7 +957,7 @@ impl QueryServer {
.set_size(RESOLVE_FILTER_CACHE_MAX, RESOLVE_FILTER_CACHE_LOCAL) .set_size(RESOLVE_FILTER_CACHE_MAX, RESOLVE_FILTER_CACHE_LOCAL)
.set_reader_quiesce(true) .set_reader_quiesce(true)
.build() .build()
.expect("Failer to build resolve_filter_cache"), .expect("Failed to build resolve_filter_cache"),
), ),
} }
} }
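DomainInfo sits in a CowCell, so in-flight readers keep their snapshot of the old display name until a write transaction commits. A minimal sketch of that copy-on-write behaviour, assuming the concread crate's CowCell API:

use concread::cowcell::CowCell;

fn main() {
    let cell = CowCell::new("Kanidm idm.example.com".to_string());
    let reader = cell.read();            // snapshot taken here
    let mut writer = cell.write();
    *writer = "Example Org".to_string(); // staged, not yet visible
    assert_eq!(*reader, "Kanidm idm.example.com");
    writer.commit();                     // publish to future readers
    assert_eq!(*cell.read(), "Example Org");
}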
@ -1180,6 +1199,8 @@ impl<'a> QueryServerWriteTransaction<'a> {
// NOTE: This is how you map from Vec<Result<T>> to Result<Vec<T>> // NOTE: This is how you map from Vec<Result<T>> to Result<Vec<T>>
// remember that you only get the first error and the iter terminates. // remember that you only get the first error and the iter terminates.
// eprintln!("{:?}", candidates);
// Now, normalise AND validate! // Now, normalise AND validate!
let res: Result<Vec<Entry<EntrySealed, EntryNew>>, OperationError> = candidates let res: Result<Vec<Entry<EntrySealed, EntryNew>>, OperationError> = candidates
@ -1187,7 +1208,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
.map(|e| { .map(|e| {
e.validate(&self.schema) e.validate(&self.schema)
.map_err(|e| { .map_err(|e| {
admin_error!("Schema Violation -> {:?}", e); admin_error!("Schema Violation in create validate {:?}", e);
OperationError::SchemaViolation(e) OperationError::SchemaViolation(e)
}) })
.map(|e| { .map(|e| {
@ -1333,7 +1354,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
e.into_recycled() e.into_recycled()
.validate(&self.schema) .validate(&self.schema)
.map_err(|e| { .map_err(|e| {
admin_error!(err = ?e, "Schema Violation"); admin_error!(err = ?e, "Schema Violation in delete validate");
OperationError::SchemaViolation(e) OperationError::SchemaViolation(e)
}) })
// seal if it worked. // seal if it worked.
@ -1464,7 +1485,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
e.to_tombstone(self.cid.clone()) e.to_tombstone(self.cid.clone())
.validate(&self.schema) .validate(&self.schema)
.map_err(|e| { .map_err(|e| {
admin_error!("Schema Violationi {:?}", e); admin_error!("Schema Violation in purge_recycled validate: {:?}", e);
OperationError::SchemaViolation(e) OperationError::SchemaViolation(e)
}) })
// seal if it worked. // seal if it worked.
@ -1504,7 +1525,10 @@ impl<'a> QueryServerWriteTransaction<'a> {
)]); )]);
let m_valid = modlist.validate(self.get_schema()).map_err(|e| { let m_valid = modlist.validate(self.get_schema()).map_err(|e| {
admin_error!("revive recycled modlist Schema Violation {:?}", e); admin_error!(
"Schema Violation in revive recycled modlist validate: {:?}",
e
);
OperationError::SchemaViolation(e) OperationError::SchemaViolation(e)
})?; })?;
@ -1647,7 +1671,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
// Pre mod plugins // Pre mod plugins
// We should probably supply the pre-post cands here. // We should probably supply the pre-post cands here.
Plugins::run_pre_modify(self, &mut candidates, me).map_err(|e| { Plugins::run_pre_modify(self, &mut candidates, me).map_err(|e| {
admin_error!("Modify operation failed (plugin), {:?}", e); admin_error!("Pre-Modify operation failed (plugin), {:?}", e);
e e
})?; })?;
@ -1663,7 +1687,10 @@ impl<'a> QueryServerWriteTransaction<'a> {
.map(|e| { .map(|e| {
e.validate(&self.schema) e.validate(&self.schema)
.map_err(|e| { .map_err(|e| {
admin_error!("Schema Violation {:?}", e); admin_error!(
"Schema Violation in validation of modify_pre_apply {:?}",
e
);
OperationError::SchemaViolation(e) OperationError::SchemaViolation(e)
}) })
.map(Entry::seal) .map(Entry::seal)
@ -1701,7 +1728,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
// memberOf actually wants the pre cand list and the norm_cand list to see what // memberOf actually wants the pre cand list and the norm_cand list to see what
// changed. Could be optimised, but this is correct still ... // changed. Could be optimised, but this is correct still ...
Plugins::run_post_modify(self, &pre_candidates, &norm_cand, me).map_err(|e| { Plugins::run_post_modify(self, &pre_candidates, &norm_cand, me).map_err(|e| {
admin_error!("Modify operation failed (plugin), {:?}", e); admin_error!("Post-Modify operation failed (plugin), {:?}", e);
e e
})?; })?;
@ -1834,7 +1861,10 @@ impl<'a> QueryServerWriteTransaction<'a> {
.map(|e| { .map(|e| {
e.validate(&self.schema) e.validate(&self.schema)
.map_err(|e| { .map_err(|e| {
admin_error!("Schema Violation {:?}", e); admin_error!(
"Schema Violation in internal_batch_modify validate: {:?}",
e
);
OperationError::SchemaViolation(e) OperationError::SchemaViolation(e)
}) })
.map(Entry::seal) .map(Entry::seal)
@ -2271,6 +2301,8 @@ impl<'a> QueryServerWriteTransaction<'a> {
// Load in all the "core" schema, that we already have in "memory". // Load in all the "core" schema, that we already have in "memory".
let entries = self.schema.to_entries(); let entries = self.schema.to_entries();
// admin_debug!("Dumping schemas: {:?}", entries);
// internal_migrate_or_create. // internal_migrate_or_create.
let r: Result<_, _> = entries.into_iter().try_for_each(|e| { let r: Result<_, _> = entries.into_iter().try_for_each(|e| {
trace!(?e, "init schema entry"); trace!(?e, "init schema entry");
@ -2297,6 +2329,7 @@ impl<'a> QueryServerWriteTransaction<'a> {
JSON_SCHEMA_ATTR_PRIMARY_CREDENTIAL, JSON_SCHEMA_ATTR_PRIMARY_CREDENTIAL,
JSON_SCHEMA_ATTR_RADIUS_SECRET, JSON_SCHEMA_ATTR_RADIUS_SECRET,
JSON_SCHEMA_ATTR_DOMAIN_NAME, JSON_SCHEMA_ATTR_DOMAIN_NAME,
JSON_SCHEMA_ATTR_DOMAIN_DISPLAY_NAME,
JSON_SCHEMA_ATTR_DOMAIN_UUID, JSON_SCHEMA_ATTR_DOMAIN_UUID,
JSON_SCHEMA_ATTR_DOMAIN_SSID, JSON_SCHEMA_ATTR_DOMAIN_SSID,
JSON_SCHEMA_ATTR_DOMAIN_TOKEN_KEY, JSON_SCHEMA_ATTR_DOMAIN_TOKEN_KEY,
@ -2666,14 +2699,29 @@ impl<'a> QueryServerWriteTransaction<'a> {
}) })
} }
fn get_db_domain_display_name(&self) -> Result<String, OperationError> {
self.internal_search_uuid(&UUID_DOMAIN_INFO)
.and_then(|e| {
trace!(?e);
e.get_ava_single_utf8("domain_display_name")
.map(str::to_string)
.ok_or(OperationError::InvalidEntryState)
})
.map_err(|e| {
admin_error!(?e, "Error getting domain display name");
e
})
}
/// Pulls the domain name from the database and updates the DomainInfo data in memory
fn reload_domain_info(&mut self) -> Result<(), OperationError> { fn reload_domain_info(&mut self) -> Result<(), OperationError> {
spanned!("server::reload_domain_info", { spanned!("server::reload_domain_info", {
let domain_name = self.get_db_domain_name()?; let domain_name = self.get_db_domain_name()?;
let display_name = self.get_db_domain_display_name()?;
let mut_d_info = self.d_info.get_mut(); let mut_d_info = self.d_info.get_mut();
if mut_d_info.d_name != domain_name { if mut_d_info.d_name != domain_name {
admin_warn!( admin_warn!(
"Using database configured domain name {} - was {}", "Using domain name from the database {} - was {} in memory",
domain_name, domain_name,
mut_d_info.d_name, mut_d_info.d_name,
); );
@ -2682,10 +2730,24 @@ impl<'a> QueryServerWriteTransaction<'a> {
); );
mut_d_info.d_name = domain_name; mut_d_info.d_name = domain_name;
} }
mut_d_info.d_display = display_name;
Ok(()) Ok(())
}) })
} }
/// Initiate a domain display name change process. This isn't particularly scary
/// because it's just a wibbly human-facing thing, not used for secure
/// activities (yet).
pub fn set_domain_display_name(&self, new_domain_name: &str) -> Result<(), OperationError> {
let modl = ModifyList::new_purge_and_set(
"domain_display_name",
Value::new_utf8(new_domain_name.to_string()),
);
let udi = PVUUID_DOMAIN_INFO.clone();
let filt = filter_all!(f_eq("uuid", udi));
self.internal_modify(&filt, &modl)
}
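The modify above is purge-and-set: every existing value of the attribute is dropped and exactly one new value is written. A toy in-memory model of those semantics:

use std::collections::HashMap;

fn purge_and_set(entry: &mut HashMap<String, Vec<String>>, attr: &str, value: String) {
    entry.insert(attr.to_string(), vec![value]); // old values purged, one new value set
}

fn main() {
    let mut domain_info = HashMap::new();
    domain_info.insert("domain_display_name".to_string(),
                       vec!["Kanidm idm.example.com".to_string()]);
    purge_and_set(&mut domain_info, "domain_display_name", "Example Org".to_string());
    assert_eq!(domain_info["domain_display_name"], vec!["Example Org".to_string()]);
}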
/// Initiate a domain rename process. This is generally an internal function but it's /// Initiate a domain rename process. This is generally an internal function but it's
/// exposed to the cli for admins to be able to initiate the process. /// exposed to the cli for admins to be able to initiate the process.
pub fn domain_rename(&self) -> Result<(), OperationError> { pub fn domain_rename(&self) -> Result<(), OperationError> {
View file
@ -115,7 +115,8 @@ impl<State: Clone + Send + Sync + 'static> Middleware<State> for TreeMiddleware
TreeIo::Stdout => "console stdout", TreeIo::Stdout => "console stdout",
TreeIo::Stderr => "console stderr", TreeIo::Stderr => "console stderr",
TreeIo::File(ref path) => path.to_str().unwrap_or_else(|| { TreeIo::File(ref path) => path.to_str().unwrap_or_else(|| {
panic!("File path isn't UTF-8, cannot write to file: {:#?}", path) eprintln!("File path isn't UTF-8, cannot write logs to: {:#?}", path);
std::process::exit(1);
// warn!( // warn!(
// "File path isn't UTF-8, logging to stderr instead: {:#?}", // "File path isn't UTF-8, logging to stderr instead: {:#?}",
// path // path
View file
@ -730,7 +730,9 @@ impl PartialValue {
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub enum Value { pub enum Value {
Utf8(String), Utf8(String),
/// Case insensitive string
Iutf8(String), Iutf8(String),
/// Case insensitive Name for a thing?
Iname(String), Iname(String),
Uuid(Uuid), Uuid(Uuid),
Bool(bool), Bool(bool),
View file
@ -0,0 +1,106 @@
//! Middleware for the tide web server
// TODO: decide if this is still needed
// #[derive(Default)]
// /// Injects the domain_display_name where it needs to
// pub struct KanidmDisplayNameMiddleware {
// domain_display_name: String,
// }
// // // TODO: finish this for #860
// // #[async_trait::async_trait]
// // impl<State: Clone + Send + Sync + 'static> tide::Middleware<State> for KanidmDisplayNameMiddleware {
// // async fn handle(
// // &self,
// // request: tide::Request<State>,
// // next: tide::Next<'_, State>,
// // ) -> tide::Result {
// // let mut response = next.run(request).await;
// // // grab the body we're intending to return at this point
// // let body_str = response.take_body().into_string().await?;
// // // update it with the hash
// // // TODO: #860 make this a const so we can change it and not typo it later
// // response.set_body(body_str.replace(
// // "===DOMAIN_DISPLAY_NAME===",
// // self.domain_display_name.as_str(),
// // ));
// // Ok(response)
// // }
// // }
// impl KanidmDisplayNameMiddleware {
// /// Pulls the domain_display_name from the qs on web server start, so we can
// /// set it in pages
// pub fn new(domain_display_name: String) -> Self {
// KanidmDisplayNameMiddleware {
// // TODO: #860 work out how to get this out :D
// domain_display_name: domain_display_name,
// }
// }
// }
#[derive(Default)]
/// This tide middleware adds headers like Content-Security-Policy
/// and similar families. If it keeps adding more things then
/// probably rename the middleware :)
pub struct UIContentSecurityPolicyResponseMiddleware {
// The sha384 hash of /pkg/wasmloader.js
pub integrity_wasmloader: String,
}
impl UIContentSecurityPolicyResponseMiddleware {
pub fn new(integrity_wasmloader: String) -> Self {
Self { integrity_wasmloader }
}
}
#[async_trait::async_trait]
impl<State: Clone + Send + Sync + 'static> tide::Middleware<State>
for UIContentSecurityPolicyResponseMiddleware
{
// This updates the UI body with the integrity hash value for the wasmloader.js file, and adds content-security-policy headers.
async fn handle(
&self,
request: tide::Request<State>,
next: tide::Next<'_, State>,
) -> tide::Result {
let mut response = next.run(request).await;
// grab the body we're intending to return at this point
let body_str = response.take_body().into_string().await?;
// update it with the hash
response.set_body(body_str.replace("==WASMHASH==", self.integrity_wasmloader.as_str()));
response.insert_header(
/* content-security-policy headers tell the browser what to trust
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
In this case we're only trusting the same server that the page is
loaded from, and adding a hash of wasmloader.js, which is the main script
we should be loading, and should be really secure about that!
*/
// TODO: consider scraping the other js files that wasm-pack builds and including them too
"content-security-policy",
vec![
"default-src 'self'",
// we need unsafe-eval because of WASM things
format!(
"script-src 'self' 'sha384-{}' 'unsafe-eval'",
self.integrity_wasmloader.as_str()
)
.as_str(),
"form-action https: 'self'", // to allow for OAuth posts
// we are not currently using workers so it can be blocked
"worker-src 'none'",
// TODO: Content-Security-Policy-Report-Only https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy-Report-Only
// "report-to 'none'", // unsupported by a lot of things still, but mozilla's saying report-uri is deprecated?
"report-uri 'none'",
"base-uri 'self'",
]
.join(";"),
);
Ok(response)
}
}
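For context, this is roughly how the server wires the middleware in - a hedged sketch assuming the tide and async-std crates (with async-std's "attributes" feature); the real hash comes from generate_integrity_hash over wasmloader.js:

#[async_std::main]
async fn main() -> tide::Result<()> {
    let mut app = tide::new();
    app.with(UIContentSecurityPolicyResponseMiddleware::new(
        "0123abcd".to_string(), // placeholder sha384 digest, not a real hash
    ));
    // any page containing the token gets the integrity value substituted in
    app.at("/").get(|_| async { Ok("<html>==WASMHASH==</html>") });
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}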
View file
@ -1,6 +1,8 @@
mod middleware;
mod oauth2; mod oauth2;
mod v1; mod v1;
use self::middleware::*;
use self::oauth2::*; use self::oauth2::*;
use self::v1::*; use self::v1::*;
@ -182,17 +184,23 @@ pub fn to_tide_response<T: Serialize>(
} }
// Handle the various end points we need to expose // Handle the various end points we need to expose
async fn index_view(_req: tide::Request<AppState>) -> tide::Result { async fn index_view(req: tide::Request<AppState>) -> tide::Result {
let (eventid, hvalue) = req.new_eventid();
let domain_display_name = req.state().qe_r_ref.get_domain_display_name(eventid).await;
let mut res = tide::Response::new(200); let mut res = tide::Response::new(200);
res.insert_header("X-KANIDM-OPID", hvalue);
res.set_content_type("text/html;charset=utf-8"); res.set_content_type("text/html;charset=utf-8");
res.set_body(r#"
res.set_body(format!(r#"
<!DOCTYPE html> <!DOCTYPE html>
<html lang="en"> <html lang="en">
<head> <head>
<meta charset="utf-8"/> <meta charset="utf-8"/>
<meta name="viewport" content="width=device-width"> <meta name="viewport" content="width=device-width">
<title>Kanidm</title> <title>{}</title>
<link rel="stylesheet" href="/pkg/external/bootstrap.min.css" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC"/> <link rel="stylesheet" href="/pkg/external/bootstrap.min.css" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC"/>
<link rel="stylesheet" href="/pkg/style.css"/> <link rel="stylesheet" href="/pkg/style.css"/>
<script src="/pkg/external/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM"></script> <script src="/pkg/external/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM"></script>
@ -204,7 +212,7 @@ async fn index_view(_req: tide::Request<AppState>) -> tide::Result {
<body> <body>
</body> </body>
</html> </html>
"#, "#, domain_display_name.as_str())
); );
Ok(res) Ok(res)
@ -244,6 +252,7 @@ impl<State: Clone + Send + Sync + 'static> tide::Middleware<State> for Cacheable
} }
#[derive(Default)] #[derive(Default)]
/// Sets Cache-Control headers on static content endpoints
struct StaticContentMiddleware; struct StaticContentMiddleware;
#[async_trait::async_trait] #[async_trait::async_trait]
@ -260,6 +269,12 @@ impl<State: Clone + Send + Sync + 'static> tide::Middleware<State> for StaticCon
} }
#[derive(Default)] #[derive(Default)]
/// Adds the following headers to responses
/// - x-frame-options
/// - x-content-type-options
/// - cross-origin-resource-policy
/// - cross-origin-embedder-policy
/// - cross-origin-opener-policy
struct StrictResponseMiddleware; struct StrictResponseMiddleware;
#[async_trait::async_trait] #[async_trait::async_trait]
@ -270,79 +285,14 @@ impl<State: Clone + Send + Sync + 'static> tide::Middleware<State> for StrictRes
next: tide::Next<'_, State>, next: tide::Next<'_, State>,
) -> tide::Result { ) -> tide::Result {
let mut response = next.run(request).await; let mut response = next.run(request).await;
response.insert_header("x-frame-options", "deny");
response.insert_header("x-content-type-options", "nosniff");
response.insert_header("cross-origin-resource-policy", "same-origin");
response.insert_header("cross-origin-embedder-policy", "require-corp"); response.insert_header("cross-origin-embedder-policy", "require-corp");
response.insert_header("cross-origin-opener-policy", "same-origin"); response.insert_header("cross-origin-opener-policy", "same-origin");
response.insert_header("cross-origin-resource-policy", "same-origin");
response.insert_header("x-content-type-options", "nosniff");
response.insert_header("x-frame-options", "deny");
Ok(response) Ok(response)
} }
} }
-#[derive(Default)]
-struct UIContentSecurityPolicyResponseMiddleware {
-    // The sha384 hash of /pkg/wasmloader.js
-    pub integrity_wasmloader: String,
-}
-impl UIContentSecurityPolicyResponseMiddleware {
-    fn new(integrity_wasmloader: String) -> Self {
-        return Self {
-            integrity_wasmloader,
-        };
-    }
-}
-#[async_trait::async_trait]
-impl<State: Clone + Send + Sync + 'static> tide::Middleware<State>
-    for UIContentSecurityPolicyResponseMiddleware
-{
-    async fn handle(
-        &self,
-        request: tide::Request<State>,
-        next: tide::Next<'_, State>,
-    ) -> tide::Result {
-        // This updates the UI body with the integrity hash value for the wasmloader.js file, and adds content-security-policy headers.
-        let mut response = next.run(request).await;
-        // grab the body we're intending to return at this point
-        let body_str = response.take_body().into_string().await?;
-        // update it with the hash
-        response.set_body(body_str.replace("==WASMHASH==", self.integrity_wasmloader.as_str()));
-        response.insert_header(
-            /* content-security-policy headers tell the browser what to trust
-               https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
-               In this case we're only trusting the same server that the page is
-               loaded from, and adding a hash of wasmloader.js, which is the main script
-               we should be loading, and should be really secure about that!
-            */
-            // TODO: consider scraping the other js files that wasm-pack builds and including them too
-            "content-security-policy",
-            vec![
-                "default-src 'self'",
-                // we need unsafe-eval because of WASM things
-                format!(
-                    "script-src 'self' 'sha384-{}' 'unsafe-eval'",
-                    self.integrity_wasmloader.as_str()
-                )
-                .as_str(),
-                "form-action https: 'self'", // to allow for OAuth posts
-                // we are not currently using workers so it can be blocked
-                "worker-src 'none'",
-                // TODO: Content-Security-Policy-Report-Only https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy-Report-Only
-                // "report-to 'none'", // unsupported by a lot of things still, but mozilla's saying report-uri is deprecated?
-                "report-uri 'none'",
-                "base-uri 'self'",
-            ]
-            .join(";"),
-        );
-        Ok(response)
-    }
-}
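
The server setup further down still calls generate_integrity_hash, whose body is not part of this diff. A minimal sketch of what such a helper could look like, assuming the sha2 crate and the base64 0.13-style encode API; the real implementation in the tree may differ:

    use sha2::{Digest, Sha384};

    // Hypothetical sketch: hash a file on disk and emit the base64 digest
    // used in integrity="sha384-..." attributes and the CSP script-src hash.
    fn generate_integrity_hash(filename: String) -> Result<String, String> {
        let contents = std::fs::read(&filename)
            .map_err(|e| format!("failed to read {}: {:?}", filename, e))?;
        Ok(base64::encode(Sha384::digest(&contents)))
    }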
struct StrictRequestMiddleware;
impl Default for StrictRequestMiddleware {
@@ -488,10 +438,12 @@ pub fn create_https_server(
    let mut static_tserver = tserver.at("");
    static_tserver.with(StaticContentMiddleware::default());
    static_tserver.with(UIContentSecurityPolicyResponseMiddleware::new(
        generate_integrity_hash(env!("KANIDM_WEB_UI_PKG_PATH").to_owned() + "/wasmloader.js")
            .unwrap(),
    ));
    // The compression middleware needs to be the last one added before routes
    static_tserver.with(compress_middleware.clone());

View file

@@ -886,11 +886,10 @@ pub async fn auth(mut req: tide::Request<AppState>) -> tide::Result {
    // out of the req cookie.
    let (eventid, hvalue) = req.new_eventid();
-   let maybe_sessionid = req.get_current_auth_session_id();
-   debug!("🍿 {:?}", maybe_sessionid);
+   let maybe_sessionid: Option<Uuid> = req.get_current_auth_session_id();
    let obj: AuthRequest = req.body_json().await.map_err(|e| {
-       debug!("wat? {:?}", e);
+       debug!("Failed get body JSON? {:?}", e);
        e
    })?;

View file

@@ -428,6 +428,11 @@ pub fn domain_rename_core(config: &Configuration) {
            admin_info!("Domain name not changing, stopping.");
            return;
        }
+       admin_debug!(
+           "Domain name is changing from {:?} to {:?}",
+           old_domain_name,
+           new_domain_name
+       );
    }
    Err(e) => {
        admin_error!("Failed to query domain name, quitting! -> {:?}", e);

View file

@@ -530,6 +530,7 @@ async fn test_default_entries_rbac_admins_schema_entries() {
        "primary_credential",
        "radius_secret",
        "domain_name",
+       "domain_display_name",
        "domain_uuid",
        "domain_ssid",
        "gidnumber",

View file

@@ -591,6 +591,20 @@ async fn test_server_rest_domain_lifecycle() {
    // check get and get the ssid and domain info
    let nssid = rsclient.idm_domain_get_ssid().await.unwrap();
    assert!(nssid == "new_ssid");
+   // Change the domain display name
+   rsclient
+       .idm_domain_set_display_name("Super Cool Crabz")
+       .await
+       .unwrap();
+   let dlocal = rsclient.idm_domain_get().await.unwrap();
+   assert!(
+       dlocal
+           .attrs
+           .get("domain_display_name")
+           .and_then(|v| v.get(0))
+           == Some(&"Super Cool Crabz".to_string())
+   );
}
#[tokio::test]

View file

@@ -40,6 +40,8 @@ pub enum Msg {
        emsg: String,
        kopid: Option<String>,
    },
+   // TODO: use this? :)
+   #[allow(dead_code)]
    Ignore,
}
@@ -68,6 +70,8 @@ enum State {
pub struct CredentialResetApp {
    state: State,
+   // TODO: I'm sure past-William had a plan for this 🚌
+   #[allow(dead_code)]
    eventbus: Box<dyn Bridge<EventBus>>,
}

View file

@@ -764,6 +764,8 @@ impl Component for LoginApp {
    // May need to set these classes?
    // <body class="html-body form-body">
+   // TODO: add the domain_display_name here
    html! {
        <main class="form-signin">
            <div class="container">
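
The hunk ends mid-template, but the TODO above marks where the display name would land. A hypothetical yew sketch (0.19-style Component API; the struct shape and field name are assumptions, not code from this commit) of rendering it in the login view:

    use yew::prelude::*;

    // Hypothetical: a login component that renders the configured
    // domain display name instead of a hard-coded brand string.
    struct LoginApp {
        domain_display_name: String,
    }

    impl Component for LoginApp {
        type Message = ();
        type Properties = ();

        fn create(_ctx: &Context<Self>) -> Self {
            Self { domain_display_name: "Kanidm".to_string() }
        }

        fn view(&self, _ctx: &Context<Self>) -> Html {
            html! {
                <main class="form-signin">
                    <div class="container">
                        <h2>{ self.domain_display_name.clone() }</h2>
                    </div>
                </main>
            }
        }
    }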