Add auditing capability via CADF-based notification events

This proposal adds auditing capability to Barbican using the CADF (Cloud Auditing Data Federation) specification. The idea is to identify auditable attributes, construct audit records per the CADF event specification, and deliver them via the OpenStack notification framework to interested services and consumers.

The DMTF CADF standard provides auditing capabilities for compliance with security, operational, and business process requirements. See the References section at the end for more details on the CADF specification.

Problem Description

Many enterprises face challenges when moving to an OpenStack cloud, and most of these challenges have nothing to do with OpenStack service capabilities. Instead, the challenges involve auditing and monitoring workloads and data in accordance with strict corporate, industry, or regional policies and compliance requirements (FIPS 140-2, PCI-DSS, SOX, ISO 27017, etc.). Barbican, being a security component in a cloud, has similar auditing requirements.

In addition, Barbican uses soft deletes, where resources are kept in the database even after they are marked deleted and are no longer used in live systems. The intent is to retain an audit trail of resource state, but this approach provides little audit value beyond recording when a resource was marked deleted.

Proposed Change

To add auditing capability to Barbican, the proposal is to leverage an existing standard instead of creating our own semantics and audit data model. In the OpenStack ecosystem, a number of services (e.g. Ceilometer, Keystone) already use the CADF event data model to report activities on their resources. Using the CADF standard for auditing allows consistent reporting across services and lets customers use common audit tools and processes for their audit data.

The CADF model can answer critical questions about activity or events happening on REST resources in a normative manner using CADF's seven W's of auditing (What, When, Who, On What, Where, From Where, To Where). See the Auditing Event Details section below.

The overhead of adding audit data interaction can be minimized by publishing audit events as OpenStack notifications, where the event publisher does not have to wait for a response or acknowledgment. Audit event delivery is thus decoupled while still benefiting from the messaging infrastructure's durability and delivery guarantees.

Auditing Event Details

Audit event data is constructed from who initiated the request, the request outcome, the resource operated on, the resource identifier, and event type information. The following is a sample of the seven W's of auditing values for Barbican REST resources.

W Component     CADF Properties              Possible Values
-----------     ---------------              ---------------
What            event.action                 POST/PUT/DELETE v1/secrets
                event.type                   activity/monitor/control
                event.outcome                success/failure/pending

When            event.eventTime              {event generation timestamp}

Who             initiator.id                 {token user id}
                initiator.type               service/security/account/user
                initiator.project_id         {scoped token project id}

From Where      initiator.host.address       environment REMOTE_ADDR
                initiator.host.agent         environment HTTP_USER_AGENT

On What         target.id                    resource id (secret/container)
                target.type                  data/keymgr/secret

Where           observer.id                  target
                observer.type                service/keymgr

To Where        (not captured)
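To make the seven W's mapping concrete, the sketch below assembles a CADF-style event dict for a hypothetical "POST v1/secrets" request. This is illustrative only, not pycadf itself; the helper name and field layout are assumptions based on the table above.

```python
# Illustrative sketch (NOT the pycadf API): building the seven W's of a
# CADF-style event for a hypothetical "POST v1/secrets" request.
import datetime
import uuid


def build_audit_event(action, outcome, user_id, project_id,
                      remote_addr, user_agent, target_id):
    return {
        # What
        'id': str(uuid.uuid4()),
        'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event',
        'eventType': 'activity',
        'action': action,                      # e.g. "POST v1/secrets"
        'outcome': outcome,                    # success/failure/pending
        # When
        'eventTime': datetime.datetime.utcnow().isoformat() + 'Z',
        # Who / From Where
        'initiator': {
            'id': user_id,                     # from the Keystone token
            'typeURI': 'service/security/account/user',
            'project_id': project_id,          # scoped token project id
            'host': {'address': remote_addr, 'agent': user_agent},
        },
        # On What
        'target': {
            'id': target_id,                   # secret/container id
            'typeURI': 'data/keymgr/secret',
        },
        # Where
        'observer': {'id': 'target', 'typeURI': 'service/keymgr'},
    }


event = build_audit_event('POST v1/secrets', 'success',
                          'user-123', 'proj-456',
                          '10.0.0.5', 'curl/7.35.0', 'secret-789')
```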

CADF extension properties (listed below) can be used to capture domain-specific data available as part of the REST request. Currently there is no plan to add these properties, as doing so may require per-resource awareness in the common decorator logic.

  • event.attachments (structured or unstructured data)

  • event.tags (domain-specific identifiers/classifications)

pyCADF changes

Creating audit events requires the oslo pyCADF library, which needs to be updated to reflect Barbican-specific resource types. Per pyCADF contributors' guidance, existing types should be reused, or only a very few generic new types created. The following Barbican-specific resource types are going to be added to the pyCADF resource taxonomy.

  1. data/keymgr/secret

  2. data/keymgr/container

  3. data/keymgr/order

  4. data/keymgr - a general placeholder for all other remaining resources.

  5. service/keymgr

The above types are going to be added to the pyCADF taxonomy as mentioned in the link below.
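As a sketch of the proposed taxonomy additions, the constants below mirror pyCADF's naming style; the constant and helper names are assumptions for illustration, not the merged pyCADF code.

```python
# Proposed Barbican resource types for the pyCADF taxonomy (illustrative;
# actual constant names in pycadf.cadftaxonomy may differ).
SECURITY_KEYMGR = 'data/keymgr'
SECURITY_KEYMGR_SECRET = 'data/keymgr/secret'
SECURITY_KEYMGR_CONTAINER = 'data/keymgr/container'
SECURITY_KEYMGR_ORDER = 'data/keymgr/order'
SERVICE_KEYMGR = 'service/keymgr'


def resource_type_for(resource_name):
    """Map a Barbican REST resource name to its CADF taxonomy type,
    falling back to the generic data/keymgr placeholder."""
    return {
        'secrets': SECURITY_KEYMGR_SECRET,
        'containers': SECURITY_KEYMGR_CONTAINER,
        'orders': SECURITY_KEYMGR_ORDER,
    }.get(resource_name, SECURITY_KEYMGR)
```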

Audit Event Generation

For API requests, audit events are going to be generated using the audit middleware approach. This middleware is now available as part of the keystonemiddleware (> 1.5) library, which also provides the Keystone middleware Barbican uses for Keystone token validation.

The audit middleware is going to create two events per Barbican REST API invocation: one with information extracted from the request data, and a second, correlated one with the request outcome (response). The mapping information is going to be managed via a Barbican-specific configuration file, similar to the files described in the following pycadf samples link.
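A Barbican audit map might look roughly like the fragment below, modeled on the pycadf sample audit maps; the exact section and key names here are assumptions, not the final file.

```ini
# barbican_api_audit_map.conf (illustrative sketch)
[DEFAULT]
# endpoint type of the audited service in the service catalog
target_endpoint_type = key-manager

[path_keywords]
# URL path segments mapped to CADF resource names
secrets = secret
containers = container
orders = order

[service_endpoints]
# service catalog type mapped to CADF service taxonomy
key-manager = service/keymgr
```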

For asynchronous task-processing workers, the audit event is constructed from task-related data. This can be implemented as a decorator added to each task's common methods (handle_processing, handle_success and handle_error), or added to the base task's process method. This audit event is going to be published as a notification to the same queue used by the audit middleware.
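A minimal sketch of such a decorator is shown below. The notifier is stubbed out here; in Barbican it would be an oslo.messaging notifier and the event payload would be a full CADF record, so all names and payload fields are illustrative assumptions.

```python
# Sketch of a worker-task audit decorator (illustrative; the real
# implementation would publish full CADF events via oslo.messaging).
import functools


def audited(task_name, notifier):
    """Wrap a task method so an audit event carrying the task outcome is
    published as a fire-and-forget notification."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
            except Exception:
                notifier.notify('audit.task',
                                {'task': task_name, 'outcome': 'failure'})
                raise
            notifier.notify('audit.task',
                            {'task': task_name, 'outcome': 'success'})
            return result
        return wrapper
    return decorator


class FakeNotifier:
    """Stand-in for an oslo.messaging notifier; records events in memory."""
    def __init__(self):
        self.events = []

    def notify(self, event_type, payload):
        self.events.append((event_type, payload))


notifier = FakeNotifier()


@audited('order.process', notifier)
def handle_processing(order_id):
    return 'processed %s' % order_id


result = handle_processing('abc')
```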

Audit Event Delivery

Internally, the audit middleware uses an oslo.messaging based notifier to publish CADF events to the configured messaging infrastructure. The audit decorators will use a similar approach.

The audit middleware is added to the Barbican request pipeline via an additional filter in barbican-api-paste.ini, where the path of the related audit mapping file is defined. Delivery of audit events via the decorator needs to be configurable, as developer machines may not have the needed messaging setup. By default, audit delivery as notifications is going to be disabled.
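The paste filter could be wired in roughly as follows; the filter_factory entry point is the one keystonemiddleware provides, while the pipeline name and file path are illustrative assumptions.

```ini
# barbican-api-paste.ini (illustrative fragment)
[pipeline:barbican-api-keystone-audit]
pipeline = keystone_authtoken context audit apiapp

[filter:audit]
paste.filter_factory = keystonemiddleware.audit:filter_factory
audit_map_file = /etc/barbican/barbican_api_audit_map.conf
```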

The oslo messaging framework supports publishing this audit data to a messaging queue via the 'messagingv2' driver, or writing it to log files via its 'log' driver.
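The driver choice would be made in barbican.conf along these lines; note the option and section names vary across oslo.messaging releases (older releases use notification_driver under [DEFAULT]), so treat this fragment as a sketch.

```ini
# barbican.conf (illustrative fragment)
[oslo_messaging_notifications]
driver = messagingv2
# For local development, events can instead be written to the service log:
# driver = log
```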


Alternatives

We could rely on Barbican's existing logging, but that does not provide a complete and consistent picture of service audit data. Non-standard logging means that cloud providers need service-specific audit tools to aggregate and analyze the logs.

Security impact

This improves security in the stack by providing audit capability, which will help address some of the compliance requirements expected of the Barbican service.

Notifications & Audit Impact

Barbican will have an additional notification capability to publish audit events.

Other end user impact


Performance Impact

Audit events are published as notifications to a queue and do not have to wait for a response/acknowledgment, so the overall associated overhead should be minimal. When notifications are written to log files, the related overhead should still be low, comparable to adding two log statements.

Other deployer impact

To enable audit event delivery via notifications, deployers will need to change the default configuration.

Developer impact




Primary assignee:


Work Items

  • Add new blueprint and update oslo pyCADF library.

  • Define pipeline filter with audit map configuration file barbican_api_audit_map.conf

  • Add new decorator to create CADF event data for asynchronous worker processing logic. Add decorator to related worker task methods.

  • Add ability to turn on audit event generation. By default it needs to be off.


Dependencies

  • pyCADF library (with Barbican-specific updated taxonomy).


New unit tests are going to be added for the middleware and the order-processing flow. For the middleware unit tests, configuration overrides are needed for the paste and API ini files. The oslo test notification driver can be used to verify message content, in addition to the mock patched-methods approach.

Documentation Impact

CADF event usage should be documented. For the order-processing flow, document what audit-related changes, if any, are needed to add audit support for new order types.