Data & Identity Sourcing

Data & Identity Sourcing describes how the Enterprise Stack Issuer obtains the attributes that go into a credential – for example name, date of birth, identifiers, roles, or any other claims about the subject.

Instead of forcing you into a single model, the Enterprise Stack lets you decide where subject data lives and when it is fetched, so you can plug into existing systems (CRM, registries, IdPs, etc.) and design flows that fit your business, security, and UX requirements.

At a high level, this capability answers one question:

"Where do the attributes in my credential actually come from – and at what point in the issuance flow do we fetch them?"

What's included

The Enterprise Stack supports three complementary ways of sourcing data for credentials. You can use them individually or combine them in a single flow.

1. Use your own backend as the source of truth (before offer creation)

Your backend collects all required subject attributes up front and passes them to the Enterprise Stack Issuer when creating a credential offer (see the sketch after the list below).

This means:

  • You stay in control of where identity data lives (databases, registries, etc.).
  • The Issuer service receives a complete, ready-to-sign data structure and focuses on protocol handling, signing, and lifecycle management.
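
For illustration, here is a minimal sketch of this pattern, assuming a generic HTTP API: the endpoint path, payload shape, and field names are placeholders rather than the actual Enterprise Stack API.

```typescript
// Minimal sketch: the backend is the source of truth and sends a complete
// claim set when creating the credential offer. Endpoint path, payload shape,
// and field names below are illustrative placeholders.

interface EmployeeRecord {
  fullName: string;
  dateOfBirth: string;
  employeeId: string;
  role: string;
}

// Hypothetical lookup against your own database, CRM, or registry.
async function loadEmployee(id: string): Promise<EmployeeRecord> {
  return { fullName: "Jane Doe", dateOfBirth: "1990-04-12", employeeId: id, role: "engineer" };
}

async function createOffer(issuerBaseUrl: string, employeeId: string): Promise<string> {
  const subject = await loadEmployee(employeeId);

  // All attributes are resolved *before* the offer is created: the Issuer
  // receives a ready-to-sign data structure and only handles protocol,
  // signing, and lifecycle.
  const response = await fetch(`${issuerBaseUrl}/credential-offers`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      credentialType: "EmployeeCredential",
      credentialSubject: {
        name: subject.fullName,
        birthDate: subject.dateOfBirth,
        employeeId: subject.employeeId,
        role: subject.role,
      },
    }),
  });

  const { offerUrl } = await response.json(); // e.g. an openid-credential-offer URI for the wallet
  return offerUrl;
}
```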

2. Enrich credentials via data functions & webhooks (after offer, before signing)

You can mark specific credential attributes as “to be filled later” and let the Issuer call data functions before signing – including webhooks into your systems. Returned values are mapped into the credential right before issuance (see the sketch after the list below).

This means:

  • Fetch dynamic or time-sensitive data at the last possible moment (e.g. timestamps, status flags, IDs, expiration dates).
  • Call external APIs via webhooks for custom logic (e.g. fraud checks, entitlement checks, policy decisions) and use the result directly in the credential.
  • Combine static attributes (provided up front) with dynamic attributes (via data functions) in a single issuance flow.
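
A minimal sketch of how this could look, assuming a hypothetical marker syntax for deferred attributes and a simple JSON webhook contract; the dataFunction keys, URL, and handler below are illustrative, not the Enterprise Stack's actual data-function configuration.

```typescript
import { createServer } from "node:http";

// Offer request mixing static attributes (provided up front) with dynamic
// attributes that the Issuer resolves via data functions right before signing.
// The marker syntax is an assumption made for this sketch.
const offerRequest = {
  credentialType: "MembershipCredential",
  credentialSubject: {
    name: "Jane Doe", // static, from your backend
    membershipStatus: { dataFunction: "webhook", url: "https://api.example.com/membership-status" },
    issuedAt: { dataFunction: "timestamp" }, // filled at signing time
  },
};

// Hypothetical webhook on your side: it receives the issuance context and
// returns only the value(s) to be mapped into the credential.
createServer((req, res) => {
  if (req.method === "POST" && req.url === "/membership-status") {
    // Run entitlement, fraud, or policy checks against your own systems here.
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ membershipStatus: "active" }));
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);
```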

3. Fetch identity data via authorization servers (OID4VCI authorization code flow)

When using the OID4VCI authorization code flow, the user authenticates with an external Authorization Server / Identity Provider (IdP) before claiming a credential. After successful authentication, the Issuer service can use the resulting tokens to fetch user claims and map them into credential attributes (see the sketch after the list below).

This means:

  • Use your existing identity infrastructure (e.g. Keycloak, Azure AD, custom OIDC IdP) as the trusted identity source.
  • Ensure that attributes are tied to a fresh, interactive authentication event (step-up, MFA, etc.).
  • Align with established IAM policies (scopes, consents, attribute release policies) while issuing verifiable credentials.
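
A minimal sketch of what such a setup could express, assuming a hypothetical configuration shape; the property names, claim paths, and IdP URL are illustrative assumptions, not the actual Enterprise Stack schema.

```typescript
// Sketch of an authorization-code-flow setup: the Issuer delegates
// authentication to an existing IdP and maps the returned claims into
// credential attributes. All keys below are illustrative assumptions.
const authCodeFlowConfig = {
  // External Authorization Server / IdP the user authenticates against.
  authorizationServer: {
    issuer: "https://keycloak.example.com/realms/employees",
    clientId: "credential-issuer",
    scopes: ["openid", "profile", "email"],
  },
  // After a fresh, interactive authentication the Issuer fetches user claims
  // (e.g. from the ID token or userinfo endpoint) and maps them into the
  // credential subject.
  claimMapping: {
    "credentialSubject.name": "name", // IdP claim -> credential attribute
    "credentialSubject.email": "email",
    "credentialSubject.department": "department", // custom claim released by the IdP
  },
};
```

Because the claims are fetched only after the user completes the flow, the resulting attributes are tied to that specific authentication event rather than to data collected earlier.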

Tenant-Aware Data Sourcing

Each tenant or issuer service in the Enterprise Stack can define its own data sourcing strategy (which backends, which webhooks, which authorization server) to reflect organizational boundaries and business models.

This means you can operate multiple issuers with completely different data sourcing approaches within the same Enterprise Stack deployment – for example, one tenant fetching data from a CRM via webhooks, another using an OIDC authorization server, and a third receiving all attributes from their backend up front.
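
As a rough sketch, assuming a hypothetical configuration shape, three tenants in one deployment might declare their sourcing strategies like this:

```typescript
// Tenant-scoped sourcing strategies within a single deployment. The shape of
// this configuration is an illustrative assumption for this sketch.
type SourcingStrategy =
  | { kind: "backend" }                               // all attributes passed at offer creation
  | { kind: "webhook"; url: string }                  // data functions call out to your systems
  | { kind: "authorizationServer"; issuer: string };  // claims fetched after user authentication

const tenants: Record<string, SourcingStrategy> = {
  "hr-department":   { kind: "backend" },
  "partner-portal":  { kind: "webhook", url: "https://crm.example.com/credential-data" },
  "customer-portal": { kind: "authorizationServer", issuer: "https://login.example.com" },
};
```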

FAQs

  • Can I mix different data sources in a single credential issuance flow? — Yes. A single issuance can combine attributes that come from your backend, attributes returned via webhooks/data functions, and attributes obtained from an Authorization Server after user authentication.
  • Do I need an Identity Provider to use Data & Identity Sourcing? — No. You can issue credentials entirely based on data from your backend or via webhooks. Using an IdP/Authorization Server becomes relevant when you want to tie credential issuance to a fresh authentication event or reuse existing IAM policies and claims.
  • Can I keep sensitive attributes out of the credential while still using them in decisions? — Yes. You can use data functions and webhooks to call external systems that perform risk or eligibility checks. Only the decision (e.g. "eligible = true") has to be written into the credential; sensitive raw data can remain in your own systems (a small sketch of this pattern follows the FAQ list).
  • Can I start simple and evolve my data sourcing strategy later? — Yes. Many deployments start with a simple backend-only model and gradually introduce data functions, webhooks, and Authorization Servers as requirements become more sophisticated – without changing the underlying credential formats or wallet integration.
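
A small sketch of the "decision only" pattern from the FAQ above, with hypothetical names and an illustrative eligibility rule:

```typescript
// The webhook evaluates sensitive raw data inside your own systems and
// returns only the decision; that boolean is the sole value mapped into the
// credential. Names and the rule below are illustrative assumptions.
async function checkEligibility(subjectId: string): Promise<{ eligible: boolean }> {
  const rawScore = await lookupRiskScore(subjectId); // raw data never leaves your backend
  return { eligible: rawScore < 0.3 };
}

// Stand-in for your internal risk or eligibility service.
async function lookupRiskScore(_subjectId: string): Promise<number> {
  return 0.1;
}
```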

A Note on PII

Regardless of which sourcing approach you use, issuance sessions in the Enterprise Stack can contain personally identifiable information (PII) – for example, subject attributes passed during offer creation, data fetched via webhooks, or claims obtained from an authorization server.

By default, the Issuer service stores session data (offer payloads, subject attributes, interaction logs) to enable features like audit trails, debugging, and session management. However, this means PII may persist in your database longer than necessary.

What you can do: The Enterprise Stack provides a configurable data retention capability that lets you define how long issuer session data should be kept and when it should be automatically purged. This allows you to balance operational needs (logging, troubleshooting) with compliance and privacy requirements (GDPR, data minimization principles).
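
As a rough illustration, a retention policy could express something like the following; the property names and values are assumptions, and the actual options are covered in Data Retention and Auto-Purge.

```typescript
// Illustrative retention policy: how long issuer session data is kept and
// when it is automatically purged. Property names are assumptions.
const issuerRetentionPolicy = {
  // Completed issuance sessions (offer payloads, subject attributes,
  // interaction logs) become eligible for purging after this window.
  retainSessionsFor: "30d",
  // How often the automatic purge job runs (cron syntax: nightly at 03:00).
  purgeSchedule: "0 3 * * *",
  // Safety guardrail: never purge sessions that are still in progress.
  skipActiveSessions: true,
};
```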

For details on how to configure retention windows, purge schedules, and safety guardrails, see Data Retention and Auto-Purge.

Get Started

Ready to implement data sourcing in your issuance flows? Here are the next steps:
