Distributed Architecture

When the Community Stack is cloned and run out-of-the-box as described here, only one instance of each service is started, i.e. one instance of the Issuer API, one instance of the Verifier API, and one instance of the Wallet API.

docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED        STATUS                    PORTS                                                                                                                            NAMES
3172c547a1c9   waltid/waltid-web-wallet:0.5.0   "bun run server/inde…"   25 hours ago   Up 30 seconds             7101/tcp                                                                                                                         docker-compose-waltid-web-wallet-1
a109275e76b6   waltid/wallet-api:0.5.0          "/waltid-wallet-api/…"   25 hours ago   Up 31 seconds             7001/tcp                                                                                                                         docker-compose-wallet-api-1
03cf14477b29   waltid/verifier-api:0.5.0        "/waltid-verifier-ap…"   25 hours ago   Up 37 seconds             7003/tcp                                                                                                                         docker-compose-verifier-api-1
993557a8de84   waltid/issuer-api:0.5.0          "/waltid-issuer-api/…"   25 hours ago   Up 36 seconds             7002/tcp                                                                                                                         docker-compose-issuer-api-1
a7eea13f67bd   waltid/portal:0.5.0              "docker-entrypoint.s…"   25 hours ago   Up 37 seconds             7102/tcp                                                                                                                         docker-compose-web-portal-1
fcb83570a141   waltid/vc-repository:latest      "/bin/sh -c 'node se…"   25 hours ago   Up 37 seconds             3000/tcp                                                                                                                         docker-compose-vc-repo-1
9f1cfcc83bfc   caddy:2                          "caddy run --config …"   25 hours ago   Up 37 seconds             80/tcp, 443/tcp, 0.0.0.0:7001-7003->7001-7003/tcp, 0.0.0.0:7101-7103->7101-7103/tcp, 2019/tcp, 443/udp, 0.0.0.0:8080->8080/tcp   docker-compose-caddy-1
8b54fbba10cd   postgres                         "docker-entrypoint.s…"   3 weeks ago    Up 37 seconds (healthy)   0.0.0.0:5432->5432/tcp

Multiple Instances

However, in a production setup, multiple instances of each service are required to achieve better scalability and high availability.

As an example, let's build a very basic setup with 2 instances of the Verifier API.

Clone the project and change to its root directory:

git clone https://github.com/walt-id/waltid-identity.git && cd waltid-identity

Build the Verifier API service Docker image:

docker build -t waltid/verifier-api -f waltid-services/waltid-verifier-api/Dockerfile .

Before starting the instances, it is a good idea to create a Docker network to ease inter-container communication.

docker network create waltid
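
Optionally, confirm the network exists before moving on:

docker network ls --filter name=waltid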

And now, starting two instances of the Verifier API...

One instance, named verifier1, will be listening on port 7003.

docker run --rm --name verifier1 --net waltid -p 7003:7003 waltid/verifier-api -- --webPort=7003 --baseUrl=http://caddy:9093

The other one (verifier2) will be listening on port 8003.

docker run --rm --name verifier2 --net waltid -p 8003:8003 waltid/verifier-api -- --webPort=8003 --baseUrl=http://caddy:9093

Now, we have two Verifier API instances up and running.

$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                              NAMES
317015641bae   waltid/verifier-api   "/waltid-verifier-ap…"   7 seconds ago    Up 6 seconds    7003/tcp, 0.0.0.0:8003->8003/tcp   verifier2
7ad9fee7cd3d   waltid/verifier-api   "/waltid-verifier-ap…"   15 seconds ago   Up 14 seconds   0.0.0.0:7003->7003/tcp             verifier1
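
Before putting a load balancer in front of them, you can probe each instance directly on its own port. This is just a sanity check; the exact HTTP status code you get back may vary (e.g., a redirect to the Swagger page):

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:7003
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8003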

Load Balancing

To hide the complexity of the distributed architecture from the end user, it is necessary to add a network component, usually a load balancer or a reverse proxy (software or hardware), that mediates user calls to the multiple instances of the redundant services.

                                     ┌─────────┐
                                     │         │
                                ┌--─►│verifier1│
                                │    │         │
                                │    └─────────┘
 ┌─────────────────┐            │
 │  Load Balancer  │            │
 │                 ├────────────┘
 │       or        │
 │                 ├────────────┐
 │  Reverse Proxy  │            │
 └─────────────────┘            │
                                │    ┌─────────┐
                                │    │         │
                                └--─►│verifier2│
                                     │         │
                                     └─────────┘

Nowadays, there are several options that can play this role, like the Apache HTTP Server, nginx, lighttpd, Traefik, etc.

Caddy setup

For the sake of simplicity, let's use Caddy in our example. Start it up using the official Docker image.

docker run --rm --net waltid --name caddy -p 9093:9093 -v $(pwd)/waltid-services/waltid-verifier-api/distributed-test.Caddyfile:/etc/caddy/Caddyfile caddy

The Caddy config file is quite simple:

http://localhost:9093 {
    reverse_proxy {
        to verifier1:7003
        to verifier2:8003
        lb_policy round_robin
    }
}

In a nutshell, it defines one endpoint: localhost:9093.

It works as a reverse proxy that balances the load between the 2 Verifier API instances (verifier1 and verifier2) using a round-robin strategy.

                                     ┌───────────────-┐
                                     │                │
                                ┌--─►│ verifier1:7003 │
                                │    │                │
                                │    └─────────-───-──┘
 ┌─────────────────┐            │
 │                 │            │
 │ localhost:9093  ├────────────┘
 │ (Verifier API)  ├────────────┐
 │                 |            │
 └─────────────────┘            │
                                │    ┌───────-────────┐
                                │    │                │
                                └--─►│ verifier2:8003 │
                                     │                │
                                     └────────────────┘

Now, give it a try and open http://localhost:9093. The Swagger page containing the Verifier API documentation will open.

With the instances' log terminals visible, refresh the page a couple of times and observe how the requests are distributed between the two instances. The same check can also be scripted, as shown below.
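
The snippet below is a minimal sketch of that check. It assumes each instance logs the requests it receives; the exact log output may differ.

# Send a few requests through the load balancer...
for i in $(seq 1 6); do curl -s -o /dev/null http://localhost:9093; done

# ...then look at the tail of each instance's log to see how they were spread.
docker logs --tail 5 verifier1
docker logs --tail 5 verifier2

Cool, isn't it? However, there is a problem here.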

Session Management

Isolated sessions

When an OIDC credential verification flow is started with the /openid4vc/authorize endpoint, for example, an authorization request URL like the one below is generated.

openid4vp://caddy:9093/?authorization_request_uri=http%3A%2F%2Fcaddy%3A9093%2Fopenid4vc%2FauthorizationRequest%3Fid%3D1765770d-2682-4e4e-95e9-183875d4a9de

This URL links to a verification session initiated in one of the Verifier API instances, for example verifier1 in the diagram above.

This authorization request URL is supposed to be used later on by the wallet to submit credentials for verification.

However, when the Wallet calls the URL encoded in the authorization_request_uri parameter to submit credentials, the request may not reach the same Verifier API instance where the verification session was initiated and the result will be:

{"exception":true,"status":"Not Found","code":"404","message":"No active verification session found by the given id"}

This happens because each Verifier API instance has an independent, in-memory session management system. The instances don't share information between them, so if a verification session is initiated in one instance, the other instances don't know about it.

                                     ┌─────────────-┐
                                     │              │      ┌───────────┐
                                ┌--─►│verifier1:7003│─────►| Session 1 |
                                │    │              │      └───────────┘
                                │    └──────────────┘
 ┌─────────────────┐            │
 │                 │            │
 │ localhost:9093  ├────────────┘
 │ (Verifier API)  ├────────────┐
 │                 |            │
 └─────────────────┘            │    ┌───────-─────-┐
                                │    │              │      ┌───────────┐
                                └--─►│verifier2:8003│─────►| Session 2 |
                                     │              │      └───────────┘
                                     └──────────────┘
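
To see the problem from the command line, here is a hypothetical reproduction: take the URL encoded in the authorization_request_uri parameter and call it more than once through the load balancer. The session id below is the one from the example URL above; in practice you would use the id of a session you just started. Note that the caddy hostname only resolves inside the Docker network, so from the host the same endpoint is reached via localhost:9093.

curl -s "http://localhost:9093/openid4vc/authorizationRequest?id=1765770d-2682-4e4e-95e9-183875d4a9de"
curl -s "http://localhost:9093/openid4vc/authorizationRequest?id=1765770d-2682-4e4e-95e9-183875d4a9de"

Depending on which instance the round-robin policy picks, one call returns the authorization request while the other returns the "No active verification session found" error.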

Shared session

There are multiple strategies to solve this issue in distributed computing. For now, the only way to exchange session data in a distributed deployment of the Walt.id Community Stack is through a data persistence layer that is shared between the various instances of the cluster.

                                     ┌───────────────┐
                                     │               │
                                ┌--─►│ verifier1:7003│
                                │    │               │
                                │    └───────┬───────┘
                                │            │
                                │            ▼
 ┌─────────────────┐            │    ┌────────────────┐
 │                 │            │    │                │
 │ localhost:9093  ├────────────┘    │ Shared Session │
 │ (Verifier API)  ├────────────┐    │                │
 │                 │            │    └────────────────┘
 └─────────────────┘            │            ▲
                                │            │
                                │    ┌───────┴───────┐
                                │    │               │
                                └--─►│ verifier2:8003│
                                     │               │
                                     └───────────────┘

Enabling shared session management

This feature is disabled by default in the Community Stack. So, the very first thing to do is to enable it in the waltid-services/waltid-verifier-api/config/_features.conf file.

enabledFeatures = [
    persistence
]
disabledFeatures = [
    # ...
]

The settings for the persistence layer can be found in the waltid-services/waltid-verifier-api/config/persistence.conf config file. Look at its content:

type = "memory"
// type = "redis"
// nodes = [{host = "127.0.0.1", port = 6379}]
// user: ""
// password: ""

By default, the session persistence is managed in memory by each instance independently, as shown before. Let's change it to a shared data store.

Shared data store setup

For the moment, we only support Redis as a shared database to store distributed session data. The easiest way to get it up and running is with the official Docker image.

docker run --rm --net waltid --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
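
To confirm Redis is up and reachable, a quick check (the redis-cli tool ships with the redis-stack-server image):

docker exec redis-stack-server redis-cli ping
# Expected output: PONG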

Change the persistence.conf file to use the "redis" persistence mechanism and configure the other parameters accordingly.

// type = "memory"
type = "redis"
nodes = [{host = "redis-stack-server", port = 6379}]
user: "default"
password: ""

It is worth observing that the redis-stack-server container name is also used as the Redis server hostname in the config file.

To apply the configuration changes just made:

  1. Stop the running Verifier API instances
docker stop $(docker ps  | grep verifier-api | awk '{print $1}')

The plain old Ctrl+C is probably enough :-)

  2. Rebuild the Verifier API image
docker build -t waltid/verifier-api -f waltid-services/waltid-verifier-api/Dockerfile .
  3. Restart the instances.
docker run --rm --name verifier1 --net waltid -p 7003:7003 waltid/verifier-api -- --webPort=7003 --baseUrl=http://caddy:9093
docker run --rm --name verifier2 --net waltid -p 8003:8003 waltid/verifier-api -- --webPort=8003 --baseUrl=http://caddy:9093

Let's check if it works.

Testing

  1. Go back to the Verifier API Swagger page at http://localhost:9093.
  2. Use the /openid4vc/verify endpoint to start an OIDC credential verification flow and get the resulting presentation request URL.
  3. Start a local instance of the Wallet API.
docker run --rm --name wallet --net waltid -p 7001:7001 -it -v $(pwd)/waltid-services/waltid-wallet-api/config:/waltid-wallet-api/config -v $(pwd)/waltid-services/waltid-wallet-api/data:/waltid-wallet-api/data -t waltid/wallet-api
  4. Open the Wallet API Swagger page at http://localhost:7001.
  5. Use the /wallet-api/auth/login endpoint to authenticate with the pre-registered credentials.
  6. Get the wallet id of the authenticated user with the /wallet-api/wallet/accounts/wallets endpoint.
  7. Use the obtained wallet id and the presentation request URL to submit credentials for verification using the /wallet-api/wallet/{wallet}/exchange/usePresentationRequest endpoint (a scripted sketch of these steps follows below).
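
If you prefer the command line over the Swagger UI, the same flow can be scripted roughly as sketched below. The request bodies, the demo account credentials, and the exact response shapes are assumptions based on the default Community Stack configuration, so double-check them against the Swagger pages.

# 1. Start a verification flow on the load-balanced Verifier API
#    (assumed request body; see the /openid4vc/verify docs for the exact schema).
curl -s -X POST http://localhost:9093/openid4vc/verify \
  -H 'Content-Type: application/json' \
  -d '{"request_credentials": ["OpenBadgeCredential"]}'
# The response contains the presentation request URL (openid4vp://...).

# 2. Log in to the Wallet API with the pre-registered demo account
#    (assumed credentials) and keep the returned token.
curl -s -X POST http://localhost:7001/wallet-api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"type": "email", "email": "user@email.com", "password": "password"}'

# 3. List the wallets of the authenticated user to obtain the wallet id,
#    passing the token from step 2.
curl -s -H 'Authorization: Bearer <token>' \
  http://localhost:7001/wallet-api/wallet/accounts/wallets

# 4. Submit credentials for verification with the presentation request URL from step 1.
curl -s -X POST -H 'Authorization: Bearer <token>' -H 'Content-Type: application/json' \
  -d '{"presentationRequest": "<presentation-request-url>"}' \
  "http://localhost:7001/wallet-api/wallet/<wallet-id>/exchange/usePresentationRequest"

# 5. Optionally, confirm the session data landed in Redis instead of in memory.
docker exec redis-stack-server redis-cli --scan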

Voilà. You've just created your first distributed deployment of the Walt.id Community Stack for verifying credentials.
