Distributed Architecture

When the Community Stack is cloned and run out-of-the-box as described here, only one instance of each service is started, i.e. one instance of the Issuer API, one instance of the Verifier API, and one instance of the Wallet API.

docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED        STATUS                    PORTS                                                                                                                            NAMES
3172c547a1c9   waltid/waltid-web-wallet:0.5.0   "bun run server/inde…"   25 hours ago   Up 30 seconds             7101/tcp                                                                                                                         docker-compose-waltid-web-wallet-1
a109275e76b6   waltid/wallet-api:0.5.0          "/waltid-wallet-api/…"   25 hours ago   Up 31 seconds             7001/tcp                                                                                                                         docker-compose-wallet-api-1
03cf14477b29   waltid/verifier-api:0.5.0        "/waltid-verifier-ap…"   25 hours ago   Up 37 seconds             7003/tcp                                                                                                                         docker-compose-verifier-api-1
993557a8de84   waltid/issuer-api:0.5.0          "/waltid-issuer-api/…"   25 hours ago   Up 36 seconds             7002/tcp                                                                                                                         docker-compose-issuer-api-1
a7eea13f67bd   waltid/portal:0.5.0              "docker-entrypoint.s…"   25 hours ago   Up 37 seconds             7102/tcp                                                                                                                         docker-compose-web-portal-1
fcb83570a141   waltid/vc-repository:latest      "/bin/sh -c 'node se…"   25 hours ago   Up 37 seconds             3000/tcp                                                                                                                         docker-compose-vc-repo-1
9f1cfcc83bfc   caddy:2                          "caddy run --config …"   25 hours ago   Up 37 seconds             80/tcp, 443/tcp, 0.0.0.0:7001-7003->7001-7003/tcp, 0.0.0.0:7101-7103->7101-7103/tcp, 2019/tcp, 443/udp, 0.0.0.0:8080->8080/tcp   docker-compose-caddy-1
8b54fbba10cd   postgres                         "docker-entrypoint.s…"   3 weeks ago    Up 37 seconds (healthy)   0.0.0.0:5432->5432/tcp

Multiple Instances

However, a production setup requires multiple service instances in order to achieve better scalability and high availability.

As an example, let's build a very basic setup with two instances of the Issuer API.

Clone the project and change to the root directory:

git clone https://github.com/walt-id/waltid-identity.git && cd waltid-identity

Build the Issuer API service Docker image:

docker build -t waltid/issuer-api -f waltid-services/waltid-issuer-api/Dockerfile .

Before starting the instances, it is a good idea to create a Docker network to ease inter-container communication.

docker network create waltid

Now, let's start two instances of the Issuer API.

One instance, named issuer1, will listen on port 7002.

docker run --rm --name issuer1 --net waltid -p 7002:7002 waltid/issuer-api -- --webPort=7002 --baseUrl=http://caddy:9092

The other one (issuer2) will listen on port 8002.

docker run --rm --name issuer2 --net waltid -p 8002:8002 waltid/issuer-api -- --webPort=8002 --baseUrl=http://caddy:9092

Now, we have two Issuer API instances up and running.

$ docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED          STATUS          PORTS                              NAMES
317015641bae   waltid/issuer-api   "/waltid-issuer-api/…"   7 seconds ago    Up 6 seconds    7002/tcp, 0.0.0.0:8002->8002/tcp   issuer2
7ad9fee7cd3d   waltid/issuer-api   "/waltid-issuer-api/…"   15 seconds ago   Up 14 seconds   0.0.0.0:7002->7002/tcp             issuer1
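
As a quick sanity check, you can hit each instance directly. Assuming the root path serves the Swagger UI (as shown later), both ports should answer with a success or redirect status:

curl -s -o /dev/null -w "issuer1: %{http_code}\n" http://localhost:7002
curl -s -o /dev/null -w "issuer2: %{http_code}\n" http://localhost:8002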

Load Balancing

To hide the complexity of the distributed architecture from the end user, it is necessary to add a network component, usually a load balancer or a reverse proxy (software or hardware), that mediates user calls to the multiple instances of the redundant services.

                                     ┌─────────┐
                                     │         │
                                ┌───►│ issuer1 │
                                │    │         │
                                │    └─────────┘
 ┌─────────────────┐            │
 │  Load Balancer  │            │
 │                 ├────────────┘
 │       or        │
 │                 ├────────────┐
 │  Reverse Proxy  │            │
 └─────────────────┘            │
                                │    ┌─────────┐
                                │    │         │
                                └───►│ issuer2 │
                                     │         │
                                     └─────────┘

Nowadays, several options can play this role, such as the Apache HTTP Server, nginx, lighttpd, Traefik, etc.

Caddy setup

For the sake of simplicity, let's use Caddy in our example. Using the official Docker image, start it up.

docker run --rm --net waltid --name caddy -p 9092:9092 -v $(pwd)/waltid-services/waltid-issuer-api/distributed-test.Caddyfile:/etc/caddy/Caddyfile caddy

The Caddy config file is quite simple:

http://localhost:9092 {
    reverse_proxy {
        to issuer1:7002
        to issuer2:8002
        lb_policy round_robin
    }
}
http://localhost:9093 {
    reverse_proxy {
        to verifier1:7003
        to verifier2:8003
        lb_policy round_robin
    }
}

In a nutshell, it defines two endpoints: localhost:9092 and localhost:9093.

Each works as a reverse proxy that balances the load with a round-robin strategy: localhost:9092 across the two Issuer instances (issuer1 and issuer2), and localhost:9093 across the two Verifier instances (verifier1 and verifier2).
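
If you want to check the syntax of the Caddyfile before (re)starting the proxy, Caddy ships a validate subcommand; a quick check against the running container:

docker exec caddy caddy validate --config /etc/caddy/Caddyfile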

                                     ┌──────────────┐
                                     │              │
                                ┌───►│ issuer1:7002 │
                                │    │              │
                                │    └──────────────┘
 ┌─────────────────┐            │
 │                 │            │
 │ localhost:9092  ├────────────┘
 │  (Issuer API)   ├────────────┐
 │                 │            │
 └─────────────────┘            │    ┌──────────────┐
                                │    │              │
                                └───►│ issuer2:8002 │
                                     │              │
                                     └──────────────┘


                                     ┌────────────────┐
                                     │                │
                                ┌───►│ verifier1:7003 │
                                │    │                │
                                │    └────────────────┘
 ┌─────────────────┐            │
 │                 │            │
 │ localhost:9093  ├────────────┘
 │ (Verifier API)  ├────────────┐
 │                 │            │
 └─────────────────┘            │
                                │    ┌────────────────┐
                                │    │                │
                                └───►│ verifier2:8003 │
                                     │                │
                                     └────────────────┘

Now, give it a try and open http://localhost:9092. The Swagger page containing the Issuer API documentation will open.

With the instance log terminals visible, refresh the page a couple of times and observe how the requests are distributed between both instances. Cool, isn't it? However, there is a problem here.
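
If you prefer the command line, a small loop makes the round-robin behaviour easy to see; each request should show up alternately in the issuer1 and issuer2 log terminals (the printed status may be a 200 or a redirect, depending on how the Swagger page is served):

for i in 1 2 3 4; do
  curl -s -o /dev/null -w "request $i -> HTTP %{http_code}\n" http://localhost:9092
done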

Session Management

Isolated sessions

When an OIDC credential exchange flow is started with the /openid4vc/jwt/issue endpoint, for example, a credential offer URL like the one below is generated.

openid-credential-offer://caddy:9092/?credential_offer_uri=http%3A%2F%2Fcaddy%3A9092%2Fopenid4vc%2FcredentialOffer%3Fid%3D1765770d-2682-4e4e-95e9-183875d4a9de

This URL links to an issuance session initiated in one of the instances of the Issuer API, for example, the issuer1 in the diagram above.
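
To see where the wallet will call back, you can URL-decode the credential_offer_uri parameter, for example with a Python one-liner:

python3 -c "import urllib.parse; print(urllib.parse.unquote('http%3A%2F%2Fcaddy%3A9092%2Fopenid4vc%2FcredentialOffer%3Fid%3D1765770d-2682-4e4e-95e9-183875d4a9de'))"
# -> http://caddy:9092/openid4vc/credentialOffer?id=1765770d-2682-4e4e-95e9-183875d4a9de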

This credential offer URL is supposed to be used later on by the /wallet-api/wallet/{wallet}/exchange/useOfferRequest Wallet API endpoint.

However, when the Wallet calls the URL encoded in the credential_offer_uri parameter to request the offered credential, the request may not reach the same Issuer API instance where the issuance session was initiated, and the result will be:

{"exception":true,"status":"Not Found","code":"404","message":"No active issuance session found by the given id"}

This happens because each Issuer API instance has an independent, in-memory session management system. The instances don't share information with each other, so if an issuance session is initiated in one instance, the others don't know about it.
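
You can reproduce this from the host by replaying the decoded credential offer URI through the load balancer; with round-robin in place, roughly every other request lands on the instance that does not hold the session and returns the 404 above (the session id below is the illustrative one from this example):

for i in 1 2 3 4; do
  curl -s "http://localhost:9092/openid4vc/credentialOffer?id=1765770d-2682-4e4e-95e9-183875d4a9de"; echo
done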

                                     ┌──────────────┐
                                     │              │      ┌───────────┐
                                ┌───►│ issuer1:7002 │─────►│ Session 1 │
                                │    │              │      └───────────┘
                                │    └──────────────┘
 ┌─────────────────┐            │
 │                 │            │
 │ localhost:9092  ├────────────┘
 │  (Issuer API)   ├────────────┐
 │                 │            │
 └─────────────────┘            │    ┌──────────────┐
                                │    │              │      ┌───────────┐
                                └───►│ issuer2:8002 │─────►│ Session 2 │
                                     │              │      └───────────┘
                                     └──────────────┘

Shared session

There are multiple strategies to solve this issue in the context of distributed computing. For now, the only way to share session data in a distributed deployment of the Walt.id Community Stack is through a data persistence layer that is shared between the various instances of the cluster.

                                     ┌───────────────┐
                                     │               │
                                ┌───►│  issuer1:7002 │
                                │    │               │
                                │    └───────┬───────┘
                                │            │
                                │            ▼
 ┌─────────────────┐            │    ┌────────────────┐
 │                 │            │    │                │
 │ localhost:9092  ├────────────┘    │ Shared Session │
 │  (Issuer API)   ├────────────┐    │                │
 │                 │            │    └────────────────┘
 └─────────────────┘            │            ▲
                                │            │
                                │    ┌───────┴───────┐
                                │    │               │
                                └───►│  issuer2:8002 │
                                     │               │
                                     └───────────────┘

Enabling shared session management

This feature is disabled by default in the Community Stack. So, the very first thing to do is to enable it in the waltid-services/waltid-issuer-api/config/_features.conf file.

enabledFeatures = [
    persistence
]
disabledFeatures = [
    # ...
]

The settings for the persistence layer can be found in the waltid-services/waltid-issuer-api/config/persistence.conf config file. Look at its content:

type = "memory"
// type = "redis"
// nodes = [{host = "127.0.0.1", port = 6379}]
// user: ""
// password: ""

By default, the session persistence is managed in memory by each instance independently, as shown before. Let's change it to a shared data store.

Shared data store setup

For the moment, we only support the use of Redis as a shared database to store distributed session data. The easiest way to get it up and running is with an official Docker image.

docker run --rm --net waltid --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
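
A quick way to confirm the Redis server is reachable inside the waltid network is the classic PING/PONG check:

docker exec redis-stack-server redis-cli ping
# PONG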

Change the persistence.conf file to use the "redis" persistence mechanism and configure the other parameters accordingly.

// type = "memory"
type = "redis"
nodes = [{host = "redis-stack-server", port = 6379}]
user: "default"
password: ""

It is worth observing that the redis-stack-server container name is also used as the Redis server hostname in the config file.

To apply the configuration changes just made:

  1. Stop the running Issuer API instances.
docker stop $(docker ps | grep issuer-api | awk '{print $1}')

The plain old Ctrl+C is probably enough :-)

  2. Rebuild the Issuer API image.
docker build -t waltid/issuer-api -f waltid-services/waltid-issuer-api/Dockerfile .
  3. Restart the instances.
docker run --rm --name issuer1 --net waltid -p 7002:7002 waltid/issuer-api -- --webPort=7002 --baseUrl=http://caddy:9092
docker run --rm --name issuer2 --net waltid -p 8002:8002 waltid/issuer-api -- --webPort=8002 --baseUrl=http://caddy:9092
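
Once an issuance session has been started (see the test below), you can peek at the shared session data in Redis. The exact key names depend on the service version, so treat this as a hedged way to confirm that the instances are writing to the shared store at all:

docker exec redis-stack-server redis-cli --scan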

Let's check that it works.

Testing

  1. Go back to the Issuer API Swagger page at http://localhost:9092.
  2. Use the /openid4vc/jwt/issue endpoint to start an OIDC credential exchange and get the resulting offer URL.
  3. Start a local instance of the Wallet API.
docker run --rm --name wallet --net waltid -p 7001:7001 -it -v $(pwd)/waltid-services/waltid-wallet-api/config:/waltid-wallet-api/config -v $(pwd)/waltid-services/waltid-wallet-api/data:/waltid-wallet-api/data -t waltid/wallet-api
  4. Open the Wallet API Swagger page at http://localhost:7001.
  5. Use the /wallet-api/auth/login endpoint to authenticate with the pre-registered credentials.
  6. Get the wallet id of the authenticated user with the /wallet-api/wallet/accounts/wallets endpoint.
  7. Use the obtained wallet id and the offer URL with the /wallet-api/wallet/{wallet}/exchange/useOfferRequest endpoint to claim the offered credential (see the hedged curl sketch after this list).
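
For the curl-inclined, here is a minimal sketch of steps 5-7. The demo account credentials and the exact response shapes are assumptions; check your waltid-wallet-api configuration and the Swagger page for the actual values.

# Log in (demo account is an assumption - adjust to your setup)
TOKEN=$(curl -s -X POST http://localhost:7001/wallet-api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"type":"email","email":"user@email.com","password":"password"}' | jq -r '.token')

# Pick the first wallet id of the authenticated user
WALLET=$(curl -s http://localhost:7001/wallet-api/wallet/accounts/wallets \
  -H "Authorization: Bearer $TOKEN" | jq -r '.wallets[0].id')

# Claim the offered credential (OFFER_URL holds the offer from step 2)
curl -s -X POST "http://localhost:7001/wallet-api/wallet/$WALLET/exchange/useOfferRequest" \
  -H "Authorization: Bearer $TOKEN" -H 'Content-Type: text/plain' \
  --data "$OFFER_URL"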

Voilà. You've just created your first distributed architecture of the Walt.id Community Stack for issuing credentials.

The same instructions apply to the Verifier.