Platform Architecture

How the Tawa platform is built, what each service does, and why they are separate.

The Big Picture

Tawa is a deployment platform made up of six core services. Each one owns a single responsibility and communicates with the others over internal APIs. No service shares a database with another.

```mermaid
graph TD
  Dev["Developer"] -->|tawa deploy| Builder
  Builder --> Koko["Koko (registry)"]
  Koko --> Janus["Janus (gateway)"]
  Janus --> Service["Your Service"]
  Janus --> BioID["Bio-ID (identity)"]
  Builder --> Wallet["Wallet (gas gate)"]
  Builder --> CF["Cloudflare (DNS)"]
```

Core Services

Builder

The build pipeline. When you run tawa deploy, the builder clones your repo, generates a Dockerfile (if needed), builds and pushes the image, then orchestrates the full deploy: database provisioning, OAuth setup, Helm deploy, Koko registration, DNS configuration.

URL:      builder.tawa.insureco.io
Owns:     Build records, deploy logs, managed secrets
Talks to: Koko, Wallet, Bio-ID, Cloudflare, K8s API

Koko

The service registry, configuration store, and database proxy. Every deployed service is registered in Koko with its upstream URL, routes, health endpoint, and metadata. Janus reads from Koko to know how to route traffic.

URL:      Internal only (cluster DNS)
Owns:     Service registry, config, feature flags, domain bindings
Talks to: MongoDB, Redis (pub/sub to Janus)
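To make the registry concrete, here is a minimal sketch of what a Koko registry record might look like. The field names and the port are assumptions for illustration, not Koko's actual schema; only the ideas (cluster-DNS upstream, routes, health endpoint) come from the description above.

```python
# Hypothetical shape of a Koko registry record. Field names and the
# port are illustrative; the real schema may differ.
def make_registry_record(name: str, namespace: str) -> dict:
    """Build the record a registry like Koko could store for a deployed service."""
    return {
        "name": name,
        # Upstream is cluster DNS, so the gateway can reach the pod directly.
        "upstream": f"http://{name}.{namespace}.svc.cluster.local:8080",
        "health": "/healthz",
        "routes": [
            {"path": f"/{name}/v1", "auth": "required"},
        ],
    }

record = make_registry_record("my-svc", "my-svc-prod")
```

Janus reads records like this to build its routing table; the Redis pub/sub channel lets Koko notify Janus when a record changes.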

Janus

The API gateway. All external API traffic enters through Janus, which handles authentication enforcement, request routing, and gas metering.

URL:      api.tawa.insureco.io
Owns:     Gas metering events, request logs
Talks to: Koko (routes), Bio-ID (token verification), Wallet (gas charges)
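The request-routing step can be sketched as longest-prefix matching against the routes read from Koko. This is a simplified model with an assumed route shape, not Janus's actual implementation:

```python
# Minimal sketch of gateway route matching: pick the longest route
# prefix that matches the request path, then read its auth policy.
# Route shape and upstream URLs are assumptions for illustration.
from typing import Optional

ROUTES = [
    {"path": "/billing/v1", "upstream": "http://billing.billing-prod.svc.cluster.local", "auth": "required"},
    {"path": "/status",     "upstream": "http://status.status-prod.svc.cluster.local",   "auth": "none"},
]

def match_route(request_path: str) -> Optional[dict]:
    # Longest-prefix match so /billing/v1/invoices resolves to /billing/v1.
    candidates = [r for r in ROUTES if request_path.startswith(r["path"])]
    return max(candidates, key=lambda r: len(r["path"]), default=None)

route = match_route("/billing/v1/invoices")
needs_auth = route is not None and route["auth"] == "required"
```

On a match with `auth: required`, the gateway would verify the token with Bio-ID and emit a gas-metering event to Wallet before proxying to the upstream.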

Bio-ID

The identity provider. Handles user registration, login, OAuth2 flows, MFA, and session management. Every service gets an OAuth client auto-provisioned on deploy.

URL:      bio.tawa.insureco.io
Owns:     User identities, OAuth clients, sessions, MFA secrets
Talks to: MongoDB (user data), Redis (sessions)
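As a generic illustration of what "token verification" involves (this is not Bio-ID's actual token format, which may be JWTs or opaque tokens): the issuer signs a payload with a shared secret, and the verifier recomputes the signature and checks expiry.

```python
# Generic HMAC-signed token sketch, purely illustrative: the issuer
# signs "user:expiry" and the verifier recomputes the signature.
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # placeholder only; never hardcode real secrets

def issue(user: str, ttl: int = 3600) -> str:
    payload = f"{user}:{int(time.time()) + ttl}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify(token: str) -> bool:
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    _user, _, expiry = payload.partition(":")
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()
```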

Wallet

The token economy. Every org has a wallet with a gas balance. Before each deploy, the builder checks the wallet (the "deploy gate") to ensure the org holds at least three months of hosting costs in reserve.
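The deploy gate can be sketched as a simple reserve check. The function names and the flat monthly-burn model are assumptions; the fail-open behavior on Wallet errors is stated under Key Principles below.

```python
# Sketch of the deploy gate: allow a deploy when the wallet balance
# covers three months of projected hosting burn. The burn model and
# names are illustrative assumptions.
RESERVE_MONTHS = 3

def deploy_gate(get_balance, monthly_burn: float) -> bool:
    try:
        balance = get_balance()
    except Exception:
        # Wallet unreachable: fail open and allow the deploy.
        return True
    return balance >= RESERVE_MONTHS * monthly_burn
```

For example, with a balance of 900 gas and a burn of 250 gas/month, the gate blocks the deploy (900 < 750 is false, so it passes only at 750 or above).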

URL:      iec-wallet.tawa.insureco.io
Owns:     Wallet balances, gas ledger, token purchases
Talks to: MongoDB (ledger)

tawa-web

The platform console: the developer portal, docs, PlugINS store, and service dashboard. It reads from other services but owns no platform state, so if tawa-web goes down, deploys and routing still work.

Deploy Flow

When you run tawa deploy --prod:

  1. CLI authenticates with Bio-ID and sends a build request to the Builder
  2. Builder checks the deploy gate — calls Wallet to verify 3-month hosting reserve
  3. Builder clones your repo, reads catalog-info.yaml, generates a Dockerfile if needed
  4. Docker image is built and pushed to the container registry
  5. Builder provisions database secrets as K8s Secrets
  6. Builder creates/updates an OAuth client in Bio-ID
  7. Builder resolves internal dependencies to cluster DNS URLs via Koko
  8. Helm deploys the service to Kubernetes with all env vars injected
  9. Builder registers the service in Koko (routes, upstream URL)
  10. Builder creates a DNS record in Cloudflare
  11. Janus picks up the new routes from Koko

Every step is idempotent: deploying again updates existing resources rather than creating duplicates.
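The idempotency of the registration steps amounts to an upsert keyed by service name. A minimal sketch, with an in-memory dict standing in for Koko's store:

```python
# Idempotent registration sketch: re-registering a service overwrites
# the previous record instead of creating a duplicate. A dict stands
# in for the real registry store.
registry: dict[str, dict] = {}

def register(name: str, upstream: str) -> None:
    registry[name] = {"name": name, "upstream": upstream}

register("my-svc", "http://my-svc.my-svc-prod.svc.cluster.local")
register("my-svc", "http://my-svc.my-svc-prod.svc.cluster.local")  # re-deploy
```

The same pattern applies to OAuth clients, DNS records, and K8s Secrets: each is keyed by a stable identity, so repeating a deploy converges to the same state.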

Traffic Flow

Direct traffic

User → Cloudflare → NGINX Ingress → Your Service Pod

Your service gets its own subdomain (my-svc.tawa.insureco.io) and Cloudflare routes directly to the cluster ingress.

API gateway traffic (opt-in via routes)

Client → api.tawa.insureco.io → Janus → Your Service Pod
                                 (auth, gas metering)

If your catalog-info.yaml defines routes with auth: required, those routes go through Janus for token verification and gas metering.
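The split between gateway and direct traffic can be sketched by classifying routes on their auth policy. The catalog fields are mirrored here as a Python dict for illustration; the exact catalog-info.yaml keys are assumed:

```python
# Sketch of the routing split: routes marked auth: required are served
# via the gateway; everything else stays on the direct subdomain.
# Field names mirror an assumed catalog-info.yaml layout.
catalog = {
    "name": "my-svc",
    "routes": [
        {"path": "/v1/quotes", "auth": "required"},
        {"path": "/healthz",   "auth": "none"},
    ],
}

gateway_paths = [r["path"] for r in catalog["routes"] if r["auth"] == "required"]
direct_paths  = [r["path"] for r in catalog["routes"] if r["auth"] != "required"]
```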

Environments

Environment   Namespace        URL                                Purpose
Sandbox       my-svc-sandbox   my-svc.sandbox.tawa.insureco.io    Development and testing
UAT           my-svc-uat       my-svc.uat.tawa.insureco.io        User acceptance testing
Production    my-svc-prod      my-svc.tawa.insureco.io            Live traffic

Key Principles

  • Convention over configuration. The builder infers almost everything from catalog-info.yaml and your framework. No Dockerfiles, Helm charts, or K8s manifests needed.
  • Everything is idempotent. Deploying again updates. Database provisioning is additive. OAuth clients are upserted.
  • Fail independently. If Janus goes down, direct service URLs still work. If tawa-web goes down, deploys still work. If Wallet is unreachable, the deploy gate is fail-open.
  • No shared databases. Services communicate via APIs only.
  • Secrets are managed, not manual. Database URIs, OAuth credentials, and custom secrets are all injected by the builder.

Last updated: February 28, 2026