The Shape of the System
You've decided to record history instead of overwriting it. Events flow in, state flows out. But that decision opens doors you might not have expected.
Throughout this documentation, we use poker as the running example domain. Angzarr does not support, endorse, or target the gaming or gambling industry.
We chose poker because it's an immediately understandable problem domain for most adults, yet complex enough to demonstrate real-world patterns: concurrent state changes, multi-party coordination, compensation flows, and temporal reasoning. When you see HandStarted or BetPlaced, translate to your domain—OrderCreated, PaymentProcessed, or whatever fits.
Code snippets are illustrative, not API reference. They show how concepts work, not exact function signatures. For working code, see the SDK documentation and example implementations.
The Code You Write
Your aggregate handler is fifty lines. Your saga is twenty. The framework handles the other ten thousand.
This isn't accidental. The separation is surgical: you write business rules, the framework handles persistence, messaging, concurrency, and recovery. Your code stays small enough for a new team member to read in an afternoon—or for an AI assistant to hold in a single context window.
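As a sketch of that split (names are illustrative, not the Angzarr API): you write a pure decision function and a pure evolution function; everything around them is the framework's job.

```python
from dataclasses import dataclass

@dataclass
class PlaceBet:          # command: a request that may be rejected
    player: str
    amount: int

@dataclass
class BetPlaced:         # event: a fact the framework will persist
    player: str
    amount: int

@dataclass
class HandState:
    pot: int = 0
    folded: frozenset = frozenset()

def decide(state: HandState, cmd: PlaceBet) -> list:
    """Business rule only: reject folded players, otherwise emit an event."""
    if cmd.player in state.folded:
        raise ValueError(f"{cmd.player} has already folded")
    return [BetPlaced(cmd.player, cmd.amount)]

def evolve(state: HandState, event: BetPlaced) -> HandState:
    """Fold one event into state; replayed on rehydration."""
    return HandState(pot=state.pot + event.amount, folded=state.folded)
```

Persistence, retries, and concurrency never appear in this code, which is why it stays readable in one sitting.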
The small footprint principle →
When Reality Arrives
Not everything is a request. A payment confirmation from Stripe. A package scan from FedEx. A ruling from the floor manager. These aren't commands asking permission—they're facts announcing what has already happened.
The system must record them. It cannot reject reality. But it can ensure they're processed exactly once, and that the distinction between "request" and "fact" remains clear in your audit trail.
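Exactly-once recording of facts usually reduces to deduplication on the upstream event's identifier. A minimal sketch, assuming a hypothetical in-memory inbox:

```python
class FactInbox:
    def __init__(self):
        self.seen = set()
        self.ledger = []

    def record(self, external_id: str, fact: dict) -> bool:
        """Record a fact exactly once; re-deliveries are acknowledged, not duplicated."""
        if external_id in self.seen:
            return False          # already recorded; safe to ack the retry
        self.seen.add(external_id)
        self.ledger.append({"id": external_id, "kind": "fact", **fact})
        return True
```

The `kind` tag keeps the request/fact distinction visible in the audit trail even when both flow through the same pipe.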
When Things Go Wrong
The player's bet was accepted. The pot updated. Then the hand discovered the player had already folded.
In a traditional system, you'd be unwinding database transactions, hoping nothing slipped through. Here, when the table rejects the bet, the framework routes the rejection back to the hand aggregate, which emits a compensation event: BetReverted. The audit trail shows exactly what happened and why.
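The shape of that compensation handler, sketched with hypothetical field names: a rejection produces a new compensating event, and history is never rewritten.

```python
def handle_table_reply(reply: dict) -> list:
    """Route a downstream reply; a rejection yields a compensating event."""
    if reply["status"] == "rejected":
        # Compensate by appending a reversal, not by unwinding a transaction.
        return [{"type": "BetReverted",
                 "bet_id": reply["bet_id"],
                 "reason": reply["reason"]}]
    return []
```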
Graceful failure and compensation →
Branching Time
"What if we had approved that claim?" "What would happen if we changed the fee structure?" "Train a model on ten thousand alternate game outcomes."
Editions let you branch history. The main timeline continues undisturbed while you explore alternates—simulations, what-if analysis, ML training data. When you're done, delete the branch.
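Conceptually (this is our illustrative model, not the editions API), a branch shares history up to a point and then accumulates its own events:

```python
class Edition:
    def __init__(self, events):
        self.events = list(events)

    def branch(self, at: int) -> "Edition":
        """Fork an alternate timeline from the first `at` events."""
        return Edition(self.events[:at])

main = Edition(["HandStarted", "BetPlaced"])
alt = main.branch(1)                       # shares history up to HandStarted
alt.events.append("BetPlaced:what_if")     # diverges; main is undisturbed
```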
Temporal branching with editions →
Training the Machine
Your event store is already a dataset. Each event carries the state before, the decision made, and the outcome that followed. For a reinforcement learning model, this is gold.
Editions multiply the value: generate counterfactual scenarios, replay them through your aggregates, collect the results. PyTorch sees the same proto messages your production code handles. The training loop and the production loop speak the same language.
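Mining the log for training tuples is just replay with bookkeeping. A sketch, assuming an `evolve` function like the one your aggregates already use:

```python
def training_examples(events, evolve, initial):
    """Yield (state_before, event, state_after) transitions by replaying history."""
    state = initial
    for event in events:
        after = evolve(state, event)
        yield (state, event, after)
        state = after
```

Each yielded triple is a state transition; run the same generator over a counterfactual edition and you have labeled alternate outcomes for free.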
Events Age Gracefully
Your PlayerRegistered event from 2019 has three fields. The 2025 version has twelve. Both must work. No migration scripts. No downtime. No "please replay all events this weekend."
Upcasters transform old events into new shapes at read time. The original bytes remain untouched—the audit trail is immutable. But your aggregates always see the current schema.
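A minimal upcaster chain, with hypothetical fields: each function lifts one schema version to the next, and reads walk the chain until they reach the current version.

```python
def upcast_v1_to_v2(event: dict) -> dict:
    # v2 added `country`; default it for events written before the field existed.
    return {**event, "version": 2, "country": "unknown"}

def upcast_v2_to_v3(event: dict) -> dict:
    # v3 split `name` into first/last.
    first, _, last = event["name"].partition(" ")
    return {**event, "version": 3, "first_name": first, "last_name": last}

UPCASTERS = {1: upcast_v1_to_v2, 2: upcast_v2_to_v3}

def upcast(event: dict, target: int) -> dict:
    """Apply upcasters in sequence until the event reaches the target version."""
    while event["version"] < target:
        event = UPCASTERS[event["version"]](event)
    return event
```

The stored bytes are never touched; the transformation happens on the copy handed to your aggregate.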
Schema evolution with upcasting →
The Right View for the Job
Your dashboard needs millisecond reads from Redis. Your analytics team needs columnar storage in Snowflake. Your search needs Elasticsearch. Your mobile app needs a different shape entirely.
One event stream. Many projections. Each optimized for its query pattern. Rebuild any of them by replaying history. Scale reads independently of writes.
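Two projectors over the same stream, as a sketch: each folds identical events into the shape its consumers query.

```python
def project_pot_total(events):
    """Dashboard view: a single running number, cheap to cache."""
    return sum(e["amount"] for e in events if e["type"] == "BetPlaced")

def project_by_player(events):
    """Analytics view: per-player totals, rebuildable from scratch at any time."""
    totals = {}
    for e in events:
        if e["type"] == "BetPlaced":
            totals[e["player"]] = totals.get(e["player"], 0) + e["amount"]
    return totals
```

Because both are pure functions of the stream, "rebuild" means "delete and replay", not "migrate".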
Speaking to the Outside World
Your events, wrapped in CloudEvents 1.0—the envelope every webhook consumer already understands. HTTP or Kafka, batched or streaming, with correlation IDs threaded through.
External systems integrate without custom adapters. Your events join a larger ecosystem.
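What the envelope looks like, sketched in JSON: the attribute names below follow the CloudEvents 1.0 spec, while the source URI, type, and payload are illustrative (`correlationid` is shown as a lowercase extension attribute, as the spec requires for extensions).

```python
import json
import uuid

def to_cloudevent(event: dict, correlation_id: str) -> str:
    """Wrap a domain event in a CloudEvents 1.0 JSON envelope."""
    envelope = {
        "specversion": "1.0",                    # required by CloudEvents 1.0
        "id": str(uuid.uuid4()),                 # unique per event
        "source": "/poker/hand",                 # illustrative source URI
        "type": "com.example.poker.BetPlaced",   # illustrative reverse-DNS type
        "correlationid": correlation_id,         # extension attribute
        "datacontenttype": "application/json",
        "data": event,
    }
    return json.dumps(envelope)
```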
One Event Stream, Six Languages
Your Python team writes the ML projector. Your Rust team writes the latency-critical aggregate. Your Java team maintains the legacy integration. Same events. Same behavior. Verified by the same Gherkin specifications.
The framework doesn't care what's behind the gRPC endpoint. It only cares that you speak proto.
At Scale
An aggregate with a hundred thousand events. State rebuilt in fifty milliseconds.
Snapshots cache aggregate state at intervals. Async processing trades immediate consistency for throughput. Merge strategies let you choose how to handle concurrent commands. The knobs are there when you need them.
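The snapshot idea in miniature (hypothetical shape): rebuild starts from the latest cached state at or before the current event count, so replay touches only the tail.

```python
def rebuild(events, evolve, initial, snapshots):
    """Rebuild state; `snapshots` maps event-count -> cached state."""
    start = max((n for n in snapshots if n <= len(events)), default=0)
    state = snapshots.get(start, initial)
    for event in events[start:]:   # replay only events after the snapshot
        state = evolve(state, event)
    return state
```

With a snapshot every thousand events, a hundred-thousand-event aggregate replays at most a thousand events, which is how fifty-millisecond rebuilds stay plausible.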
Nothing Disappears
Every event carries a correlation ID linking it to the workflow that spawned it. Every failed event lands in a dead letter queue, tagged with the error and ready for replay. Every saga rejection routes to compensation handlers.
The system doesn't silently swallow failures. It surfaces them, traces them, and gives you the tools to recover.
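A dead-letter queue in sketch form (names are ours): failures are captured with their error and correlation ID, and replay retries everything, keeping only what still fails.

```python
class DeadLetterQueue:
    def __init__(self):
        self.entries = []

    def capture(self, event, error, correlation_id):
        """Park a failed event, tagged for later diagnosis and replay."""
        self.entries.append({"event": event,
                             "error": str(error),
                             "correlation_id": correlation_id})

    def replay(self, handler):
        """Retry every parked event; keep the ones that still fail."""
        still_failing = []
        for entry in self.entries:
            try:
                handler(entry["event"])
            except Exception as err:
                entry["error"] = str(err)
                still_failing.append(entry)
        self.entries = still_failing
```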
Next
If this shape fits your problem, the next step is understanding the patterns underneath.
CQRS and Event Sourcing explained →
Or jump straight to building: