

PayFlow: Building an Event-Driven Payment System with Spring Modulith


How I designed and built a modular monolith payment system with HMAC authentication, event-driven architecture, and Spring Modulith — as a junior Java engineer learning by doing.



I build backend systems with a strong bias toward clarity, reliability, and clean boundaries. PayFlow is one of the projects where that mattered most, because every design decision had to hold up across authentication, payment state, persistence, event handling, and auditability.

I built it as a modular monolith with Spring Boot, Spring Modulith, and PostgreSQL, using events to coordinate fraud checks, ledger posting, notifications, and audit history without collapsing the modules into each other.

This is a breakdown of what I built, the decisions behind it, and what the project made concrete for me in practice. Not a tutorial. Just a direct account of designing and implementing a payment system from the ground up.

What PayFlow does

PayFlow is a payment processing backend. The full flow looks like this:

  1. A merchant registers and gets a raw API key
  2. The merchant signs every request with HMAC-SHA256
  3. A payment is submitted and stored as PENDING
  4. The fraud module evaluates it and returns a decision
  5. The payment transitions to AUTHORISED or DECLINED
  6. An authorised payment triggers a ledger entry
  7. The merchant receives an email notification
  8. Every event is persisted in an audit log

The entire thing runs as a single deployable Spring Boot application. One process. One database. But internally, it is structured like separate services: each module owns its schema, its logic, and its data. Cross-module coordination happens through domain events, not direct table access or tightly coupled service calls.

That is the core of what a modular monolith is.

Why a Modular Monolith

The honest reason I chose this architecture is that I wanted to understand it before I ever touched a real microservices setup.

Everyone talks about microservices. Fewer people talk about the fact that most companies start with a monolith and that a poorly structured monolith is what leads to the "we need microservices" conversation in the first place.

A modular monolith forces you to draw clear boundaries before you have the operational complexity of distributed systems. You learn to think in terms of bounded contexts, event-driven communication, and module contracts — without juggling multiple deployments, service discovery, or distributed transactions.

The architecture also has a practical appeal. Running a single process is simple. One docker compose up. One JVM. One database connection pool. And if the modules stay disciplined enough, you already have the seams you would need later for extraction, because the contracts are events and module APIs instead of implicit calls scattered through the codebase.

Spring Modulith: Boundaries With Teeth

The most important tool in this project was Spring Modulith.

The idea is simple. You organise your code into top-level packages — one per module. Spring Modulith treats each of those packages as a bounded context. It then gives you the ability to verify, at test time, that no module is violating another module's boundaries.

Here is what the package structure looks like:

com.example.payflow
ā”œā”€ā”€ payments/
ā”œā”€ā”€ fraud/
ā”œā”€ā”€ ledger/
ā”œā”€ā”€ notifications/
ā”œā”€ā”€ merchant/
ā”œā”€ā”€ audit/
└── shared/

Each of these is a module. The shared package carries the common contracts that let them talk without collapsing into one giant package. Spring Modulith verifies those boundaries at test time, which is important because modularity only matters when it is enforced.

The verification happens through a simple test:

@Test
void verifyModularStructure() {
    ApplicationModules.of(PayFlowApplication.class).verify();
}

That one line runs with the test suite on every build and tells you exactly which module boundary you broke and where. It is like having a linter for your architecture, which is the kind of guardrail that stops a clean design from slowly turning into convenience-driven coupling.

ApplicationModuleListener

The other thing Spring Modulith gave me is @ApplicationModuleListener.

A plain @EventListener in Spring runs synchronously, inside the publisher's transaction. If the listener throws, it rolls back the original transaction. For a payment system, that is a problem: a failing fraud assessment should not roll back a payment that was already saved.

@ApplicationModuleListener changes that. It gives the listener its own transactional boundary, integrates with Spring Modulith's event publication infrastructure, and makes the event flow observable and retryable. That is the difference between "we publish events" as a code style and "we publish events" as something the runtime can actually manage.

Here is what it looks like in the fraud module:

@ApplicationModuleListener
public void on(PaymentTransactionInitiated event) {
    var result = ruleEngine.evaluate(event.getAmount());
    var assessment = FraudAssessment.create(event.getPaymentId(), event.getMerchantId(), result.score(), result.decision());
    repository.save(assessment);

    publisher.publish(new FraudAssessmentCompleted(
        event.getCorrelationId(),
        assessment.getTransactionId(),
        assessment.getMerchantId(),
        assessment.getScore(),
        assessment.getDecision().name()
    ));
}

The fraud module does not know about payments. It listens to an event from shared, does its work, and publishes back to shared. The payments module listens to that result and updates the payment status. Ledger, notifications, and audit all do the same thing on their side. Each module responds to the stream of events it cares about and ignores the rest.

The Event System

Every event in the system extends DomainEvent:

public abstract class DomainEvent {
    private final String eventId = UUID.randomUUID().toString();
    private final String correlationId;
    private final Instant occurredAt = Instant.now();

    protected DomainEvent(String correlationId) {
        this.correlationId = correlationId;
    }
}

Every event gets a unique eventId, carries a correlationId that traces the entire payment flow from start to finish, and timestamps itself at creation. When you look at the audit log for a single payment, you see every event that ever touched it in the order they happened, all sharing the same correlationId.
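To make the pattern concrete, here is roughly what the initiating event looks like. The fields mirror the getters the fraud listener uses above, but treat the exact shape as a sketch rather than the project's literal class:

public class PaymentTransactionInitiated extends DomainEvent {

    private final UUID paymentId;
    private final UUID merchantId;
    private final BigDecimal amount;

    public PaymentTransactionInitiated(String correlationId, UUID paymentId,
                                       UUID merchantId, BigDecimal amount) {
        super(correlationId); // eventId and occurredAt come from the base class
        this.paymentId = paymentId;
        this.merchantId = merchantId;
        this.amount = amount;
    }

    public UUID getPaymentId() { return paymentId; }
    public UUID getMerchantId() { return merchantId; }
    public BigDecimal getAmount() { return amount; }
}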

The events I built:

Event                           Published By   Consumed By
PaymentTransactionInitiated     Payments       Fraud, Audit
FraudAssessmentCompleted        Fraud          Payments, Ledger, Audit
PaymentTransactionAuthorized    Payments       Ledger, Notifications, Audit
PaymentTransactionDeclined      Payments       Notifications, Audit
LedgerEntryPosted               Ledger         Audit

The audit module listens to every event and records it. That gives you a full event history per payment without any module needing to know the audit module exists. The same event objects also carry the merchant and payment identifiers needed for downstream consumers, which keeps the listeners self-sufficient.
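Because everything extends DomainEvent, the audit module can subscribe once to the base type and see the whole stream. A minimal sketch, assuming a hypothetical AuditEventRecord entity on the audit side:

@ApplicationModuleListener
public void on(DomainEvent event) {
    // Spring dispatches events polymorphically, so one listener on the
    // base type receives every event in the system
    auditRepository.save(AuditEventRecord.from(event));
}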

Security Model

The security story in PayFlow is more than "put an API key in a header."

Merchants register once and receive a raw key. Every protected request must include X-Merchant-ID, X-Timestamp, and X-Signature. The server rebuilds the payload as timestamp + "." + body, recomputes the HMAC with the merchant's active key, and rejects the request if the signature is missing, stale, or invalid.

That gives the API three useful properties at once:

  1. Requests are authenticated
  2. Payload tampering is detectable
  3. Replay attempts are constrained by the timestamp window

The filter does not trust the transport layer alone. It wraps the request body, verifies the signature before the request reaches the business logic, and only then places a merchant principal into the security context. Public access is limited to merchant registration. Everything else goes through the signing path.
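The post does not reproduce the filter, but the shape is a standard OncePerRequestFilter. Here is a sketch under that assumption; HmacAuthenticationFilter, CachedBodyRequest, MerchantAuthenticationToken, and secretFor are names invented for illustration:

public class HmacAuthenticationFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        var merchantId = request.getHeader("X-Merchant-ID");
        var timestamp = request.getHeader("X-Timestamp");
        var signature = request.getHeader("X-Signature");

        // Buffer the body so it can be read for verification and again by the controller
        var wrapped = new CachedBodyRequest(request);

        if (merchantId == null || timestamp == null || signature == null
                || !verifier.verify(secretFor(merchantId), timestamp, wrapped.bodyAsString(), signature)) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }

        // Only a verified request puts a merchant principal into the security context
        SecurityContextHolder.getContext()
                .setAuthentication(new MerchantAuthenticationToken(merchantId));
        chain.doFilter(wrapped, response);
    }
}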

The Payment Domain Model

One of the things I was most deliberate about was keeping the domain model honest.

Payment is not an anemic data class. It has state and it protects that state:

@Entity
@Table(schema = "payments", name = "payment")
@NoArgsConstructor(access = AccessLevel.PROTECTED)
public class Payment {

    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    private UUID id;

    @Version
    private Long version; // optimistic locking

    @Enumerated(EnumType.STRING)
    private PaymentStatus status;

    private Instant updatedAt;

    public static Payment create(UUID correlationId, String idempotencyKey, ...) {
        var payment = new Payment();
        payment.status = PaymentStatus.PENDING;
        // ...
        return payment;
    }

    public void authorise() {
        this.status = PaymentStatus.AUTHORISED;
        this.updatedAt = Instant.now();
    }

    public void decline() {
        this.status = PaymentStatus.DECLINED;
        this.updatedAt = Instant.now();
    }
}

You cannot construct a Payment with a public constructor. You cannot set the status directly. You go through authorise() or decline(), which are the only valid transitions. This is a rich domain model and it makes the business rules impossible to bypass.

The @Version field handles optimistic locking. If two processes try to update the same payment at the same time, only one wins. The other gets an OptimisticLockingFailureException. This matters in payment systems, where duplicate processing is a real risk.
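From the caller's side, the conflict surfaces as an exception to handle rather than a silently lost write. A sketch, with the surrounding service and the exception type assumed for illustration:

try {
    payment.authorise();
    paymentRepository.save(payment); // the @Version check runs when this flushes
} catch (OptimisticLockingFailureException e) {
    // Another writer updated the row first; surface the conflict instead of overwriting
    throw new ConcurrentPaymentUpdateException(payment.getId(), e);
}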

Idempotency

Idempotency was one of the first things I thought about when designing the payments module.

A client submitting a payment might lose the network connection after sending the request. They will retry. If your system is not idempotent, you create two payments for one intent.

The fix is simple but needs to be done correctly. The payment request carries an idempotencyKey, and before creating a new payment the service checks if that key already exists:

public PaymentResponse submit(UUID merchantId, SubmitPaymentRequest request) {
    var existing = paymentRepository.findByIdempotencyKey(request.idempotencyKey());
    if (existing.isPresent()) {
        return toResponse(existing.get());
    }
    // create the payment
}

The idempotency key has a unique constraint at the database level too. So even if two requests arrive simultaneously, the second one will fail at the DB constraint and the first one wins. You always return the same response for the same key.
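That race still surfaces inside the service as an exception on the losing insert, so the create path has to catch it and fall back to the stored row. A sketch of that handling, assuming Spring's DataIntegrityViolationException wraps the constraint violation and a hypothetical createFrom factory helper:

try {
    var payment = paymentRepository.save(createFrom(merchantId, request));
    return toResponse(payment);
} catch (DataIntegrityViolationException e) {
    // A concurrent request with the same idempotency key won the insert;
    // return the winner's result so both callers see the same response
    return toResponse(paymentRepository
        .findByIdempotencyKey(request.idempotencyKey()).orElseThrow());
}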

Security: HMAC Request Signing

I wanted merchants to authenticate with something more interesting than a plain API key in a header. HMAC request signing was the answer.

The concept: the merchant has a secret key. For each request, they compute a signature over the timestamp and request body using HMAC-SHA256. The server independently computes the same signature and compares them. If they match, the request is authentic and untampered.

public boolean verify(String secret, String timestamp, String body, String signature) {
    var expected = computeSignature(secret, timestamp, body);
    return MessageDigest.isEqual(
        expected.getBytes(StandardCharsets.UTF_8),
        signature.getBytes(StandardCharsets.UTF_8)
    );
}

private String computeSignature(String secret, String timestamp, String body) {
    try {
        var mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        var payload = timestamp + "." + body;
        return HexFormat.of().formatHex(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    } catch (GeneralSecurityException e) {
        // HmacSHA256 is guaranteed on every JVM, so this is effectively unreachable
        throw new IllegalStateException("HMAC-SHA256 unavailable", e);
    }
}

The filter also checks that the timestamp is not more than 5 minutes old. This blocks replay attacks — someone intercepting a valid signed request and resending it later.

MessageDigest.isEqual does constant-time comparison, which prevents timing attacks where an attacker can guess characters of the expected signature by measuring how long the comparison takes.
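The freshness check itself is only a few lines. A sketch, assuming the X-Timestamp header carries epoch seconds the way the client example below does:

private static final Duration MAX_AGE = Duration.ofMinutes(5);

private boolean isFresh(String timestamp) {
    var sent = Instant.ofEpochSecond(Long.parseLong(timestamp));
    return Duration.between(sent, Instant.now()).abs().compareTo(MAX_AGE) <= 0;
}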

To generate the signature on the client side:

TIMESTAMP=$(date +%s)
BODY='{"payeeAccountId":"...","amount":"150.00",...}'
SIGNATURE=$(printf '%s.%s' "$TIMESTAMP" "$BODY" | openssl dgst -sha256 -hmac "$API_KEY" -hex | sed 's/^.* //')

API Key Management

The merchant module handles the full lifecycle of API keys.

When a merchant registers, a 32-byte cryptographically random key is generated with SecureRandom, encrypted before persistence, and returned to the merchant exactly once. The stored form uses AES-GCM with a random IV prepended to the ciphertext, so every stored key carries the IV it needs to be decrypted later, even as the implementation evolves.
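A sketch of that generate-and-encrypt path using the standard JCA classes; the method names and master-key handling are assumptions rather than the project's actual code:

private static final SecureRandom RANDOM = new SecureRandom();

String generateRawKey() {
    var raw = new byte[32];                              // 256 bits of entropy
    RANDOM.nextBytes(raw);
    return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
}

byte[] encrypt(byte[] plaintext, SecretKey masterKey) throws GeneralSecurityException {
    var iv = new byte[12];                               // fresh 96-bit IV per encryption
    RANDOM.nextBytes(iv);
    var cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, masterKey, new GCMParameterSpec(128, iv));
    var ciphertext = cipher.doFinal(plaintext);
    return ByteBuffer.allocate(iv.length + ciphertext.length)
            .put(iv).put(ciphertext).array();            // IV travels with the ciphertext
}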

Keys expire after 90 days. There is a rotation endpoint for when merchants need a new key. A scheduled job runs every night at 2am and purges all expired or revoked keys from the database.

@Scheduled(cron = "0 0 2 * * *")
@Transactional
public void purgeExpiredAndRevokedKeys() {
    apiKeyRepository.deleteByActiveFalseOrExpiresAtBefore(Instant.now());
}

Every authentication call records lastUsedAt on the key. This gives you visibility into which keys are active, which ones are stale, and whether a rotation policy is actually being used in practice instead of just existing on paper.

Database Design

Each module has its own PostgreSQL schema. payments, fraud, ledger, merchant, audit — each isolated. No joins across module boundaries at the database level. Cross-module coordination happens in the application layer through events, which keeps the persistence model aligned with the module model.

The migrations are managed with Flyway and placed under db/migration/postgresql/. Every migration is versioned and written with proper constraints:

CREATE TABLE IF NOT EXISTS payments.payment
(
    id                   UUID           NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
    correlation_id       UUID           NOT NULL,
    idempotency_key      VARCHAR(128)   NOT NULL,
    amount               DECIMAL(18, 4) NOT NULL,
    status               VARCHAR(20)    NOT NULL,
    version              BIGINT         NOT NULL DEFAULT 0,
    created_at           TIMESTAMPTZ    NOT NULL,
    updated_at           TIMESTAMPTZ    NOT NULL,

    CONSTRAINT uq_payment_idempotency_key UNIQUE (idempotency_key),
    CONSTRAINT chk_payment_status CHECK (status IN ('PENDING', 'AUTHORISED', 'DECLINED')),
    CONSTRAINT chk_payment_amount CHECK (amount > 0)
);

TIMESTAMPTZ instead of TIMESTAMP means timestamps are always stored in UTC. The CHECK constraints enforce valid states at the database level, not just the application level. The unique constraint on idempotency_key is the last line of defence against duplicate payments.

Testing

The test suite covers module structure, service behaviour, controllers, event listeners, and the security-critical pieces like API key encryption and request verification.

Unit tests use JUnit 5 with Mockito and AssertJ. Controller tests use @SpringBootTest with MockMvc. The test configuration uses H2 in PostgreSQL compatibility mode so the in-memory database behaves close to production, while Modulith tests verify that the package boundaries still hold.
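The datasource side of that setup is a single property. The exact flags here are a plausible sketch, not copied from the project:

spring.datasource.url=jdbc:h2:mem:payflow;MODE=PostgreSQL;DATABASE_TO_LOWER=TRUE;DEFAULT_NULL_ORDERING=HIGH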

Here is a representative test showing how I verified the event chain in the fraud module:

@Test
void on_persistsApproveDecisionAndPublishesCompletedEvent() {
    when(repository.save(any(FraudAssessment.class))).thenAnswer(i -> i.getArgument(0));
    var event = initiatedEvent(new BigDecimal("150.00"));

    listener.on(event);

    var assessmentCaptor = ArgumentCaptor.forClass(FraudAssessment.class);
    verify(repository).save(assessmentCaptor.capture());
    assertThat(assessmentCaptor.getValue().getDecision()).isEqualTo(FraudDecision.APPROVE);

    var eventCaptor = ArgumentCaptor.forClass(FraudAssessmentCompleted.class);
    verify(publisher).publish(eventCaptor.capture());
    assertThat(eventCaptor.getValue().getDecision()).isEqualTo("APPROVE");
}

The test verifies two things at once: the assessment was persisted with the right decision, and the correct event was published with the right data. ArgumentCaptors let you inspect exactly what was passed to your dependencies without needing the real implementations.

That same style shows up across the codebase. Payment tests assert idempotency and status transitions. Notification tests verify that merchant emails are sent from the event payload. Audit tests verify that the persisted event log can be queried in merchant scope and in payment scope. The result is not just "there are tests," but tests that map to the actual business flow.

What Actually Runs

The full running system:

docker compose up -d   # starts PostgreSQL
./mvnw spring-boot:run # starts the app

Flyway runs the migrations automatically on startup. Virtual threads are enabled with spring.threads.virtual.enabled=true — one line and the JVM uses Project Loom under the hood.

A real payment flow through the system end to end:

# Register
curl -X POST http://localhost:8080/api/v1/merchants/register \
  -H "Content-Type: application/json" \
  -d '{"name": "Acme Corp", "email": "merchant@example.com"}'

# Sign and submit payment
TIMESTAMP=$(date +%s)
BODY='{"payeeAccountId":"...","idempotencyKey":"key-001","amount":"250.00","currency":"USD","paymentMethod":{"type":"CARD","token":"tok_test"}}'
SIGNATURE=$(printf '%s.%s' "$TIMESTAMP" "$BODY" | openssl dgst -sha256 -hmac "$API_KEY" -hex | sed 's/^.* //')

curl -X POST http://localhost:8080/api/v1/payments \
  -H "Content-Type: application/json" \
  -H "X-Merchant-ID: $MERCHANT_ID" \
  -H "X-Timestamp: $TIMESTAMP" \
  -H "X-Signature: $SIGNATURE" \
  -d "$BODY"

# Check the audit trail (re-sign with a fresh timestamp and an empty body,
# assuming the filter verifies timestamp + "." for body-less requests)
TIMESTAMP=$(date +%s)
SIGNATURE=$(printf '%s.' "$TIMESTAMP" | openssl dgst -sha256 -hmac "$API_KEY" -hex | sed 's/^.* //')

curl http://localhost:8080/api/v1/payments/{id}/events \
  -H "X-Merchant-ID: $MERCHANT_ID" \
  -H "X-Timestamp: $TIMESTAMP" \
  -H "X-Signature: $SIGNATURE"

The audit trail response shows every event in the order they occurred:

{
  "data": [
    { "eventType": "Payment.Transaction.Initiated", "occurredAt": "..." },
    { "eventType": "Fraud.Assessment.Completed", "occurredAt": "..." },
    { "eventType": "Payment.Transaction.Authorised", "occurredAt": "..." },
    { "eventType": "Ledger.Entry.Posted", "occurredAt": "..." }
  ]
}

Why The Design Holds Up

The part I like most about PayFlow is that the architecture is not just decorative.

The modules are verified. The payment lifecycle is explicit. Fraud, ledger posting, notifications, and audit all hang off the same event flow. Merchant notifications do not need to reach back into another module for data they should already have. Audit does not need custom plumbing from each feature area. Security is enforced before business code runs. API keys are encrypted at rest, expire automatically, and stale keys are cleaned up on a schedule.

That combination makes the project feel coherent. Not because it is huge, but because the responsibilities line up cleanly from HTTP entry point to database write to downstream event consumers.

What Still Interests Me

The fraud module is intentionally small right now, which I actually think is the right tradeoff for this version. The rule engine already has a clear entry point, score, and decision model, so it is easy to imagine extending it into weighted rules, merchant-specific policies, or more advanced risk signals without rewriting the surrounding flow.

That is the kind of growth path I wanted from the start: a system that already has the right shape, so the next level of complexity has somewhere sensible to go.

Closing Thoughts

What I value most about PayFlow is that it forced the important decisions to stay concrete. Module boundaries had to be enforceable. Payment state transitions had to be explicit. Security had to start before the controller layer. Auditability had to come from the same event flow that drives the rest of the system, not from side-channel bookkeeping added later.

That changed how I think about backend design. It is one thing to talk about decoupling, idempotency, and reliable event handling in the abstract. It is another thing to wire those concerns into a system where persistence, security, and downstream consumers all depend on the same decisions being sound.

The result is a project that feels structurally consistent from the API boundary down to the database and event stream. That is the standard I care about now: systems that are clear under load, deliberate in their boundaries, and honest about the work they are doing.

The code is on GitHub if you want to look at it. It runs, it is tested, and it does what it says it does.