In the fintech landscape of 2026, users expect immediacy: it is no longer acceptable to wait days for preliminary feedback on a financing request. Event-driven architecture transforms slow, monolithic banking processes into reactive, scalable data streams, enabling real-time management of mortgage application processing. This technical article explores how to engineer a distributed system capable of managing the mortgage lifecycle while ensuring resilience and data consistency.
1. The Problem: Limits of Request/Response Architectures in Mortgages
Traditionally, a mortgage file was orchestrated by a monolith or by microservices coupled via synchronous HTTP calls (REST/gRPC). This approach has structural weaknesses:
- Temporal Coupling: If the Credit Scoring service is slow, the entire request process blocks, leaving the user waiting in front of a spinner.
- Inefficient Polling: Downstream systems must continuously query the central database to know if there are new files to work on (“Are we there yet?”), wasting computational resources.
- Fragile Error Handling: In a chain of synchronous calls, the failure of a peripheral service (e.g., the PDF generator) can cause the entire transaction to fail.
2. The Solution: Event-Driven Architecture (EDA)

In an event-driven architecture, microservices do not speak directly to each other. Instead, they produce and consume events. An event is an immutable fact that happened in the past (e.g., MortgageApplicationSubmitted).
Key Architecture Components
For our use case, we will compare two dominant technological backbones:
- Apache Kafka: Ideal for high throughput and when Log Retention is needed to reprocess events (Replayability). It is the preferred choice for banks requiring an immutable on-premise or hybrid audit trail.
- Amazon EventBridge: Serverless solution perfect for intelligent event routing on AWS cloud. Reduces operational overhead but has limits on payload size and retention compared to Kafka.
Architectural Decision: For a complex mortgage system requiring rigorous history and audits, we will use Apache Kafka as the central Event Bus, integrating Schema Registry patterns (e.g., Avro or Protobuf) to ensure data contract compatibility.
3. Flow Design: Choreography vs Orchestration

Managing a mortgage application is a Long-Running Process. We must decide how to coordinate services:
Involved Microservices
- Application Service: Receives the request from the user.
- Scoring Service: Evaluates credit risk (Crif, Experian).
- Document Service: Manages upload and OCR validation of documents.
- Bank Gateway: Communicates with legacy banking systems for the final decision.
- Notification Service: Sends email/SMS/Push to the user.
We will use a hybrid approach: Choreography for state events (pub/sub) and Orchestration (via the Saga pattern) for transactional consistency management.
4. Managing Consistency: The Saga Pattern
In a distributed system, we cannot use local database ACID transactions for processes spanning multiple services. We must embrace Eventual Consistency. But what happens if the Bank Gateway rejects the application after the Scoring Service had approved it?
We must implement the Saga Pattern to manage rollbacks (compensating transactions).
Example of Saga Flow (Choreography)
Let’s imagine the happy path and the failure path:
Step 1: Transaction Start
The user submits the request. The Application Service publishes the event:
{
  "eventId": "uuid-1234",
  "eventType": "MortgageApplicationSubmitted",
  "payload": {
    "applicationId": "M-999",
    "amount": 200000,
    "applicant": "Mario Rossi"
  }
}
Step 2: Parallel Processing
The Scoring Service and the Document Service listen for the event.
The Scoring Service approves and publishes CreditScoreApproved.
The Document Service validates the PDFs and publishes DocumentsValidated.
Step 3: Aggregation and Decision
The Bank Gateway awaits both events. Once received, it attempts to finalize the application on the banking mainframe.
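This aggregation step can be sketched in plain Java (the Kafka consumer wiring is omitted, and the class and method names are illustrative, not part of any framework): finalization is attempted only once both prerequisite events have arrived for a given applicationId.

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the Bank Gateway's aggregation step: finalization is
// attempted only once both prerequisite events have been received
// for a given applicationId.
class FinalizationAggregator {
    enum Prerequisite { CREDIT_SCORE_APPROVED, DOCUMENTS_VALIDATED }

    private final Map<String, Set<Prerequisite>> received = new HashMap<>();

    // Records an incoming event; returns true when both prerequisites
    // are present and the mainframe call can be attempted.
    public boolean track(String applicationId, Prerequisite event) {
        Set<Prerequisite> seen = received.computeIfAbsent(
                applicationId, id -> EnumSet.noneOf(Prerequisite.class));
        seen.add(event);
        return seen.containsAll(EnumSet.allOf(Prerequisite.class));
    }
}
```

In production this state would live in a durable store (e.g., a Kafka Streams state store or a database) so a restarted gateway does not lose partially aggregated applications.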
Step 4: Failure and Compensation (Rollback)
If the mainframe responds with an error (e.g., “Insufficient funds” or “Timeout”), the Bank Gateway publishes the event MortgageFinalizationFailed.
At this point, Compensating Transactions are triggered:
- The Scoring Service listens for the failure and releases any “locks” on the user’s credit rating.
- The Application Service listens for the failure and updates the application status from “Processing” to “Rejected”, notifying the user.
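The compensation steps above can be sketched as a simple dispatch on the consuming service (the service keys and action strings are illustrative; in the real system each service would run its own handler upon consuming MortgageFinalizationFailed):

```java
// Sketch of choreography-based compensation: each service reacts to
// MortgageFinalizationFailed with its own compensating action.
// Service keys and action strings are illustrative only.
class CompensationHandler {
    public static String onFinalizationFailed(String service, String applicationId) {
        switch (service) {
            case "scoring":
                // Scoring Service: release the lock on the credit rating.
                return "release credit-rating lock for " + applicationId;
            case "application":
                // Application Service: mark Rejected and notify the user.
                return "set " + applicationId + " to Rejected and notify user";
            default:
                // Services with nothing to undo simply ignore the event.
                return "ignore";
        }
    }
}
```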
5. Technical Details and Best Practices
Idempotency
In Kafka, exactly-once delivery is complex. It is safer to design consumers to be idempotent: if the Notification Service receives the MortgageApproved event twice, it must be able to detect (via a unique event ID stored in Redis or a database) that it has already sent the email, and discard the duplicate.
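A minimal idempotent-consumer sketch in plain Java; the in-memory set stands in for the Redis/DB store of processed event IDs, and the class name is illustrative:

```java
import java.util.HashSet;
import java.util.Set;

// Idempotent consumer sketch: the in-memory set stands in for the
// Redis/DB store of already processed event IDs.
class IdempotentNotifier {
    private final Set<String> processedEventIds = new HashSet<>();
    private int emailsSent = 0;

    // Returns true if the event was processed, false if it was a
    // duplicate delivery and was discarded.
    public boolean handle(String eventId) {
        if (!processedEventIds.add(eventId)) {
            return false; // already seen: skip the side effect
        }
        emailsSent++; // stand-in for actually sending the email
        return true;
    }

    public int emailsSent() { return emailsSent; }
}
```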
Dead Letter Queues (DLQ)
What happens if an event is malformed and crashes the consumer? We cannot block the queue. The problematic event must be moved to a Dead Letter Queue after X failed attempts, allowing the engineering team to analyze it manually without stopping the flow of other applications.
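The retry-then-park mechanism can be illustrated with a small counter (in practice the messaging layer, e.g., Spring Cloud Stream's enableDlq option, does this routing for you; this sketch only shows the logic):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// DLQ sketch: after MAX_ATTEMPTS failures an event is parked in the
// dead-letter queue instead of blocking the main flow. Illustrative
// only; real binders implement this routing themselves.
class DlqRouter {
    static final int MAX_ATTEMPTS = 3;

    private final Map<String, Integer> attempts = new HashMap<>();
    private final Deque<String> deadLetters = new ArrayDeque<>();

    // Called when processing of eventId fails; returns true if the
    // event should be retried, false once it was moved to the DLQ.
    public boolean onFailure(String eventId) {
        int count = attempts.merge(eventId, 1, Integer::sum);
        if (count >= MAX_ATTEMPTS) {
            deadLetters.add(eventId);
            return false;
        }
        return true;
    }

    public int deadLetterCount() { return deadLetters.size(); }
}
```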
Schema Evolution
Mortgage applications change over time (new regulations, new data fields), so using a Schema Registry is fundamental: producers and consumers agree on a versioned schema (e.g., Avro). If we add a field such as discounted_interest_rate with a default value, existing consumers must keep working, and the registry enforces the configured compatibility rules on every new schema version.
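As an illustration, a safe evolution in Avro adds the new field with a default value. The schema below is a hypothetical sketch of the mortgage event, not the actual contract:

```json
{
  "type": "record",
  "name": "MortgageApplicationSubmitted",
  "fields": [
    { "name": "applicationId", "type": "string" },
    { "name": "amount", "type": "long" },
    { "name": "applicant", "type": "string" },
    { "name": "discounted_interest_rate",
      "type": ["null", "double"], "default": null }
  ]
}
```

Because the new field has a default, readers on the old schema simply ignore it, while readers on the new schema can still decode old records by falling back to the default.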
6. Implementation: Kafka Configuration Snippet (Java/Spring Boot)
Here is an example of a consumer with retry/DLQ-oriented error handling in a Spring Cloud Stream context (Kafka binder):
@Bean
public Consumer<MortgageEvent> mortgageProcessor() {
    return event -> {
        if ("MortgageApplicationSubmitted".equals(event.getType())) {
            try {
                scoringService.calculate(event.getPayload());
            } catch (Exception e) {
                // Rethrowing lets the binder retry the record and, once
                // attempts are exhausted, route it to the DLQ
                // (with enableDlq=true on the Kafka binding).
                throw new IllegalStateException("Scoring failed", e);
            }
        }
    };
}
7. Conclusions
Switching to an event architecture for mortgage management is not just a technological style exercise, but a business necessity. It allows decoupling development teams (the “Documents” team can release updates without coordinating with the “Bank” team), scaling services independently (more resources to Scoring during request peaks), and offering the end user a fluid and transparent experience.
The complexity introduced by managing eventual consistency and compensation patterns is the price to pay for obtaining a resilient system, capable of handling high volumes without the bottlenecks of centralized relational databases.
Frequently Asked Questions

Why choose an event-driven architecture over a request/response one?
This architecture overcomes the limits of monolithic systems by eliminating temporal coupling and inefficient polling. It transforms slow processes into reactive flows, ensures independent service scalability, and gives the user immediate feedback instead of leaving them waiting in front of an endless loading screen.

How does the Saga Pattern keep data consistent across services?
The Saga Pattern manages data consistency through a series of coordinated local transactions. If a step fails, such as a rejection from the banking gateway, the system executes compensating transactions to undo previous operations, so the final system state remains coherent without blocking resources.

When is Apache Kafka preferable to Amazon EventBridge?
Apache Kafka is preferable when rigorous history logging and the ability to reprocess past events are required, which are essential for banking audit trails. Unlike EventBridge, which excels at serverless routing, Kafka handles larger payloads and guarantees immutable data persistence on-premise or in hybrid environments.

What is idempotency and why is it crucial?
Idempotency is the ability of a system to handle the same event multiple times without producing duplicate side effects. It is crucial in architectures built on Kafka, where exactly-once delivery is complex; consumers must recognize already processed events to avoid, for example, sending double notifications to the client.

What are Dead Letter Queues used for?
To prevent an erroneous event from blocking the entire processing queue, after a defined number of failed attempts the problematic event is moved to a Dead Letter Queue, where engineers can analyze it manually while the main flow of applications continues without interruption.
Did you find this article helpful? Is there another topic you’d like to see me cover?
Write it in the comments below! I take inspiration directly from your suggestions.