In the Fintech landscape of 2026, data management is no longer just about preserving the current state, but about the ability to mathematically demonstrate how that state was reached. Banking event sourcing represents the paradigm shift necessary to meet stringent compliance regulations (such as PSD3 and Basel evolutions) and operational transparency needs. In this technical guide, we will explore why the CRUD (Create, Read, Update, Delete) architecture is now obsolete for critical financial CRMs and how to implement a system based on immutable events for managing mortgage applications.
For decades, software development has relied on the CRUD model. In a classic relational database, when a mortgage application moves from “Underwriting” to “Approved”, we execute an UPDATE command that overwrites the previous value. While efficient for storage, this approach entails a critical loss of information: history is lost.
In a banking context, overwriting data is an unacceptable risk. To guarantee the audit trail, developers often resort to separate log tables or complex database triggers. However, this approach has two fatal flaws: the log tables can drift out of sync with the main database, since writes to the two are rarely atomic, and the business intent behind each change is still lost.
Banking event sourcing inverts this model. Instead of storing the current state of a mortgage application, we store the sequence of events that led to that state. The database no longer contains a modifiable row, but an immutable append-only log.
According to principles defined by experts like Martin Fowler, the current state of the application is purely a mathematical derivation: it is the sum of all past events replayed in sequence.
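This derivation is, in practice, a left fold over the event log. A minimal Python sketch of the idea (the event shapes and the `apply` transition rules here are illustrative, not taken from a real system):

```python
from functools import reduce

# Illustrative event log for one mortgage application (names are assumptions).
events = [
    {"type": "MortgageApplicationCreated", "amount": 250_000},
    {"type": "CreditScoringCalculated", "score": 720},
    {"type": "DecisionIssued", "decision": "APPROVED"},
]

def apply(state: dict, event: dict) -> dict:
    """Pure transition function: current state + one event -> next state."""
    state = dict(state)  # never mutate: each state is derived, not overwritten
    if event["type"] == "MortgageApplicationCreated":
        state.update(amount=event["amount"], status="CREATED")
    elif event["type"] == "CreditScoringCalculated":
        state["score"] = event["score"]
    elif event["type"] == "DecisionIssued":
        state["status"] = event["decision"]
    return state

# Current state = the fold of all past events, replayed in sequence.
current = reduce(apply, events, {})
print(current)  # {'amount': 250000, 'status': 'APPROVED', 'score': 720}
```

Because `apply` is pure, the same log always produces the same state, which is exactly the "mathematical derivation" property described above.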
Let’s imagine the lifecycle of a mortgage. In a CRUD system, we would have an Applications table. In Event Sourcing, we define precise domain events:
- MortgageApplicationCreated (contains customer ID, requested amount, date)
- IncomeDocumentationUploaded (contains references to PDFs, metadata)
- CreditScoringCalculated (contains the credit bureau score at the time of calculation)
- InterestRateLocked (contains the IRS rate of the day)
- DecisionIssued (contains the ID of the approver)

Each event is an immutable historical fact. It cannot be deleted or modified, only compensated by a subsequent event (e.g., DecisionCancelled).
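One way to make "immutable historical fact" concrete in code is to model each event as a frozen dataclass; a sketch (field names beyond those listed above are assumptions):

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)  # frozen=True forbids any mutation after creation
class MortgageApplicationCreated:
    customer_id: str
    requested_amount: float
    date: str

@dataclass(frozen=True)
class DecisionIssued:
    approver_id: str

@dataclass(frozen=True)
class DecisionCancelled:  # a compensating event: we never delete DecisionIssued
    reason: str

event = MortgageApplicationCreated("CUST-42", 250_000.0, "2026-01-11")
try:
    event.requested_amount = 0.0  # attempting to rewrite history...
except FrozenInstanceError:
    print("events are immutable")  # ...fails: history can only be compensated
```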
Implementing a banking event sourcing system almost always requires adopting the CQRS (Command Query Responsibility Segregation) pattern. Since reading a sequence of 100 events to reconstruct the state of an application every time an operator opens the dashboard is inefficient, we separate writing from reading.
The heart of the system is the Event Store. Technologies like Apache Kafka or Amazon Kinesis are ideal for this purpose due to their distributed log nature and durable persistence.
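In production the log would live in Kafka or Kinesis; to illustrate the append-only contract itself, here is a deliberately minimal in-memory stand-in (the class and its API are invented for this sketch):

```python
class AppendOnlyEventStore:
    """Toy event store: events can be appended and read, never updated or deleted."""

    def __init__(self):
        self._log = []  # in Kafka this would be a durable, partitioned topic

    def append(self, aggregate_id: str, event: dict) -> int:
        # Each event gets a monotonically increasing offset, like a Kafka offset.
        offset = len(self._log)
        self._log.append({"offset": offset, "aggregate_id": aggregate_id, **event})
        return offset

    def read(self, aggregate_id: str) -> list[dict]:
        # Reads return the full history of an aggregate, in append order.
        return [e for e in self._log if e["aggregate_id"] == aggregate_id]

store = AppendOnlyEventStore()
store.append("MORTGAGE-2026-8899", {"type": "MortgageApplicationCreated"})
store.append("MORTGAGE-2026-8899", {"type": "IncomeDocumentationUploaded"})
history = store.read("MORTGAGE-2026-8899")
print([e["type"] for e in history])
# ['MortgageApplicationCreated', 'IncomeDocumentationUploaded']
```

Note that the class intentionally exposes no update or delete method: corrections happen by appending compensating events.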
When a CRM agent clicks on “Approve Income”, the system:
1. Generates an IncomeVerified event.
2. Appends it to the append-only log (the Kafka topic mortgage-events-v1).

To display data in the CRM, we use "Consumers" that listen to the Kafka topic and update read-optimized databases (Projections). We can have different projections for the same data stream:
- An ActiveApplications table on PostgreSQL or MongoDB containing the current state for the agent's UI.

In event sourcing, the audit trail is not an add-on feature: it is the database itself. It is impossible to modify the state without leaving an indelible trace. This natively satisfies the non-repudiation requirements demanded by supervisory bodies.
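A projection consumer is just a second fold that updates a read model as events arrive; a minimal sketch, with an in-memory dict standing in for the ActiveApplications table (event shapes are illustrative):

```python
# Read model: stands in for the ActiveApplications table on PostgreSQL/MongoDB.
active_applications: dict[str, dict] = {}

def project(event: dict) -> None:
    """Consumer-side handler: update the read model from one incoming event."""
    row = active_applications.setdefault(event["aggregateId"], {"status": "NEW"})
    if event["eventType"] == "MortgageApplicationCreated":
        row["status"] = "CREATED"
    elif event["eventType"] == "DecisionIssued":
        row["status"] = event["payload"]["decision"]

# Events as they would arrive, in order, from the mortgage events topic:
stream = [
    {"aggregateId": "MORTGAGE-1", "eventType": "MortgageApplicationCreated",
     "payload": {}},
    {"aggregateId": "MORTGAGE-1", "eventType": "DecisionIssued",
     "payload": {"decision": "APPROVED"}},
]
for event in stream:
    project(event)

print(active_applications)  # {'MORTGAGE-1': {'status': 'APPROVED'}}
```

The write side never queries this table; it exists purely so the agent's UI can read the current state without replaying the log.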
This is perhaps the most powerful feature for developers and auditors. Imagine a customer disputes an interest rate applied six months ago. In a CRUD system, you would only see the current rate. With event sourcing, you can replay the event stream up to that exact date and reconstruct the rate, the input data, and the business rules that were active at that moment.
This allows answering questions like: “Why did the system reject the application on that day?” by reconstructing the exact context, including any bugs present in the code on that past date.
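Time-travel debugging falls out of the same replay mechanism: filter the log by timestamp before folding. A sketch under the same kind of illustrative event shapes used above:

```python
from functools import reduce

# Illustrative log: the rate was re-locked once, months after the original lock.
log = [
    {"ts": "2025-07-01T09:00:00Z", "type": "InterestRateLocked", "rate": 3.10},
    {"ts": "2025-12-15T14:30:00Z", "type": "InterestRateLocked", "rate": 3.45},
]

def apply(state: dict, event: dict) -> dict:
    new = dict(state)
    if event["type"] == "InterestRateLocked":
        new["rate"] = event["rate"]
    return new

def state_as_of(log: list[dict], as_of: str) -> dict:
    """Replay only the events known at `as_of` (uniform ISO-8601 UTC strings
    compare correctly as plain strings)."""
    return reduce(apply, (e for e in log if e["ts"] <= as_of), {})

# What rate did the customer actually have six months ago?
print(state_as_of(log, "2025-08-01T00:00:00Z"))  # {'rate': 3.1}
print(state_as_of(log, "2026-01-01T00:00:00Z"))  # {'rate': 3.45}
```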
Here is how a structured JSON event for a banking system might look:
{
  "eventId": "550e8400-e29b-41d4-a716-446655440000",
  "eventType": "RiskAssessmentCompleted",
  "aggregateId": "MORTGAGE-2026-8899",
  "timestamp": "2026-01-11T10:15:30Z",
  "version": 1,
  "metadata": {
    "userId": "agent_rossi",
    "ipAddress": "192.168.1.50",
    "correlationId": "req-123-abc"
  },
  "payload": {
    "riskScore": "LOW",
    "maxLTV": 0.80,
    "interestRateSpread": 1.25,
    "rulesVersion": "v2025.12"
  }
}
Note the rulesVersion field in the payload: historicizing the version of the business rules used is also fundamental to justify automated decisions during an audit.
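Producing such an envelope is mostly boilerplate worth centralizing in one helper that stamps identity, time, and audit metadata. A sketch (the function name and its defaults are assumptions, not a standard API):

```python
import json
import uuid
from datetime import datetime, timezone

def make_event(event_type: str, aggregate_id: str, payload: dict,
               user_id: str, correlation_id: str) -> dict:
    """Wrap a domain payload in an audit envelope like the one shown above."""
    return {
        "eventId": str(uuid.uuid4()),  # globally unique, enables idempotent consumers
        "eventType": event_type,
        "aggregateId": aggregate_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": 1,
        "metadata": {"userId": user_id, "correlationId": correlation_id},
        "payload": payload,  # the domain payload carries rulesVersion
    }

event = make_event(
    "RiskAssessmentCompleted", "MORTGAGE-2026-8899",
    {"riskScore": "LOW", "rulesVersion": "v2025.12"},
    user_id="agent_rossi", correlation_id="req-123-abc",
)
print(json.dumps(event, indent=2))
```

Centralizing the envelope guarantees that no event ever reaches the log without a user, a correlation ID, and a timestamp attached.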
Adopting banking event sourcing is not without costs. Architectural complexity increases and requires careful management of event schema versioning, eventual consistency between the write log and the read projections, and regulatory constraints such as the GDPR right to be forgotten.
Despite these challenges, for core banking systems and modern financial CRMs, the benefits in terms of security, traceability, and resilience far outweigh the implementation costs. Moving to the event model means stopping data loss and starting to build a historical information asset of inestimable value.
Banking event sourcing is an architectural paradigm that stores data as an immutable sequence of historical events rather than overwriting the current state. This approach is crucial in modern fintech because it guarantees total transparency and allows for the mathematical reconstruction of every step of an application, perfectly meeting regulatory requirements such as PSD3 and Basel.
Using the CRUD model in banking systems is risky because the update operation overwrites previous data, erasing the history and the intent behind every change. This leads to the loss of critical information for the audit trail and creates potential misalignments between the main database and system logs, compromising financial data security.
The CQRS pattern clearly separates write operations from read operations to optimize CRM performance. In the banking context, events are written to a high-reliability distributed log like Apache Kafka, while information is read from dedicated projections on fast databases, allowing operators to view the status of applications in real-time without slowdowns.
With event sourcing, the audit trail is not an accessory feature but constitutes the very structure of the database. Since every action is recorded as an immutable event that cannot be modified or deleted, the system natively offers the proof of non-repudiation and complete traceability required by supervisory bodies for every operational decision.
Time-Travel Debugging is a powerful feature that allows replaying the sequence of events up to a precise moment in the past. This enables banks to reconstruct exactly the context, data, and business rules active at the moment a decision was made, providing precise answers in case of disputes over rates or decisions that occurred months prior.
To reconcile the immutability of the event log with the GDPR right to be forgotten, the crypto-shredding technique is often adopted. Sensitive data is saved in encrypted form, and in the event of a deletion request, only the decryption key is permanently deleted, rendering the historical information unreadable without having to alter the physical sequence of the log.
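The mechanics of crypto-shredding can be sketched with a toy one-time-pad cipher; everything here is illustrative (a real system would use AES via a managed key service, and one key would never be reused across messages):

```python
import secrets

keys: dict[str, bytes] = {}  # per-customer key store, kept OUTSIDE the event log

def encrypt_pii(customer_id: str, plaintext: bytes) -> bytes:
    """Toy single-message cipher: XOR with a random key as long as the message."""
    key = secrets.token_bytes(len(plaintext))
    keys[customer_id] = key
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt_pii(customer_id: str, ciphertext: bytes) -> bytes:
    key = keys[customer_id]  # raises KeyError once the key is shredded
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# The immutable log stores only ciphertext:
stored = encrypt_pii("CUST-42", b"Mario Rossi, Via Roma 1")
assert decrypt_pii("CUST-42", stored) == b"Mario Rossi, Via Roma 1"

# GDPR erasure request: shred the key, never touch the log.
del keys["CUST-42"]
# `stored` is still physically in the append-only log, but is now unreadable.
```

The key point is that erasure is an operation on the key store, not on the log, so the append-only guarantee and the audit trail survive intact.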