Loosely Coupled by Design

The decisions that shape a platform's architecture determine its capacity to evolve. Ludopoly Analytics adopts a loosely coupled microservice model organised into five horizontal layers. Each layer owns a clearly bounded responsibility, communicates with adjacent layers through well-defined contracts, and can be scaled, updated, or replaced without propagating changes to its neighbours. This is not an incidental property of the codebase — it is enforced by the system's deployment topology, where every layer runs as an independent Kubernetes service group with its own scaling policies and failure boundaries.

The practical consequence is that the platform can absorb two categories of change that routinely disrupt monolithic analytics systems: new blockchain protocols and new regulatory requirements. Adding support for a new EVM-compatible chain requires deploying a single adapter in the capture layer. Adapting to a new FATF recommendation requires updating rule definitions in the AML service module. Neither change crosses a layer boundary.
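The adapter pattern described above can be sketched as a small registry behind a shared contract. This is a hypothetical illustration, not the platform's actual interface: the class names, the `fetch_block` signature, and the static payload are all assumptions made for the sketch.

```python
from abc import ABC, abstractmethod

class ChainAdapter(ABC):
    """Contract every capture-layer adapter implements (illustrative)."""
    chain_id: str

    @abstractmethod
    def fetch_block(self, height: int) -> dict:
        """Return a normalised block for the processing layer."""

# Registry of deployed adapters; adding a chain touches only this layer.
ADAPTERS: dict[str, ChainAdapter] = {}

def register(adapter: ChainAdapter) -> None:
    ADAPTERS[adapter.chain_id] = adapter

class NewEvmChainAdapter(ChainAdapter):
    """Support for a new EVM-compatible chain: one class, one deploy."""
    chain_id = "newchain"

    def fetch_block(self, height: int) -> dict:
        # In production this would call the chain's JSON-RPC endpoint;
        # stubbed here with a static payload to keep the sketch runnable.
        return {"chain": self.chain_id, "height": height, "txs": []}

register(NewEvmChainAdapter())
```

Because downstream layers see only the `ChainAdapter` contract, the new chain becomes visible to the processing layer the moment its adapter is registered, with no change elsewhere.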

[Architecture diagram] Data flows downward through capture, processing, and distribution into storage and service modules:

- Blockchain Capture Layer: Ethereum, Polygon, BSC, Arbitrum, 200+ chains
- Event Processing Layer: ABI Resolution, Validation, Enrichment, Classification, Indexing
- Message Distribution (Apache Kafka): topic-based pub/sub, independent consumer groups per module
- Polyglot Storage Layer: PostgreSQL, MongoDB, Redis, Neo4j, InfluxDB
- Service Modules: AML/CFT Engine, ZK-KYC, dApp Analytics, LLM Risk Engine

The Cross-Feeding Mechanism

The most distinctive aspect of the architecture is not its layered decomposition — many platforms adopt similar patterns — but the bi-directional data exchange between service modules. Traditional analytics stacks treat each module as a terminal consumer of pipeline data. Ludopoly Analytics allows modules to publish enriched events back onto the Kafka backbone, where other modules can consume them.

A practical example illustrates the value. The AML engine detects a wallet address exhibiting structuring behaviour — a series of transactions deliberately kept below reporting thresholds. It publishes a risk event onto a dedicated Kafka topic. The dApp analytics module, which subscribes to risk events, annotates the corresponding user segment in its cohort analysis with the elevated risk score. Simultaneously, the ZK-KYC module checks whether the flagged address has a verified identity credential on-chain. The result — verified or unverified — is published back, and the AML engine uses it as a weighting factor in its composite risk score.
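The cycle above can be sketched with an in-memory stand-in for the Kafka backbone. Everything here is an assumption for illustration: the topic names, the event shapes, and the 0.6 weighting factor are invented, and a real deployment would use Kafka consumer groups rather than a dict of callbacks.

```python
from collections import defaultdict

# Minimal in-memory stand-in for the Kafka backbone: topic -> handlers.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in list(subscribers[topic]):
        handler(event)

# --- dApp analytics module: annotates cohorts with incoming risk scores ---
cohort_annotations = {}
subscribe("risk.events", lambda e: cohort_annotations.update({e["address"]: e["score"]}))

# --- ZK-KYC module: answers risk events with credential status ---
VERIFIED_ADDRESSES = {"0xabc"}  # hypothetical on-chain credential set
def kyc_check(event):
    publish("kyc.results", {"address": event["address"],
                            "verified": event["address"] in VERIFIED_ADDRESSES})
subscribe("risk.events", kyc_check)

# --- AML engine: flags structuring, then reweights on KYC feedback ---
aml_base, composite_scores = {}, {}
def flag_structuring(address, score):
    aml_base[address] = score
    publish("risk.events", {"address": address, "score": score})

def reweight(result):
    factor = 0.6 if result["verified"] else 1.0  # illustrative weighting
    addr = result["address"]
    composite_scores[addr] = round(aml_base[addr] * factor, 2)
subscribe("kyc.results", reweight)

flag_structuring("0xdef", 0.9)  # unverified wallet: score stays at 0.9
flag_structuring("0xabc", 0.9)  # verified wallet: score discounted to 0.54
```

No module holds a reference to another; each one only publishes to and subscribes from topics, which is the property the architecture relies on.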

This cycle happens in near real-time and without any manual intervention. The modules do not call each other directly; they communicate exclusively through the event stream, preserving loose coupling while enabling deep functional integration. The pattern is sometimes called choreography-based integration, and it is the reason Ludopoly Analytics can combine the strengths of three distinct analytical domains without incurring the brittleness of direct service-to-service dependencies.

Cross-feeding is not optional — it is fundamental to the platform's accuracy. Risk scores that incorporate identity verification status and behavioural analytics consistently outperform single-dimension scoring models.
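A multi-dimension score of the kind described can be sketched as a simple weighted combination. The dimensions mirror the three domains in the text; the specific weights (0.5, 0.3, 0.2) are assumptions chosen for the sketch, not the platform's calibration.

```python
def composite_risk(behavioural: float, identity_verified: bool,
                   cohort_risk: float) -> float:
    """Combine behavioural, identity, and cohort signals into one
    score in [0, 1]. Weights are illustrative, not calibrated."""
    identity = 0.0 if identity_verified else 1.0
    score = 0.5 * behavioural + 0.3 * identity + 0.2 * cohort_risk
    return round(min(score, 1.0), 3)
```

The point the sketch makes concrete: the same behavioural pattern yields a materially different score depending on identity status, e.g. `composite_risk(0.9, False, 0.7)` is higher than `composite_risk(0.9, True, 0.7)`, which a single-dimension model cannot express.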

Polyglot Persistence

A single data model cannot efficiently serve the diverse query patterns that a blockchain analytics platform encounters. Relational queries (subscription management, user accounts) demand the transactional guarantees of PostgreSQL. Document-oriented analytics results — cohort snapshots, metric summaries — suit MongoDB's flexible schema. Frequently accessed hot data lives in Redis for sub-millisecond retrieval. Transaction relationship graphs, the foundation of AML path-finding and community-detection algorithms, are stored and queried in Neo4j. Time-series metrics — gas costs, block intervals, throughput measurements — flow into InfluxDB, optimised for append-heavy, time-windowed workloads.
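The workload-to-store mapping above can be made explicit as a small routing table. The category names and the routing function are hypothetical conveniences for the sketch; only the store-to-workload pairings come from the text.

```python
# Routing table mirroring the persistence choices described above.
STORE_FOR = {
    "transactional": "postgresql",  # subscriptions, user accounts
    "document":      "mongodb",     # cohort snapshots, metric summaries
    "hot_cache":     "redis",       # sub-millisecond hot-data retrieval
    "graph":         "neo4j",       # transaction relationship graphs
    "timeseries":    "influxdb",    # gas costs, block intervals, throughput
}

def store_for(workload: str) -> str:
    """Resolve a workload category to its backing store."""
    try:
        return STORE_FOR[workload]
    except KeyError:
        raise ValueError(f"no store registered for workload {workload!r}")
```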

Each database is chosen for the workload it handles best rather than forced into a one-size-fits-all schema. The tradeoff is operational complexity — five database systems require five monitoring surfaces and five backup strategies — but the performance gains at scale justify the investment. A compliance analyst running a multi-hop path query on a graph database will receive results in seconds rather than the minutes that an equivalent recursive SQL query would require.
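The multi-hop path query mentioned above is, at its core, a bounded graph traversal. A plain-Python breadth-first search makes the shape of the problem visible; this is a sketch of the traversal, not the platform's Neo4j query engine, and the edge representation is an assumption.

```python
from collections import deque

def find_path(edges, source, target, max_hops=5):
    """Breadth-first multi-hop search over a transfer graph.
    `edges` maps each address to the addresses it has sent funds to."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path            # shortest path by hop count
        if len(path) - 1 >= max_hops:
            continue               # respect the hop bound
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

In a graph database this traversal runs over native adjacency structures rather than join-by-join over relational tables, which is where the seconds-versus-minutes gap the text describes comes from.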