The feedback loop AML has been missing
Syntheticr is an AML performance testing platform that lets teams measure, compare, and improve detection using objective scorecards.
OVERVIEW
What Syntheticr does
Syntheticr provides a structured way to evaluate AML system performance.
Teams run their existing AML systems, models, or workflows against controlled datasets and receive a quantified scorecard showing what was detected, what was missed, and how performance varies across different scenarios.
This makes it possible to:
establish a performance baseline
validate model and rule changes
compare vendors or system configurations
track performance over time
How it works
Syntheticr follows a simple evaluation workflow; a sketch of the scoring step follows the four steps below:
Access
Download a Syntheticr dataset designed for AML system evaluation.
Test
Run the data through your AML system or model to generate or rank alerts.
Submit
Upload your results for comparison against known ground truth.
Review
Receive a scorecard detailing the performance of your AML system or model.
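Under the hood, Submit and Review amount to comparing your alerts against the dataset's known ground truth. Here is a minimal sketch of that comparison, assuming alerts and labels are keyed by transaction ID; the names and scoring details are illustrative, not Syntheticr's actual code.

    def score_alerts(alerted_ids: set[str], illicit_ids: set[str]) -> dict[str, float]:
        """Compare the transactions a system alerted on against known illicit activity."""
        hits = alerted_ids & illicit_ids  # correctly flagged transactions
        precision = len(hits) / len(alerted_ids) if alerted_ids else 0.0
        recall = len(hits) / len(illicit_ids) if illicit_ids else 0.0
        return {"precision": precision, "recall": recall}

    # Example: 2 of 3 alerts were genuine, and 2 of 4 illicit transactions were found.
    print(score_alerts({"T1", "T2", "T9"}, {"T1", "T2", "T5", "T7"}))
    # {'precision': 0.6666666666666666, 'recall': 0.5}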
OUTPUTS
Objective Performance Scorecards
Each evaluation produces a scorecard that shows how your system performed against known financial crime activity, including:
Overall performance
Alert/ranking precision
Performance by:
Institution
Typology
Transaction type
Network type
Performance relative to risk intelligence
Scorecards provide direct evidence of detection capability rather than proxy metrics derived from operational outputs.
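To illustrate the per-typology breakdown above, here is a small sketch assuming each ground-truth record carries a typology label; the record fields ("txn_id", "typology") are hypothetical, not the scorecard format.

    from collections import defaultdict

    def recall_by_typology(ground_truth: list[dict], alerted_ids: set[str]) -> dict[str, float]:
        """Compute recall separately for each embedded financial crime typology."""
        hits: dict[str, int] = defaultdict(int)
        totals: dict[str, int] = defaultdict(int)
        for record in ground_truth:
            totals[record["typology"]] += 1
            if record["txn_id"] in alerted_ids:
                hits[record["typology"]] += 1
        return {typology: hits[typology] / total for typology, total in totals.items()}

A breakdown like this is what lets a scorecard show, for example, strong detection of structuring but weak detection of another typology.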
Precisely Engineered Synthetic Data
Syntheticr provides synthetic datasets designed and built specifically for objective AML evaluation.
These datasets include:
Realistic transaction and entity behaviour
Embedded financial crime activity
Known ground truth for objective assessment
Multiple institutions and network structures
This enables consistent, repeatable testing across systems, models, and configurations.
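To make the shape of these datasets concrete, the sketch below shows what a single synthetic transaction record might look like; every field name and value is an assumption for illustration, not the actual Syntheticr schema.

    from dataclasses import dataclass

    @dataclass
    class SyntheticTransaction:
        txn_id: str           # unique transaction identifier
        institution: str      # originating institution in the synthetic network
        sender_id: str        # synthetic entity sending funds
        receiver_id: str      # synthetic entity receiving funds
        amount: float
        txn_type: str         # e.g. wire, cash_deposit, card_payment
        is_illicit: bool      # ground-truth label, withheld from the system under test
        typology: str | None  # e.g. "structuring"; None for legitimate activity

    example = SyntheticTransaction(
        txn_id="T-000123",
        institution="bank_a",
        sender_id="E-0042",
        receiver_id="E-0917",
        amount=9500.00,
        txn_type="cash_deposit",
        is_illicit=True,
        typology="structuring",
    )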
Flexible Evaluation Workflows
Syntheticr can be used in different ways depending on your requirements.
Ad hoc testing - Run a single baseline or point-in-time evaluation
Comparative testing - Compare systems, models, or vendors on a like-for-like basis
Continuous testing - Track performance over time at a cadence that suits your change and governance cycle
Workflow integration - Embed performance evaluation into development, validation, or governance processes
Teams can begin with a single assessment and expand into more repeatable, workflow-based testing as needs grow.
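As one example of workflow integration, a team might gate releases on scorecard results. Here is a minimal sketch assuming scorecards are exported as JSON with a top-level recall figure; the file names, JSON shape, and threshold are all illustrative assumptions.

    import json

    TOLERANCE = 0.02  # maximum acceptable drop in recall between evaluations

    def check_for_regression(baseline_path: str, latest_path: str) -> None:
        """Fail the build if the latest scorecard's recall regresses past tolerance."""
        with open(baseline_path) as f:
            baseline = json.load(f)
        with open(latest_path) as f:
            latest = json.load(f)
        drop = baseline["recall"] - latest["recall"]
        if drop > TOLERANCE:
            raise SystemExit(f"AML detection recall regressed by {drop:.1%}")
        print("Scorecard check passed: no material regression.")

    if __name__ == "__main__":
        check_for_regression("baseline_scorecard.json", "latest_scorecard.json")

The same pattern extends to other scorecard metrics, such as precision or per-typology recall.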
GET STARTED
Start with a baseline
For most teams, the right starting point is a performance baseline.
Run a Syntheticr dataset through your current AML system or model and receive a quantified scorecard showing what it detects, what it misses, and where performance gaps lie.