Objective Performance Measurement

Syntheticr scorecards provide a structured, objective view of how your AML system performs across scenarios, segments, and typologies.

WHAT YOU GET

A quantified view of detection performance

Syntheticr scorecards support real decisions

Scorecards provide:

  • Overall performance

  • Alert/ranking precision

  • Performance by:

    • Institution

    • Typology

    • Transaction type

    • Network type

  • Performance relative to risk intelligence

How performance is measured

Evaluation against known ground truth

Syntheticr evaluates your system outputs against known financial crime activity within the dataset to calculate:

  • True positives

  • False positives

  • False negatives

  • Detection coverage

These measurements are not possible with production data, which lacks reliable ground truth at the scale needed to test AML systems and models accurately.
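As an illustration of the measurement above, the sketch below scores a set of alerted entity IDs against a labeled ground-truth set. The function name, IDs, and result fields are illustrative assumptions, not Syntheticr's actual API:

```python
# Minimal sketch: score alerts against known ground truth.
# "Detection coverage" here is recall over the labeled suspicious set.

def score_alerts(alerted_ids, ground_truth_ids):
    alerted = set(alerted_ids)
    truth = set(ground_truth_ids)
    tp = len(alerted & truth)   # flagged and genuinely suspicious
    fp = len(alerted - truth)   # flagged but clean
    fn = len(truth - alerted)   # suspicious but missed
    precision = tp / len(alerted) if alerted else 0.0
    coverage = tp / len(truth) if truth else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "coverage": coverage}

card = score_alerts(["a1", "a2", "a3"], ["a1", "a4"])
# a1 is a true positive, a2/a3 are false positives, a4 is missed
```

With production data the `ground_truth_ids` set is the missing ingredient; synthetic data supplies it by construction.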

Why it matters

Designed for real AML decisions

Scorecards are used to support decisions such as:

  • Validating model or rule changes

  • Comparing vendors on a like-for-like basis

  • Identifying performance gaps

  • Tracking performance changes over time

Each scorecard provides evidence you can share with internal and external stakeholders.

FORMATS

Built for analysis and workflows

Syntheticr scorecards are available as PDFs or in machine-readable formats via API, making it easy to support critical decision-making.

PDF Scorecards

Structured reports that can easily be shared for review and decision-making.

Machine-Readable Scorecards

Designed for integration into development, validation, and monitoring workflows.
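To sketch how a machine-readable scorecard might plug into a validation workflow, the snippet below gates on per-typology coverage. The JSON shape, field names, and 0.75 threshold are all hypothetical assumptions, not the actual Syntheticr schema:

```python
import json

# Hypothetical scorecard payload; the real API schema may differ.
payload = json.loads("""
{
  "overall": {"precision": 0.42, "coverage": 0.87},
  "by_typology": {
    "structuring": {"precision": 0.55, "coverage": 0.91},
    "layering":    {"precision": 0.18, "coverage": 0.64}
  }
}
""")

# Flag typologies whose detection coverage falls below a chosen
# threshold, e.g. to fail a CI check before a rule change ships.
weak = [name for name, metrics in payload["by_typology"].items()
        if metrics["coverage"] < 0.75]
```

A check like this is what "integration into development, validation, and monitoring workflows" looks like in practice: the scorecard becomes an automated gate rather than a document to read.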

RESULTS

From baseline to benchmark

Understand performance trade-offs.

Compare, track, and improve. Continuously.

Syntheticr scorecards show both effectiveness and trade-offs:

  • High detection with low precision indicates over-alerting

  • High precision with low detection indicates under-detection

  • Typology-level results highlight specific weaknesses

  • Network-level results show how complex behaviours are detected
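The first two trade-offs above can be read mechanically from a scorecard's precision and coverage numbers. A minimal sketch, assuming an illustrative 0.5 cut-off (real thresholds depend on your risk appetite):

```python
# Classify a scorecard's precision/coverage trade-off.
# The 0.5 threshold is illustrative, not a recommended standard.

def diagnose(precision, coverage, threshold=0.5):
    if coverage >= threshold and precision < threshold:
        return "over-alerting"    # high detection, low precision
    if precision >= threshold and coverage < threshold:
        return "under-detection"  # high precision, low detection
    if precision >= threshold and coverage >= threshold:
        return "balanced"
    return "weak on both axes"

diagnose(0.2, 0.9)  # "over-alerting"
diagnose(0.9, 0.2)  # "under-detection"
```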

Each scorecard provides evidence of AML system and model performance.

Teams use Syntheticr scorecards to:

  • Establish a baseline

  • Compare systems and vendors

  • Track performance across releases

  • Detect performance drift

  • Validate improvements

This enables targeted, measurable improvement in your AML systems and models.

GET STARTED

Start with a baseline


For most teams, the right starting point is a performance baseline.

Run a Syntheticr dataset through your current AML system or model and receive a quantified scorecard showing what your system detects, what it misses, and any performance gaps.