Stop Optimizing
Correlations.
Measure True
Causal Lift.
CausalLTV applies structural causal models and do-calculus to identify the true incremental impact of every offer — per customer, with confidence intervals. Built for Capital One, Amex, and Mastercard-scale marketing teams.
E[ltv_12m | do(apr_reduction=1)] − E[ltv_12m | do(apr_reduction=0)]
Adjustment set Z = {credit_score, tenure_months, churn_risk_score}
apr_reduction
$187.50 incremental LTV
CI: $140 – $235 · conf: 0.847
Traditional Uplift Models
Are Lying to You
High-risk customers receive more retention offers. Your uplift model sees the correlation: offers → lower LTV. It recommends fewer interventions. Revenue leaks. This is Simpson's Paradox — and it's silently destroying your targeting ROI.
Raw Mean Difference
−$48
Treatment looks HARMFUL
← uplift model sees this
↓ after backdoor adjustment
Causal ATE (DoWhy)
+$145
True incremental effect
← what actually happens
confounding: high-risk customers receive 3.8× more retention calls → they churn more → naive correlation is negative
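The sign flip above is reproducible with a toy dataset (the numbers below are illustrative, not CausalLTV output): high-risk customers are treated far more often, so the naive treated-vs-control comparison is negative even though the true effect is positive. Stratifying on risk and re-weighting by P(z) is exactly the backdoor adjustment.

```python
# Toy Simpson's-paradox dataset: (risk_stratum, treated, ltv_12m) rows.
# High-risk customers are treated 4x as often; the true effect is +$145 everywhere.
rows  = [("high", 1, 345.0)] * 80 + [("high", 0, 200.0)] * 20
rows += [("low",  1, 745.0)] * 20 + [("low",  0, 600.0)] * 80

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: what a correlational uplift model sees
naive = (mean([y for _, t, y in rows if t == 1])
         - mean([y for _, t, y in rows if t == 0]))

# Backdoor adjustment: stratify on risk, then re-weight by P(z)
ate = 0.0
for z in ("high", "low"):
    stratum = [(t, y) for s, t, y in rows if s == z]
    diff = (mean([y for t, y in stratum if t == 1])
            - mean([y for t, y in stratum if t == 0]))
    ate += diff * len(stratum) / len(rows)

print(f"naive: {naive:+.0f}  adjusted ATE: {ate:+.0f}")  # naive: -95  adjusted ATE: +145
```

The naive difference is negative purely because treatment assignment was correlated with risk; the stratified estimate recovers the true per-stratum effect.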
You're Measuring Conditional Outcomes
Your model learns: customers who received cashback had lower average LTV. So it suppresses the offer. But treatment was confounded — you were sending offers to high-risk customers first. The correlation is real. The causal interpretation is wrong.
Confounders Create Spurious Signals
Without blocking the backdoor path through churn_risk_score, every estimate is biased. Standard T-Learner and X-Learner uplift models cannot fix this — they condition on features, but they cannot intervene on the causal structure.
You're Undertreating High-Value Segments
The customers most likely to respond to an APR reduction are exactly those your uplift model flags as 'low probability of lift'. The causal signal is real — the model is measuring correlation, not intervention. Revenue leaks silently.
From Observational Data to Prescriptive Action
in Three Steps
Discover the Causal Structure
CausalLTV runs the PC algorithm on your historical data, enforcing your domain constraints — required edges and forbidden edges declared in YAML. The result is a validated DAG your compliance team can inspect and modify.
PC algorithm output — credit card customer DAG
Required edges: credit_score → credit_limit ✓ · Forbidden: ltv_12m → credit_score ✓
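A constraint file for this step might look like the sketch below. The schema and field names are illustrative assumptions for this example, not the product's documented format:

```yaml
# hypothetical constraint file — schema is illustrative, not the shipped format
discovery:
  algorithm: pc
  significance_level: 0.05
edges:
  required:
    - from: credit_score
      to: credit_limit
  forbidden:
    - from: ltv_12m        # the outcome can never cause
      to: credit_score     # a pre-treatment feature
```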
Apply Do-Calculus Identification
DoWhy's identification engine verifies P(Y|do(T)) is identifiable before fitting any model. It applies the backdoor adjustment criterion — selecting the minimal adjustment set that blocks all confounding paths. If identification is impossible, the system fails loudly rather than returning a biased estimate.
# DoWhy identifies the causal estimand automatically
identified_estimand = model.identify_effect(
    proceed_when_unidentifiable=False,  # fail loudly
)

# Backdoor adjustment via do-calculus:
#   P(ltv_12m | do(apr_reduction=1)) =
#     Σ_z P(ltv_12m | apr_reduction=1, Z=z) · P(Z=z)
#   where Z = {credit_score, tenure_months, churn_risk_score}
estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.linear_regression",
    confidence_intervals=True,
)
# ATE = $187.50 [CI: $140.20 – $234.80], p = 0.001

Estimate CATE — Per Customer
Double ML / Causal Forest estimates the incremental LTV for every individual customer. For a customer with credit_score=620, churn_risk=0.75, an APR reduction yields $187 incremental LTV (95% CI: $140–$235). That confidence interval drives the action threshold — if CATE − offer_cost doesn't clear $50, no action is recommended.
CATE distribution — apr_reduction (n=50,000 customers)
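The gating rule described above fits in a few lines. `recommend` and the `review` state are hypothetical helpers for illustration, not the shipped SDK:

```python
def recommend(cate: float, cate_lower: float, offer_cost: float,
              min_net_lift: float = 50.0) -> str:
    """Gate the action on net incremental LTV, per the $50 threshold."""
    if cate_lower - offer_cost >= min_net_lift:
        return "treat"    # even the conservative CI bound clears the bar
    if cate - offer_cost >= min_net_lift:
        return "review"   # point estimate clears, lower bound does not
    return "skip"

# the customer from the example: CATE $187, lower bound $140
print(recommend(cate=187.0, cate_lower=140.0, offer_cost=25.0))  # treat
```

Gating on the lower CI bound rather than the point estimate is one way to make the confidence interval, not just the CATE, drive the action.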
Everything Your Uplift Model Can't Do
Uplift models are correlation machines dressed in causal language. CausalLTV is built on do-calculus from the ground up.
True Causal Effect
P(Y|do(T)), not P(Y|T,X)
Uplift models condition on treatment assignment — they inherit all the selection bias baked into historical policies. CausalLTV uses do-calculus to isolate the pure interventional distribution, cutting through confounding at the source.
Identifiability First
Fail loudly before fitting
Before training a single model, DoWhy's identification engine verifies the causal effect is identifiable from your observational data. If it's not — due to unmeasured confounding or structural violation — the system halts rather than returning a confident lie.
Per-Customer CATE
Individual-level, not population averages
CausalForestDML estimates a separate treatment effect for every customer. A population ATE of +$120 could mask a $-40 effect for churners and +$350 for high-spenders. Prescriptions are gated by individual CATE minus offer cost, not a population threshold.
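The masking effect is plain arithmetic. The segment shares below are chosen to reproduce the numbers in the copy and are purely illustrative:

```python
# two segments whose weighted average hides an opposite-signed effect
segments = [
    ("churners",      0.59,  -40.0),   # treating destroys value here
    ("high_spenders", 0.41,  350.0),   # treating creates value here
]
population_ate = sum(share * cate for _, share, cate in segments)
print(round(population_ate))  # ~120: looks healthy, hides the -$40 segment

# per-segment gating (CATE minus an assumed $20 offer cost, $50 threshold)
for name, _, cate in segments:
    print(name, "treat" if cate - 20.0 >= 50.0 else "skip")
```

A population-level threshold would treat everyone; the per-segment gate treats only the segment where the intervention actually creates value.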
Compliance-Ready DAG
Human-inspectable causal graph
Every causal assumption is explicit in a YAML-configurable DAG. Required edges (credit_score → credit_limit) and forbidden edges are enforced at discovery time. Your compliance team can audit, annotate, and version-control the causal structure — not a black-box weight matrix.
Refutation Tests
Built-in placebo validation
CausalLTV runs automated refutation tests — placebo treatment, random-cause addition, data subset — after every estimation run. If the estimated effect doesn't survive refutation, the system flags the result rather than silently propagating a spurious estimate into production decisions.
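The placebo test is conceptually simple: permute the treatment column and re-estimate. A genuine effect should collapse toward zero under the placebo. A minimal stdlib sketch of the idea (not the DoWhy refuter itself):

```python
import random

def naive_effect(rows):
    """Difference in mean outcome between treated and control rows (y, t)."""
    treated = [y for y, t in rows if t == 1]
    control = [y for y, t in rows if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

def placebo_refute(estimator, rows, n_reps=200, seed=0):
    """Re-estimate after randomly permuting treatment; mean should be ~0."""
    rng = random.Random(seed)
    treatments = [t for _, t in rows]
    effects = []
    for _ in range(n_reps):
        rng.shuffle(treatments)
        effects.append(estimator([(y, t) for (y, _), t in zip(rows, treatments)]))
    return sum(effects) / len(effects)

# toy data with a genuine +$100 treatment effect plus deterministic noise
rows = [(100.0 + 100.0 * t + i % 7, t) for i, t in enumerate([0, 1] * 500)]
real = naive_effect(rows)
placebo = placebo_refute(naive_effect, rows)
print(f"real: {real:.1f}  placebo: {placebo:.1f}")  # placebo hugs zero
```

If the "effect" survived the shuffle, the estimator would be picking up structure other than the treatment, and the result should be flagged rather than shipped.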
REST API
<12ms per recommendation
A single POST to /v1/recommend returns the optimal treatment, causal path, CATE confidence interval, and counterfactual narrative — ready to drive real-time decisioning in your CRM, mobile app, or offer engine. Lambda cold starts under 800ms; warm p99 under 12ms.
Why Uplift Modeling is Not Causal Inference
Both estimate treatment effects. Only one answers the right question.
Uplift meta-learners (T-Learner, X-Learner) are included in CausalLTV as a baseline comparison — run them alongside causal estimates with make evaluate.
Production-Ready Causal Inference API
Three endpoints. Everything you need to move from observational data to prescriptive action — with causal guarantees, not statistical correlations.
Performance SLA
REQUEST
curl -X POST https://api.netcausal.ai/v1/recommend \
  -H "Authorization: Bearer $NETCAUSAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "customer_id": "cust_7f3a9b",
    "credit_score": 618,
    "tenure_months": 22,
    "churn_risk_score": 0.73,
    "avg_monthly_spend": 1840,
    "credit_utilization": 0.71,
    "payment_behavior": "on_time"
  }'

RESPONSE — 200 OK
{
  "customer_id": "cust_7f3a9b",
  "recommended_treatment": "apr_reduction",
  "cate": 187.50,
  "cate_lower": 140.20,
  "cate_upper": 234.80,
  "net_cate": 187.50,
  "offer_cost": 0,
  "action": "treat",
  "causal_path": [
    "credit_score",
    "churn_risk_score",
    "apr_reduction",
    "avg_monthly_spend",
    "ltv_12m"
  ],
  "explanation": "APR reduction directly addresses the primary churn driver (high utilization at 71%) while bypassing the credit_limit confounder. Backdoor adjustment set: {credit_score, tenure_months, churn_risk_score}.",
  "confidence": 0.97,
  "all_treatment_cates": {
    "apr_reduction": 187.50,
    "cashback_2pct": 142.30,
    "credit_limit_increase": 98.60,
    "retention_call": 45.20,
    "annual_fee_waiver": -65.00,
    "balance_transfer_0pct": 31.80
  }
}

Stop Measuring Correlation.
Start Measuring Causation.
Join ML engineers and CDOs from leading card issuers building the next generation of compliant, causal offer engines.
Request Early Access
Limited to ML teams at card issuers and fintechs. No credit card required.
What you get
- API key + sandbox with 50K synthetic records
- CausalLTV Python SDK (pip install causal-ltv)
- Jupyter notebook walkthroughs
- Slack channel with founding team
- Early-adopter pricing lock-in
“Finally an offer engine that won't get us sued by regulators.”
Head of ML, Top-5 US Card Issuer
“We ran it alongside our T-Learner. CausalLTV beat it on every offline metric.”
Chief Data Officer, APAC Bank
“The YAML DAG audit trail alone is worth the price of admission.”
Director of Risk Decisioning, EU Fintech