Incubated by NSRCEL · IIM Bangalore

Causal AI · Built in India

We don't show what.
We show why.
And what to do next.

Causal AI that names the lever behind your numbers, with an estimator your data lead can audit. Built for Indian founders and the analysts who report to them.

Read a Case Study
CAUSAL SENTENCE · v1 · input → lever → lift

  • Input · your data: spend, sales, sessions
  • Lever · the named cause: Mumbai retarget · 14 Mar
  • Lift · +18% causal effect (95% CI)

The chart that names the cause: Input → Lever → Lift

Recent work

Four engagements · audited Q1 2026

How it works

One decision, not a dashboard.

We don't sell a tool you have to learn. We sell an answer you can forward. Here's how a typical engagement runs.

We connect to the data you already have.

Sales, marketing spend, inventory, customer logs, support tickets. Tabular, time-series, whatever shape your stack happens to produce. No clean room required.

  • Sales and marketing systems
  • Inventory and customer logs
  • Industry benchmarks where they help
  • Tabular and time-series alike

We name the variables that actually move the outcome.

Correlation tells you which numbers travel together. Causal inference tells you which one, if changed, would have changed the result. We use established estimators (backdoor and frontdoor adjustment, implemented with DoWhy and EconML) and we show our work.

  • Causal DAG built with you, not at you
  • Counterfactual and what-if simulation
  • Root-cause analysis on real outcomes
  • Estimator and assumptions documented
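Backdoor adjustment is less exotic than it sounds. Here is a minimal sketch on synthetic data, with invented variable names and coefficients chosen purely for illustration; a real engagement would run this through DoWhy or EconML with the assumptions documented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Toy data: underlying demand confounds both ad spend and sales.
demand = rng.normal(size=n)                # confounder (observed)
spend = 0.8 * demand + rng.normal(size=n)  # the lever
sales = 2.0 * spend + 1.5 * demand + rng.normal(size=n)  # true effect of spend: 2.0

# Naive estimate: regress sales on spend alone.
naive = np.polyfit(spend, sales, 1)[0]

# Backdoor adjustment: control for the confounder, closing the
# spend <- demand -> sales path.
X = np.column_stack([spend, demand, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
adjusted = beta[0]

print(f"naive:    {naive:.2f}")     # inflated by the confounder
print(f"adjusted: {adjusted:.2f}")  # close to the true 2.0
```

The gap between the two numbers is the whole pitch: the naive regression overstates what the lever buys you, and the adjusted estimate is the one you can defend to a CFO.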

You decide. We hand you the lever, not a dashboard.

The deliverable is a short list of decisions, each with the expected effect, the confidence interval, and the trade-off. The kind of memo a founder forwards to a CFO without rewriting it.

  • Specific decisions, not metrics to watch
  • Expected effect with the math attached
  • Trade-offs named, not buried
  • Plain English, signed by a real human

Why this is different

We don't show you what.
We show you why.

Standard analytics is a chart of the past. Causal AI is a model of cause and effect. Same data, completely different deliverable.

What most analytics tools do

Show what happened.

  • Lay out KPIs in a dashboard. Sales, marketing, ops, all charted, all moving together, none of them explaining each other.
  • Lean on correlations. Two metrics rising together get treated as cause and effect. They almost never are.
  • Hand the founder a list of variables to watch, and leave the choice of which lever to pull as a vibe.
  • Reward optimization theatre. KPIs move. Revenue, often, does not.

What we do

Show what caused it.

  • Build a causal DAG with you. Every claim of "X moved Y" is named, written down, and tested with an estimator that doesn't depend on us.
  • Simulate interventions before you commit budget. "What happens if we cut Mumbai retarget by 30%?" gets an answer with a confidence interval.
  • Find the specific causes behind churn, conversion, and operational drag. Not "customer experience matters". The exact email, the exact day, the exact step.
  • Hand back a short list of decisions, each with expected effect and trade-offs. The kind a CFO doesn't ask follow-up questions about.

Worked example · 10-min delivery economics

One DAG. Nine variables. The lever, named.

The shape every Indian operator already knows from Zepto, Blinkit, Swiggy Instamart. Hover any edge to see the estimated effect; hover a node to see what it represents in the model.

Quick-commerce DAG: four levers, three mediators, two outcomes. Discount depth, push notification timing, promised ETA, and rider surge influence checkout rate, actual ETA, and order completion, which in turn determine contribution per order and day-7 retention.


Estimator: DoWhy backdoor + frontdoor adjustment with IV check (Pearl 2009). Synthetic example calibrated on Indian quick-commerce benchmarks.
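For readers who want the mechanics: a DAG like this is just a set of structural equations, and an intervention is a forced change to one input. The sketch below uses invented placeholder coefficients (not the calibrated benchmarks) to show how do(deeper discount) can lift day-7 retention while cutting contribution per order:

```python
import numpy as np

def simulate(n=10_000, discount_shift=0.0):
    """One draw of the quick-commerce DAG as structural equations.
    Coefficients are illustrative placeholders, not calibrated values."""
    rng = np.random.default_rng(7)  # fixed seed: noise cancels in the comparison
    # Levers
    discount = rng.normal(size=n) + discount_shift  # do(discount += shift)
    push = rng.normal(size=n)
    promised_eta = rng.normal(size=n)
    surge = rng.normal(size=n)
    # Mediators
    checkout = 0.5 * discount + 0.2 * push - 0.3 * promised_eta + rng.normal(size=n)
    actual_eta = 0.6 * promised_eta + 0.4 * surge + rng.normal(size=n)
    completion = 0.7 * checkout - 0.5 * actual_eta + rng.normal(size=n)
    # Outcomes
    contribution = completion - 0.8 * discount + rng.normal(size=n)  # discounts eat margin
    retention = 0.4 * completion - 0.2 * actual_eta + rng.normal(size=n)
    return contribution.mean(), retention.mean()

base_contrib, base_ret = simulate()
do_contrib, do_ret = simulate(discount_shift=1.0)

print(f"do(discount +1): contribution/order {do_contrib - base_contrib:+.2f}, "
      f"day-7 retention {do_ret - base_ret:+.2f}")
```

The sign pattern is the point: the discount works through checkout and completion, but its direct cost to margin outweighs the mediated gain. That is exactly the trade-off the memo would name.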

See more use cases →

Working with

Indian operators who'd rather have an answer than a dashboard.

They watched our video ads frame by frame and pointed to the second viewers drop off. We pulled three creatives that same week. Meta spend was projected 40% lower by next quarter.

Mr. Jishnu
CMO, Karishye

Hundreds of dormant accounts and no one to call first. They ranked the list with a model, not a guess. ₹15L came back in two weeks.

Mr. Bimal Kalra
CEO, Gangpur Ventures

Most agencies hand you a deck. ProjektAnalytics handed us a shortlist of decisions, each with the expected lift and the math behind it. The ones we acted on landed where they said.

Dr. Kamlesh
CEO, PharmacoMedics

They didn't pitch a tool. They sat with our data, named the lever, and showed us what to expect when we pulled it. That is the team you want, not the dashboard.

Dr. Suren Kumar
CEO, Healthrytix

Five questions you'll ask

The questions every founder asks before booking a call.

Short answers below. Longer answers on the call. If yours isn't here, ask it directly.

  • How is this different from a BI tool or an analytics consultancy? BI tools chart what your KPIs did. Analytics consultancies write a deck about it. We do something narrower and more useful: we name the variables that caused the result, write down the assumptions, and run an estimator (DoWhy, EconML) that anyone with a stats background can audit. The deliverable is a list of decisions, not a dashboard you have to learn.

  • We already have a data team. Why not do this in-house? Most in-house data teams are full to the brim with reporting, instrumentation, and pipeline work. Causal inference is a different discipline with its own tooling and a steep ramp. We work with your data lead, not around them. They keep the keys; we bring the playbook.

  • What kinds of decisions is this for? Where to spend the next ad rupee. Which onboarding step is silently driving churn. Which city, channel, or cohort is masking a 40% margin leak. Whether the price change you're considering will move volume or just margin. The pattern: any decision where you have data but the data doesn't explain itself.

  • How long does an engagement take? A first audit lands in two to three weeks. The Gangpur Ventures case study (linked above) was a 14-day engagement with a documented 65x ROI. Longer engagements turn into a retainer once the first audit pays for itself, which is the only sane way to price this work.

  • Can we audit your work before committing? Yes. Book a 30-minute walkthrough and we'll trace one of our case studies end to end: the dataset, the DAG, the estimator, the assumptions, the sensitivity check, and the call we made. If your data lead pokes a hole, we'll tell you so.