The Caregiver Hiring Platform
Owning the funnel that fed the workforce
MainStory · 2025
A 0→1 supply-side acquisition platform that took caregiver lead inflow from a flat 220/month on third-party job boards to a sustained 435% lift over that baseline — after cratering for the first month.
The Setup
MainStory had a supply problem.
Not a demand problem. Not a quality problem. A supply problem. We could fill every center we opened with parents on day one. We couldn't fill them with nannies.
The numbers were unambiguous. Pre-launch baseline: 220 caregiver leads per month, sourced almost entirely from Glints and JobStreet. That number had been flat for months. It wasn't growing with the business. We were planning to scale centers — to add baby slots, to expand into new cities, to deepen the homecare side — and at 220 leads per month, every one of those plans was capped before it started.
The third-party job boards weren't broken. They were just theirs. Our listings competed with thousands of others. Our brand wasn't visible. Our pitch wasn't ours to make. And every lead came at a cost we didn't control.
The brief I gave myself was unusually direct:
- Increase caregiver lead inflow by 30% by end of year
- Reach surplus supply, not just parity
- Build a channel we own, not one we rent
This is the case study where the metric curve tells the story. So I'm going to tell it that way.
The Insight
We weren't competing for caregivers. We were competing for caregivers' attention.
Glints and JobStreet are job boards. A caregiver scrolling them sees us next to a hundred other listings, all making the same generic promises. We had no canvas to differentiate. No way to say what made MainStory different from the daycare down the street. No way to capture a lead who was browsing but not yet applying.
Building our own funnel meant getting the canvas. A landing page, a brand-led pitch, a controlled ad spend that pointed people at us — not at a directory of every employer in Indonesia.
The trade-off was obvious and uncomfortable. To build our own funnel, we'd have to redirect attention (and budget) away from the channels that were currently producing all 220 leads. Short-term, that meant the existing pipeline would weaken before the new one took over.
Most companies don't survive that crossover honestly. They either keep both running and dilute their effort, or they cut over too soon and watch the metric collapse. The path I designed required a third option: a brief, deliberate dip, with the discipline to hold the line through it.
The Architecture
I shipped the platform in two stages, deliberately.
STAGE 1 → Landing page + paid acquisition
Brand-led pitch. Lead capture. Ad routing.
Built fast. Released 1 October.
STAGE 2 → Full hiring platform (Lovable build)
Application flow, screening logic, status tracking.
Same brand surface, expanded depth.
Shipped after early Stage 1 signal.
Stage 1 was about replacing the canvas. A single landing page with MainStory's brand voice, the actual pitch ("here's why caregivers stay with us — fair pay, transparent payslips, a real career path"), and a lead-capture form. Ads pointed there. Word of mouth pointed there. Anything that wasn't a third-party board pointed there.
Stage 1 was deliberately small. It had to ship in days, not weeks. The goal wasn't to build the perfect hiring funnel — it was to prove that owning our canvas would convert better than renting someone else's. Once that was proven, we could invest in depth.
Stage 2 was about depth. Built in Lovable, end-to-end: application flow, document upload, status tracking, screening logic, recruiter dashboard. Same brand surface, but now a complete pipeline a caregiver could move through from "I saw an ad" to "I'm scheduled for an interview" without ever leaving MainStory's surface.
Lovable mattered here. Not as a tool brag, but as a velocity decision. Building a full hiring platform on engineering's roadmap would have meant 3+ months of backend work, design handoffs, and integration. Building it in Lovable meant I could ship the platform myself, in production, in days. The trade-off was code quality at the seams — fine for a Stage-2 platform proving a hypothesis, not fine for a system at 1,000 caregivers. We knew we'd rebuild it later if it worked. Most of it did.
The Curve
Here's where the case study becomes interesting.
| Time post-launch | Lead inflow growth (%) vs baseline |
|---|---|
| D+1 | 0% |
| D+7 | −99.1% |
| 1 month | −93.2% |
| 3 months | +322% |
| 6 months | +435% |
The platform cratered.
The first week, lead inflow dropped to almost zero. The first month, it had only recovered to 7% of baseline. We had 220 leads on the old channel; now we had ~15. If you were looking at this dashboard at the 30-day mark, you would have called the project a failure.
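The absolute numbers implied by the curve are worth making explicit. A minimal back-of-envelope sketch (the lead counts are derived from the stated 220/month baseline and the growth percentages in the table, not separately reported figures):

```python
# Convert the growth-vs-baseline percentages from the table into
# implied absolute lead counts, against the stated 220/month baseline.
BASELINE = 220  # caregiver leads/month, pre-launch

def implied_leads(growth_pct: float) -> int:
    """Absolute monthly lead count implied by a growth-vs-baseline %."""
    return round(BASELINE * (1 + growth_pct / 100))

curve = {"D+7": -99.1, "1 month": -93.2, "3 months": 322.0, "6 months": 435.0}
for point, pct in curve.items():
    print(f"{point}: ~{implied_leads(pct)} leads/month")
```

At the one-month mark this works out to roughly 15 leads/month (matching the "~15" above), and at six months to well over 1,000 — the "roughly 5x" the rest of the case study refers to.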
Several people did.
TODO — pushback during the dip: who pushed back, and how hard? Was there a moment where leadership wanted to revert, hire from Glints again, kill the platform? How did you defend it? This is the Principal-level moment of the case study — most designers never get to tell a "I held the line during a 90% drop" story. If you have one, this is where it goes.
I had a hypothesis for the dip, and I had to be honest about what was hypothesis and what was wishful thinking. The hypothesis: redirecting acquisition spend to a brand-new funnel always under-performs in the first 30 days because the funnel hasn't earned anything yet — no SEO traction, no word-of-mouth compound, no referral applications, no interview-day testimonials, no Instagram reshares from caregivers who got hired. All of that takes weeks to seed and months to compound.
The wishful-thinking version of the same belief would have been "it'll work eventually." The disciplined version was: here are the leading indicators I expect to see by Day 45, here are the ones I expect by Day 60, and if I don't see them, we revert.
TODO — the leading indicators: what specifically did you watch during the dip period? Application page visits? Time on page? Returning visitors? CTR on ads? Naming the leading indicators turns the "I held the line" story from gut feel into Principal-level rigor.
The Turn
Between Day 30 and Day 90, lead inflow went from −93% to +322%. A 415-point swing.
What happened in those 60 days isn't one thing. It's the compound of several things finally getting paid back:
The funnel started self-feeding. Caregivers who applied through Stage 1's landing page told other caregivers. Word-of-mouth referrals — the cheapest, highest-quality lead source any company has — started showing up in the data. None of that exists when you rent a job board; you can't word-of-mouth a search result.
Ads got dialed in. The first month of paid acquisition was learning data. By month two, we knew which creatives converted, which audiences responded, which ad copy resonated with caregivers vs. recruiters trying to hire caregivers. Ad efficiency stepped up sharply once we had real conversion data to optimize against.
Stage 2 shipped. The Lovable-built application flow let leads who'd been hesitating on the landing page actually complete an application without friction. Stage 1 was capturing intent; Stage 2 was converting it.
The brand canvas paid back. Caregivers had something to share. Pre-launch, telling a friend about MainStory meant pointing them at a generic JobStreet listing. Post-launch, it meant sending them a link to our page, with our pitch, with social proof embedded. Sharing rate compounds when there's something worth sharing.
By month 6, the platform was producing roughly 5x the lead inflow of the pre-launch baseline. Sustained. Growing.
TODO — the specific moment of turn: do you remember the day or week the metric flipped? Was there a single signal — a particular ad creative, a referral spike, a SEO ranking change — that you could point to as the moment the curve bent? "On X day we noticed Y" gives the case study a center of gravity.
The Loop That Closed
The Caregiver Hiring Platform doesn't stand alone. It's the front door of the same workforce system Caregiver OS is the back of.
HIRING                 →   WORKING                  →   PAYING
─────────────              ───────────────              ─────────────
Caregiver Hiring           Daily ops, audits,           Caregiver OS
Platform                   supply-demand, etc.          (Payroll, Tipping,
(this case study)                                        OT, Payslip)
A caregiver enters MainStory through the Hiring Platform — sees our brand, our pitch, applies through our funnel. Once hired, she works inside an operating system designed for her — fair pay, transparent payslips, variable income that recognizes excellence, overtime that arrives reliably. The hiring platform doesn't have to over-promise, because the operating system actually delivers what the hiring page claims.
This is the unlock that's hard to see from the outside but obvious from the inside: the hiring funnel converts better when the back-end is good. Caregivers tell other caregivers. Word-of-mouth is the cheapest acquisition channel that exists, and it only works if the experience after hiring is worth talking about.
So 435% growth at 6 months isn't really a hiring-platform metric. It's a hiring-platform-multiplied-by-a-good-workforce-experience metric. One project would not have produced it. The two together did.
This is what "I design operating systems, not interfaces" means in practice. The thing the candidate experiences as "applying for a job" is one screen. The system that makes the screen actually work — brand, retention, payroll, tipping, transparent payslips, the whole back-end of being a MainStory caregiver — is the work.
The Pattern
Own your funnel; don't rent it. But budget for the dip.
Building your own acquisition channel always under-performs in the early window, because you're trading immediate lead volume (rented, expensive, generic) for compound returns (owned, cheap, branded) that take 30–60 days to seed.
Most companies don't make this trade because the dip looks like failure on the dashboard, and dashboards drive decisions. The discipline is to:
- Set the leading indicators in advance
- Define what would actually trigger a revert (real failure) vs what's just the dip (expected discomfort)
- Hold the line until the leading indicators tell you to either keep going or stop
- Plan for the recovery curve to be slower than you wish and faster than skeptics expect
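The discipline above can be sketched as a simple pre-committed decision rule. This is a hypothetical illustration — the indicator names, check days, and thresholds are invented for the example, since the actual ones aren't named in this case study:

```python
# Hypothetical sketch of the hold/revert discipline described above.
# Indicator names and thresholds are illustrative, not MainStory's real ones.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    check_day: int     # the day we committed, in advance, to evaluating it
    threshold: float   # minimum acceptable observed value
    observed: float    # what the dashboard actually shows

def decide(indicators: list[Indicator], today: int) -> str:
    """Return 'hold', 'revert', or 'scale' based on pre-committed indicators."""
    due = [i for i in indicators if i.check_day <= today]
    if not due:
        return "hold"  # still inside the expected-dip window: discomfort, not failure
    if all(i.observed >= i.threshold for i in due):
        return "scale"
    return "revert"   # a due indicator missed its threshold: real failure signal

indicators = [
    Indicator("landing-page visits/week", check_day=45, threshold=500, observed=620),
    Indicator("application completion rate", check_day=60, threshold=0.25, observed=0.31),
]
print(decide(indicators, today=30))  # no indicators due yet -> hold
print(decide(indicators, today=60))  # both due and above threshold -> scale
```

The point of writing it this way is that the revert condition is decided before the dip, so a bad dashboard at Day 30 triggers the "hold" branch by design rather than a panicked judgment call.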
I now apply this framework to any project where short-term metrics will get worse before they get better. The worse-before-better window is where most good systems die — not because they don't work, but because nobody's holding the line.
What I'd Do Differently
I'd communicate the dip before it happened, not while it was happening.
The hardest stakeholder conversations in this project happened during the dip — Day 14, Day 21, Day 30 — when the metric was bad and I was asking for patience. Those conversations would have gone differently if I'd written down the expected curve before launch: "lead inflow will drop ~80% in the first two weeks, recover to ~baseline by Day 45, exceed baseline by Day 60." Without that prediction in writing, every bad-metric meeting started from scratch. With it, the conversation would have been "are we tracking to plan?" instead of "is this working?"
The lesson generalizes: for any system designed to get worse before it gets better, the curve is a deliverable. Ship the predicted curve before you ship the system. The team's anxiety during the dip is dramatically lower when the dip was on the plan.
TODO — second reflection: one more honest "I'd do this differently." Maybe about Lovable as the build tool? About the Stage 1 vs Stage 2 sequencing? About hiring vs onboarding? Two reflections beats one.
MainStory · Indonesia's premium childcare platform · 200+ caregivers · 10+ centers