Monthly Wrapped
Turning an internal report into a viral acquisition channel
MainStory · 2025
A redesign of the Monthly Parents Call — shifting it from a high-effort, manual ritual into an AI-generated, parent-shareable artifact that doubled as MainStory's lowest-cost acquisition channel.
The Setup
Once a month, every Center Manager at MainStory built a slide deck. One per child. Filled with photos, weekly highlights, growth measurements, developmental observations, a "superpower of the month." Then she scheduled a call with each parent — the Monthly Parents Call, MPC — and walked them through it.
Parents loved it. They cried sometimes. They saved it. They forwarded screenshots to grandparents and to friends-with-toddlers and to the WhatsApp groups where decisions about daycare get made.
That last part was the interesting part.
Parents were already doing the work of acquisition. They just didn't have a product to do it with.
What they had was a screenshot of a slide that wasn't designed to be shared. No branding. No frame. Sometimes the wrong child's name on a header. And image quality that was whatever WhatsApp's compression had left of it.
Meanwhile, on the supply side, CMs were spending hours per child every month assembling these decks. As MainStory grew past 10 centers, the math broke. We couldn't keep producing a manual artifact per child per month. We also couldn't kill it — parents loved it too much.
The brief I gave myself:
- Reduce CM work to ~zero on MPC production
- Make every output worthy of being screenshot-shared
- Treat each share as an acquisition touchpoint
- Don't lose the warmth that made parents care in the first place
This is how we built it.
The Insight
The deck wasn't the product. The screenshot was.
Once that reframe landed, every design decision followed from it. We weren't automating a slide deck. We were designing nine images, in 9:16 vertical, each one individually screenshot-worthy, that together told a coherent monthly story when swiped through.
The format wasn't a deck. The format was Spotify Wrapped. The format was Instagram Stories. The format was the thing parents were already trying to make manually — we just gave them the real version.
9 cards, 9:16 vertical, brand-consistent.
Each card individually shareable.
Together: one month of a child, told as a story.
| Card | Title | Content |
|---|---|---|
| 1 | Cover / intro | Child name, photo, period |
| 2 | Stats overview | Attendance, growth, activity count |
| 3 | Weekly lesson plan | Theme + plan structure |
| 4 | Highlights — Week 1 | Photo + AI-summarized observation |
| 5 | Highlights — Week 2 | Photo + AI-summarized observation |
| 6 | Highlights — Week 3 | Photo + AI-summarized observation |
| 7 | Highlights — Week 4 | Photo + AI-summarized observation |
| 8 | Superpower of the month | AI-assigned developmental trait |
| 9 | Development feedback | Three-category framing for parents + home activity suggestions |
This was the deliverable. Now it had to be generated, automatically, from data MainStory already had.
The Architecture
The data already existed. CMs were submitting daily timeline posts (photos, captions, observations) into our system. Growth measurements were captured every Friday. Activities were tagged. Attendance was logged. The raw material was there — it just wasn't getting assembled into anything a parent could share.
The pipeline:
| Raw data | AI layer | Output |
|---|---|---|
| Timeline posts | Weekly narrative prompt | Cards 4–7 |
| Growth measurements | — | Card 2 |
| Sentiment patterns | Superpower prompt | Card 8 |
| Aggregated observations | Development feedback prompt | Card 9 |
| Lesson plan tags | — | Card 3 |
| Activity DB | Selection logic | Card 9 |
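To make the mapping concrete, here is a minimal sketch of the pipeline encoded as data, with a lookup helper. Every name in it is an illustrative assumption, not MainStory's actual schema.

```python
# Illustrative only: the pipeline table above encoded as data.
# Source names, prompt names, and card ids are assumptions.
PIPELINE = [
    # (raw data source,          AI step,                        output cards)
    ("timeline_posts",           "weekly_narrative_prompt",      [4, 5, 6, 7]),
    ("growth_measurements",      None,                           [2]),
    ("sentiment_patterns",       "superpower_prompt",            [8]),
    ("aggregated_observations",  "development_feedback_prompt",  [9]),
    ("lesson_plan_tags",         None,                           [3]),
    ("activity_db",              "selection_logic",              [9]),
]

def sources_for(card: int) -> list[str]:
    """All raw data sources that feed a given card."""
    return [src for src, _step, cards in PIPELINE if card in cards]

print(sources_for(9))  # ['aggregated_observations', 'activity_db']
```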
I designed four AI prompts to drive the system:
Prompt 1 — Weekly Activity Narrative. Takes a week of caregiver post_text and condenses it into a 2–3 sentence parent-facing summary, plus 2–3 skill tags. Constrained to 50 words, Indonesian, warm-but-not-childish, focused on what this child did rather than what the activity was generally about.
Prompt 2 — Superpower of the Month. Takes a full month of observations, picks one of eight developmental archetypes (Social Butterfly, Little Explorer, Creative Star, Focus Master, Kind Helper, Super Active, Story Lover, Brave Learner), and writes a 30-word description grounded in specific behaviors quoted from the month's notes.
Prompt 3 — Development Feedback. Generates exactly three feedback points across three required categories: a strength, an area developing well, and an area for home stimulation. Hard rule in the system prompt: no negative or deficit language ever. "Perlu stimulasi" must always be framed positively — "still developing," "can be strengthened with practice." Each point includes one observation and one practical home tip.
Prompt 4 — CM Brand Voice. A meta-prompt that trained content output against MainStory's actual brand voice: warm, reassuring, competent. No slang, no overly formal tone, light emoji use, structured as warm opening → clear explanation → reassuring close.
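For flavor, here is a minimal sketch of how Prompt 1 might be assembled. The constraints (50 words, Indonesian, warm-but-not-childish, child-specific) come from the description above; the template wording, message format, and function names are assumptions.

```python
# Sketch of the Weekly Activity Narrative prompt. Wording and
# function names are illustrative, not MainStory's actual code.

WEEKLY_NARRATIVE_SYSTEM = """\
You summarize a week of caregiver observations for one child's parents.
Rules:
- Output 2-3 sentences, maximum 50 words, in Indonesian.
- Tone: warm but not childish.
- Describe what THIS child did, not what the activity was generally about.
- Append 2-3 skill tags drawn from the observations.
- Never invent events that are not in the input posts.
"""

def build_weekly_narrative_prompt(child_name: str, post_texts: list[str]) -> list[dict]:
    """Assemble the chat messages for one weekly narrative call."""
    posts = "\n".join(f"- {t}" for t in post_texts)
    return [
        {"role": "system", "content": WEEKLY_NARRATIVE_SYSTEM},
        {"role": "user", "content": f"Child: {child_name}\nThis week's posts:\n{posts}"},
    ]

messages = build_weekly_narrative_prompt(
    "Aisha",  # invented example
    ["Aisha stacked six blocks and counted them aloud.",
     "She helped a friend clean up after painting."],
)
```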
This was the same KinderGPT infrastructure we'd built for QC Audit earlier in the year — different application, same muscle. Build once, redeploy.
The Quality Loop
AI hallucination on a child's developmental report is not an acceptable failure mode. Parents will catch it instantly. Worse, they'll lose trust in the entire MainStory system if a Wrapped describes their child saying something the child didn't say, or doing something they didn't do.
So I designed the system to ship with three guardrails:
Pre-publish quality checks. Before any Wrapped is generated for a child, the system verifies: at least one photo + caregiver comment per week, complete growth data, and that AI-rewritten captions match the sentiment of the original observation. Anything that fails routes to manual review.
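A minimal sketch of what those checks could look like, assuming hypothetical data shapes; the `sentiment` helper stands in for whatever model the real pipeline uses.

```python
from dataclasses import dataclass, field

@dataclass
class WeekData:
    photos: list[str] = field(default_factory=list)
    caregiver_comments: list[str] = field(default_factory=list)
    ai_caption: str = ""
    original_observation: str = ""

def sentiment(text: str) -> str:
    # Stand-in for a real sentiment model; keyword match for the sketch.
    return "positive" if any(w in text.lower() for w in ("senang", "bangga", "happy")) else "neutral"

def needs_manual_review(weeks: list[WeekData], growth_complete: bool) -> list[str]:
    """Collect the reasons a Wrapped should route to manual review."""
    reasons = []
    for i, week in enumerate(weeks, start=1):
        if not (week.photos and week.caregiver_comments):
            reasons.append(f"week {i}: missing photo or caregiver comment")
        if sentiment(week.ai_caption) != sentiment(week.original_observation):
            reasons.append(f"week {i}: AI caption sentiment drifted from the source")
    if not growth_complete:
        reasons.append("growth data incomplete")
    return reasons  # empty list means safe to generate
```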
CM as feedback loop, not gatekeeper. Every Wrapped surfaces in a Retool table per center. CMs can edit + thumbs-up/thumbs-down each AI output. The edits become training signal — the model learns from CM corrections over time. The thumbs become the hallucination-rate metric: edited content / total generated items.
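That metric is simple enough to sketch. Assuming one record per AI-generated block exported from the review table (field names invented):

```python
from collections import defaultdict

def hallucination_rate_by_center(records: list[dict]) -> dict[str, float]:
    """records: one per AI-generated block, e.g. {"center": "Center A", "edited": True}."""
    totals, edited = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["center"]] += 1
        edited[r["center"]] += r["edited"]  # bool counts as 0 or 1
    return {center: edited[center] / totals[center] for center in totals}
```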
Phased autonomy. Phase 1 ships with CM approval required for every Wrapped before parents see it. Phase 2 (planned for after 90 days of training data) shifts to exception-only review. This is the same pattern from QC Audit — the AI hasn't earned full autonomy yet, so we don't grant it yet.
The hallucination rate is the metric I care about most. It's not in the OKRs because it's not a target — it's the floor. If hallucination rate climbs, every other metric becomes meaningless.
The Distribution
Parents receive their child's Wrapped through two channels, designed as a deliberate progression:
WhatsApp first. A link to a mobile-responsive web viewer with all 9 cards, swipeable, with a download button per card and a "Share to Instagram Stories" deep link that pre-loads the image into IG. This was Phase 1 — built first because it required zero app changes and could be measured in 2 weeks.
In-app second. Phase 2 embedded the Wrapped experience as a dedicated section on the child's profile in the MainStory parent app, with push notifications when a new Wrapped was ready. The same content, owned by the parent, accessible forever.
The viral loop is structural:
Parent receives Wrapped
↓
Shares one card to Instagram Stories
↓
Friends/family see child's progress at MainStory
↓
Interested parents click branded link
↓
Land on MainStory acquisition page
Every Wrapped that gets shared is an acquisition touchpoint that costs MainStory zero rupiah. The unit economics on this channel beat any paid acquisition channel we have.
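One way the attribution could work, sketched with standard UTM parameters; the domain and token scheme are invented for illustration.

```python
from urllib.parse import urlencode

def branded_share_link(share_token: str, card: int) -> str:
    """share_token is an opaque per-family token, never the child's id."""
    params = urlencode({
        "utm_source": "wrapped",
        "utm_medium": "instagram_stories",
        "utm_campaign": f"wrapped_card_{card}",
        "ref": share_token,  # ties a sign-up back to the sharing family
    })
    return f"https://mainstory.example/join?{params}"

print(branded_share_link("tok_8f3a", 8))
# https://mainstory.example/join?utm_source=wrapped&utm_medium=instagram_stories&utm_campaign=wrapped_card_8&ref=tok_8f3a
```

Using an opaque token rather than the child's id keeps the shared link privacy-safe while still attributing referral sign-ups.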
What UAT Taught Us
The first round of user acceptance testing surfaced two things I'd missed.
The photos weren't personalized enough. The system was pulling photos from timeline posts where the photo was a group shot — multiple children together. From a CM workflow perspective, that was fine. From a parent-shareability perspective, it wasn't. A parent doesn't want to share a Story showing five other kids alongside their own.
The fix wasn't the AI — it was the workflow upstream. CMs and Sus Koor needed to log at least one personalized post per child during the week. We added a photo-edit feature to the admin dashboard so CMs could crop/select the right photo for each Wrapped. The AI got smarter; the workflow got smarter alongside it.
The captions felt template-y. Even with the brand voice prompt, AI-generated captions across multiple children started reading similarly. The fix: rotate detailed observations across children — one child per day gets a deep-observation post — so the AI has a richer corpus to draw from for any given child.
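The rotation itself is trivial to schedule. A round-robin sketch, with an invented roster:

```python
from datetime import date

def deep_observation_child(children: list[str], day: date) -> str:
    """Round-robin: which child gets today's detailed observation post."""
    return children[day.toordinal() % len(children)]

roster = ["Aisha", "Bima", "Citra", "Dewi"]  # invented names
print(deep_observation_child(roster, date(2025, 3, 10)))
```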
TODO: what was the surprise from UAT for you specifically? Was there a parent reaction during testing that landed differently than expected? A CM who pushed back on something? The UAT round is where the design met reality — what did reality teach you?
These iterations got us to the V2 plan: WLP (Weekly Lesson Plan) → Activity → Milestone, where the AI's narrative gets grounded in structured curriculum data, not just free-text observations.
Outcomes
TODO — Phase 1 metrics: the 1-pager defined the success metrics as 90% autopilot rate (P0), 20% share rate (P1), and AI hallucination rate as the guardrail. What did Phase 1 actually hit? Even rough numbers — "autopilot at 75% with most flags being photo selection," or "share rate landed at X% in the first 30 days" — make this section land. If Phase 1 hasn't shipped at scale yet, say so explicitly: portfolios are stronger when in-flight work is honestly labeled.
| Metric | Target | Actual |
|---|---|---|
| Autopilot rate | 90% | TODO |
| Share rate (≥1 card) | 20% | TODO |
| AI hallucination rate (guardrail) | minimize | TODO |
| Referral sign-ups attributed to Wrapped | — | TODO |
What I can claim with confidence:
- CM time on MPC production: dropped from ~hours per child per month to minutes per Wrapped reviewed.
- MPC cadence shift: moved from monthly to quarterly calls. The Wrapped became the monthly artifact; the call became the deeper conversation.
- Same AI infrastructure ran two systems. QC Audit and Monthly Wrapped both depend on KinderGPT. The marginal cost of adding the second system was a fraction of the first.
The Pattern
Existing internal artifacts often have viral potential — they just weren't designed for it.
The MPC deck wasn't the product to scale. The screenshot of the MPC deck was. The deck was the internal artifact; the screenshot was the external one that parents had been making by hand for months.
I now look for this pattern everywhere: where in the operating system are people manually translating an internal artifact into a shareable one? That manual translation is the design opportunity. Find it, automate it, brand it, and you've turned an ops cost into a growth channel.
This is what I mean by "I design operating systems, not interfaces." The interface here — nine cards, 9:16 vertical, swipeable — is the visible output. The system underneath is the real work: data pipeline, AI prompts, quality loop, CM training feedback, two-channel distribution, viral attribution. The interface only matters because the system is sound.
What I'd Do Differently
I'd ship the personalized-photo workflow before launching the AI.
The first UAT round revealed that the AI quality was being capped not by the model, not by the prompts, but by the upstream data — the photos CMs were uploading were group shots, not portraits. The AI couldn't fix that. No amount of prompt engineering could.
If I were starting again, I'd run a two-week pilot of the photo-personalization workflow first, generate a richer corpus, then layer the AI on top. The Wrapped quality would have been higher in week one, and we wouldn't have spent the first iteration patching the upstream.
TODO — second reflection: one more honest "I'd do this differently." Even something small. Principal-level case studies always have at least two — one tactical, one strategic. The tactical one above is the photo workflow. What's the strategic one?
MainStory · Indonesia's premium childcare platform · 200+ caregivers · 10+ centers