From Manual Logging to AI Recognition
Freeing nutrition coaches from calorie math
Sirka · 2022 · UX Research → Product Design
A redesign of Sirka's meal logging feature that moved from a database-search interface to an AI-powered photo recognition system — cutting average logging time from 30 minutes to under 1 minute and reducing coach workload by 60%.
The Setup
Sirka was a subscription-based nutrition platform with a fundamental supply-and-demand mismatch. Five coaches were each managing 60 clients. The roadmap target was 105 clients per coach. The feature meant to make coaches more productive — Meal Log — was actually consuming their time.
The mechanism was simple. Every client logged meals to track calories. The Meal Log feature had two paths: search Sirka's food database, or input manually. In theory, database search was the efficient path — the system had calorie data, the coach didn't have to count anything.
In practice, ~80% of meal entries were manual. Which meant ~80% required a coach to read the entry, identify the food, calculate the calories, and reply with feedback. Five coaches, 300 clients, 80% manual logs — that math was breaking before we hit any growth target.
The brief I gave myself:
- Understand why clients chose manual over the database
- Reduce the share of manual entries requiring coach intervention
- Don't sacrifice accuracy — this was a clinical product, not a fitness app
- Free coaches to do coaching, not arithmetic
The Insight
I interviewed both clients and coaches. Multiple hypotheses had been floating around — "the database is too small," "users don't trust the database," "the UX is confusing." Some were true. None was the actual unlock.
The unlock was this:
Users weren't choosing manual because they preferred it. They fell back to it when the database didn't have their food — and what they actually wanted, on either path, was immediate feedback: calorie totals visible the moment they finished the entry, before the coach reviewed anything.
The database flow gave them that — a number appeared as they searched. The manual flow felt like delegation: "tell my coach what I ate, wait for them to compute calories, get a reply." Even when the database was incomplete, users still preferred the database path because the number was instant.
That's a different problem. It's not "expand the database." It's: how do we give every entry an instant calorie estimate, regardless of whether it's in the database or not?
This was 2022. The answer was an LLM.
The Architecture
I designed a three-layer recognition system that combined Sirka's existing database with image recognition and LLM-based parsing:
Layer 1 — Photo capture. User takes a photo of the meal and adds a short description.
Layer 2 — Image recognition. The model identifies the dish, with portion estimation derived from photo + description. Trained against Sirka's existing food database, plus FatSecret and NutriSurvey datasets for nutritional cross-reference.
Layer 3 — LLM disambiguation. When the image alone wasn't sufficient — variations like "ayam geprek," "gado-gado," "nasi merah" with portion modifiers — the model used the user's text description to refine. ChatGPT (then GPT-3.5) handled the parsing layer.
The output was a single calorie + macronutrient estimate, returned within seconds of the photo. The user got their instant feedback. The coach got freed from calorie math.
The flow collapsed from multi-step (search → select → adjust portion → confirm → wait for review) to: snap, describe, done.
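To make the layering concrete, here is a minimal structural sketch of how the three layers could compose. The function names, placeholder return values, and nutrition numbers are mine for illustration — this is a sketch of the shape of the pipeline, not Sirka's production models or data.

```python
from dataclasses import dataclass

# Structural sketch only: every function body below is a placeholder standing
# in for a real model or database call.

@dataclass
class MealEstimate:
    dish: str
    portion_grams: float
    calories: float
    protein_g: float
    carbs_g: float
    fat_g: float

def recognize_dish(photo_bytes: bytes) -> list[str]:
    # Layer 2: an image model returns candidate dishes (placeholder output).
    return ["ayam geprek", "ayam goreng"]

def disambiguate_with_llm(candidates: list[str], description: str) -> tuple[str, float]:
    # Layer 3: an LLM uses the user's text to pick a candidate and estimate
    # the portion in grams (placeholder logic standing in for the model call).
    dish = candidates[0] if candidates else description
    return dish, 250.0

def lookup_nutrition_per_100g(dish: str) -> dict:
    # Nutritional cross-reference against the food database (placeholder values).
    return {"kcal": 180.0, "protein": 14.0, "carbs": 10.0, "fat": 9.0}

def estimate_meal(photo_bytes: bytes, description: str) -> MealEstimate:
    # Layer 1 is the user's step: the photo and short description arrive as inputs.
    candidates = recognize_dish(photo_bytes)
    dish, grams = disambiguate_with_llm(candidates, description)
    per_100g = lookup_nutrition_per_100g(dish)
    scale = grams / 100.0
    return MealEstimate(dish, grams,
                        per_100g["kcal"] * scale, per_100g["protein"] * scale,
                        per_100g["carbs"] * scale, per_100g["fat"] * scale)

print(estimate_meal(b"<photo>", "ayam geprek, nasi merah setengah porsi"))
```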
The Research that Changed the Design
The first version of the redesign — before I went deep on user research — assumed the fix was a better database. We were planning to integrate FatSecret or NutriSurvey directly, give users more options, and improve the search UX.
The interviews killed that direction.
Coaches told us the meal log review was burdensome — they didn't want to verify calorie accuracy, they wanted to provide behavioral feedback. Clients told us they didn't read coach reviews of meal logs anyway — when they wanted feedback that mattered, they messaged their coaches directly. Both sides were spending time on something neither side valued.
The research surfaced a more important truth: what users wanted from meal logging wasn't a logging tool. It was a self-monitoring feedback loop. The coach was infrastructure they needed to bypass, not a feature they wanted.
That reframe is what produced the AI direction. We weren't improving meal logging. We were redesigning the loop entirely so that the user could self-monitor in real time, and the coach could spend time on the conversations that mattered.
Outcomes
| Metric | Before | After AI Meal Log | Δ |
|---|---|---|---|
| Average logging time | ~30 min (manual + chat with coach) | < 1 min (automated) | ~30× faster |
| Manual logs requiring coach input | 80% of entries | 40% | −50% |
| Feature retention rate | ~40% | 70% | +75% |
| Coach workload | 4–5 hours/day reviewing logs | < 2 hours/day | −60% |
The metric I'm proudest of is the coach workload one. That's the number that lets the business hit its 105:1 ratio target. Every hour reclaimed from calorie arithmetic was an hour available for the conversations the platform actually existed to provide.
The unintended outcome: the AI Meal Log became a key differentiator in Sirka's retention narrative. Users didn't just stay because the food was tracked — they stayed because tracking finally felt frictionless.
What This Project Taught Me (That I Carry Forward)
This was the project where I first learned the pattern that became the through-line of my work: designing AI as a human force-multiplier, not a human replacement.
The Meal Log AI didn't replace the coach. It moved the coach from arithmetic to coaching. The system did the math; the human did the work that required judgment. Same headcount, dramatically different output.
The same pattern shows up — explicitly — in everything I've designed since:
- At MainStory, QC Audit replaced 3 hours of manual ops review with 15 minutes of supervised AI scoring. CMs review exceptions, not data.
- At MainStory, Monthly Wrapped replaced a manual deck-building ritual with an AI-generated artifact. CMs approve, not assemble.
The Meal Log was the prototype. Three years later, the pattern is still working.
What I'd Do Differently
I'd run the AI feasibility test before validating the database hypothesis with stakeholders.
We spent the first weeks of the project aligning leadership around "we'll fix this with a bigger database." The interviews then changed the direction entirely, which meant re-aligning everyone on a different solution. That's normal — research is supposed to change minds — but the realignment cost time we didn't need to spend.
If I were starting over, I'd run a tiny AI prototype in parallel with the interviews. Even a rough one. Putting a working AI demo in front of stakeholders alongside the research findings would have shortened the alignment cycle by weeks.
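For scale, a prototype like that can be a dozen lines — one prompt against a general-purpose LLM. The sketch below uses the OpenAI Python client and a model/prompt I've chosen for illustration; it's not what we shipped, just the kind of throwaway demo that makes "instant calorie estimate without the database" tangible in a stakeholder meeting.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def rough_calorie_estimate(meal_description: str) -> str:
    # One call, no database: ask the model for a rough calorie + macro estimate.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Estimate calories and macros for Indonesian meals. "
                        "Reply with one line: kcal, protein g, carbs g, fat g."},
            {"role": "user", "content": meal_description},
        ],
    )
    return response.choices[0].message.content

print(rough_calorie_estimate("ayam geprek dengan nasi merah, setengah porsi"))
```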
Lesson: when research is going to challenge the prevailing solution, the antidote isn't more research — it's a working prototype of the actual answer.
Sirka · Indonesia's evidence-based weight management platform · 5 coaches · 500+ premium subscribers · 2022