When we launched FameCake, AI style transforms were the headline feature. Upload a photo, pick a style - Studio Ghibli, GTA, Comic Book, Synthwave - and get billboard-ready creative in seconds. Fifteen styles. Image-to-image generation via Gemini. The demo was genuinely impressive.

Five months later, we deleted all of it. One commit: 78 files changed, -6,482 lines.

What We Built, and What We Killed

Style transforms were the first AI feature we shipped: fifteen styles, a prompt library in Firestore so we could hot-swap prompts without deploys, and background task queues to keep generation off the request path. The system got sophisticated fast.

Along the way, we also tried:

  • Veo 3.1 video animation - 8-second looping animations from static images. Didn’t survive the day.
  • Background removal - tried twice (in-house library, then fal.ai BiRefNet). Both abandoned within days.

Content moderation came next. Then outpainting for extreme billboard ratios. Then the style system got deleted entirely.

Why Style Transforms Failed

They didn’t fail technically. Gemini produced good results. Users liked the output. The problem was simpler than that: styles weren’t why people booked billboards.

Nobody opens FameCake thinking “I want to Ghibli-fy my photo.” They open it thinking “I want my partner’s face on a billboard for their birthday” or “I want to advertise my shop on a local screen.”

Style transforms added friction to a flow that needed to be fast. Pick a photo, pick a screen, pay. Every extra step - choose a style, wait for generation, decide if you like the result, maybe try another style - was a chance for someone to bounce.

The styles also created a support surface. “Why does my photo look weird in Oil Painting?” “Can you add a watercolour style?” “The GTA style made my face look wrong.” Every style was a new failure mode to manage.

Meanwhile, the features nobody would put in a demo reel were quietly becoming essential.

What Survived

Content moderation started as a post-payment afterthought in December: Gemini Flash scanned uploads, flagged anything inappropriate, and created a pending_review state for admins. Within weeks it became clear this needed to happen before payment, not after. We rebuilt it as a three-tier pre-payment system:

  • Tier 1 (Safe): auto-proceed, auto-approve after payment
  • Tier 2 (Questionable): warning modal, flagged for manual review, Stripe auth hold
  • Tier 3 (Blocked): user must edit content before proceeding
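
The tier routing above can be sketched as a small dispatch function. This is a minimal illustration under stated assumptions, not FameCake's production code: the tier names mirror the list, but the `Decision` fields and the `route` function are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    SAFE = 1          # auto-proceed, auto-approve after payment
    QUESTIONABLE = 2  # warning modal, manual review, auth hold
    BLOCKED = 3       # user must edit content before proceeding


@dataclass
class Decision:
    can_pay: bool        # is the user allowed to reach checkout?
    show_warning: bool   # display the "flagged for review" modal?
    hold_payment: bool   # Stripe auth hold instead of immediate capture
    needs_review: bool   # enqueue for a human moderator


def route(tier: Tier) -> Decision:
    """Map a moderation tier to pre-payment checkout behaviour."""
    if tier is Tier.SAFE:
        return Decision(can_pay=True, show_warning=False,
                        hold_payment=False, needs_review=False)
    if tier is Tier.QUESTIONABLE:
        return Decision(can_pay=True, show_warning=True,
                        hold_payment=True, needs_review=True)
    return Decision(can_pay=False, show_warning=True,
                    hold_payment=False, needs_review=False)
```

The point of the structure is that moderation gates the money: a Tier 2 upload can still pay, but the funds sit on an auth hold until a human clears it.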

Billboards are public. One inappropriate image on a screen outside a school is a relationship-ending event with that venue. Content moderation isn’t a feature - it’s a prerequisite for the business existing.

Outpainting solved a constraint we couldn’t work around. Billboard screens come in ratios like 1:2 and 2:1 that Gemini can’t generate natively. The solution: generate at the closest supported ratio (9:16 or 16:9), then use Imagen 3 to extend the canvas to the target dimensions. Without this, we’d have to crop aggressively or reject certain screen formats.
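
The ratio-matching step can be illustrated in a few lines. This is a simplified sketch of the idea — pick the nearest ratio the generator supports, then compute how much canvas the outpainting pass must add — not the actual pipeline; the supported-ratio table and both helper functions are assumptions for illustration.

```python
# Aspect ratios the generator handles natively (hypothetical set).
SUPPORTED = {"16:9": 16 / 9, "9:16": 9 / 16, "1:1": 1.0}


def closest_supported(target_w: int, target_h: int) -> str:
    """Pick the native ratio nearest to the billboard's ratio."""
    target = target_w / target_h
    return min(SUPPORTED, key=lambda name: abs(SUPPORTED[name] - target))


def extension_px(gen_w: int, gen_h: int, target_w: int, target_h: int):
    """Pixels of canvas outpainting must add on each axis to reach
    the target ratio without cropping the generated image."""
    target = target_w / target_h
    if gen_w / gen_h > target:
        # Generated image is too wide for the target: extend height.
        return 0, round(gen_w / target) - gen_h
    # Too tall (or exact): extend width.
    return round(gen_h * target) - gen_w, 0
```

For a 2:1 screen, a 1920×1080 generation (16:9) needs 240 extra pixels of width from the outpainting pass; a 1:2 screen gets the mirror-image treatment from a 9:16 generation.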

AI reframe replaced style transforms as the creative pipeline. Instead of artistic transformation, it does something utilitarian: fit your photo to different screen ratios while preserving composition. Less impressive in a demo. More useful in production.

The Pattern

The AI features that survived all share one trait: they solve constraints that have no non-AI alternative. Moderation at scale requires visual understanding. Outpainting requires image generation. Reframing requires compositional awareness. Style transforms had a non-AI alternative: just use the original photo. And that’s what most users preferred.

The Lesson for AI Products

There’s a seductive loop when building with AI. You discover the model can do something impressive, so you build a feature around the capability. Style transforms felt like a differentiator. “Upload a photo and get 15 different artistic interpretations” is a great pitch. But capability-driven features have a short shelf life.

We were building features because the model could do them, not because users needed them.

— Building FameCake, internal retrospective

The features that lasted were constraint-driven. We needed moderation because billboards are public infrastructure. We needed outpainting because screen hardware comes in awkward ratios. We needed reframing because one photo needs to work across many formats. These aren’t optional. Without them, the product doesn’t function.

This maps to something I’ve written about before. In When AI Isn’t Fit for Purpose, I covered Salesforce walking back autonomous agents to deterministic scripting. Same principle: AI that solves a real constraint sticks. AI that’s the headline feature gets rearchitected.

What We’d Do Differently

  • Ship without the AI hook. If we’d launched with just photo upload and manual creative, we’d have found product-market fit faster. The style system consumed months of engineering that could have gone into supply, payments, or distribution.
  • Add AI to remove friction, not create options. Outpainting and reframing remove steps the user would otherwise have to do manually. Style transforms added steps. The direction matters.
  • Let moderation lead. We treated safety as a post-launch addition. It should have been day-one infrastructure. Every marketplace that handles user-generated content in public spaces learns this, usually the hard way.

The Uncomfortable Truth

The AI features that impress investors and generate social media buzz are often the first to get cut. The ones that survive are invisible to users. Nobody writes about content moderation. Nobody tweets “check out this outpainting pipeline.” But those are the features running in production five months later.

If your AI feature would still be impressive as a standalone demo but isn’t actually solving a constraint in your product, it’s probably a style transform. Beautiful, fun to build, and eventually -6,482 lines in a single commit.

Build around the problem, not the capability.