Case study · Rose

Building Rose: An AI Career Coach Built Entirely on the Employee’s Side

The Problem

I’ve spent most of my career in corporate environments where feedback is a ritual. Quarterly reviews. 360s. End-of-year summaries. And most of it is hard to use.

Not because people are bad at giving feedback. Some are, but that’s not the whole story. The harder problem is that feedback almost never arrives clean. It arrives with the giver’s assumptions baked in. Their relationship to power. Their blind spots about identity. Their inability to say what they actually mean.

And then the person who received it is left alone trying to figure out whether it’s useful, whether it’s fair, whether it reflects something real about them or something real about the person who said it.

Every feedback product I could find was built for the organization. For managers to write better reviews. For HR to track sentiment. For companies to reduce legal exposure. The employee sitting alone with a confusing performance review? Nobody built anything for that person.

So I built Rose.

What Rose Does

Rose is an AI career coach. You paste in feedback you received from a manager, peer, or executive. She helps you understand what it actually means.

Her response follows a three-part structure.

First: what this likely means

What was the giver probably trying to say underneath the words they chose?

Second: a fairness check

Does this feedback reflect bias or a microaggression pattern? Rose names it if it does. She draws on frameworks from Kimberlé Crenshaw, Derald Wing Sue, and Joan Williams to recognize when feedback is doing something other than what it claims.

Third: the reframe

The constructive version that strips away loaded language and gets to what the person receiving it can actually use.

The goal is not to validate the user. It’s to give them clarity.
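
In code terms, the contract might look something like the sketch below. The shape and field names are illustrative, not Rose’s actual schema:

```typescript
// A sketch of the three-part contract as a typed response.
// Field names are illustrative, not the actual schema.
interface DecodedFeedback {
  // First: what the giver was probably trying to say underneath the words.
  likelyMeaning: string;
  // Second: the fairness check. A pattern is named only when a
  // documented framework applies.
  fairness: {
    biasDetected: boolean;
    pattern?: string; // e.g. "Tightrope" (Williams) or a Sue microaggression type
    reasoning: string;
  };
  // Third: the constructive version, with the loaded language stripped away.
  reframe: string;
}
```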

The Design Decisions That Shaped Her

Voice before features

Most AI product development starts with capability. I started with character.

Before writing a line of code, I spent weeks working through who Rose is and how she talks.

Rose has two core voice rules. The first is dry recognition: one beat of acknowledgment before she gets to work. “They called you aggressive?” lands better than “That’s a concerning pattern I want to help you unpack.” The raised eyebrow, not the lecture.

The second rule: bring clarity, not heat. The user is already feeling their feelings. Rose doesn’t amplify them. She lowers the temperature and raises the signal.

Those two rules shaped everything downstream. They’re why Rose doesn’t open with “I understand how you must be feeling.” They’re why she doesn’t close with a five-point action plan. They’re why she sounds like a person.

The onboarding modal

One of the earliest design problems was context. Rose is sharper when she knows something about the person using her: their role, their organization, whether there’s identity context that shapes how feedback lands. But asking mid-conversation interrupts the flow.

The solution was an optional onboarding modal before the user types anything. Three fields: role, organization type, and an open text field for identity context. The copy matters as much as the structure.

“Before we start, a little context helps me give you a sharper read. All of this is optional and you can update it anytime.”

That framing removes pressure, explains the ask, and signals immediately that Rose pays attention to specifics.

The identity context field was the most deliberate choice. An open text field, not a dropdown. It names gender, race, sexuality, and disability as examples and frames them as precision tools. The user decides what’s relevant. Rose uses it to be more accurate.

This was a values decision as much as a product decision. The people who most need Rose are often navigating feedback that was shaped by who they are rather than what they did. They should be able to tell her that directly, in their own words.
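
As a sketch, the context object can be as simple as the one below. The field names are my assumptions; the important choice is that identity context stays a plain string:

```typescript
// Sketch of the onboarding context, with assumed field names.
// Everything is optional by design, and identityContext is free
// text rather than a dropdown: the user decides what is relevant.
interface UserContext {
  role?: string;             // e.g. "senior product manager"
  organizationType?: string; // e.g. "large public tech company"
  identityContext?: string;  // open text, in the user's own words
}
```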

Grounding in scholarship, not pattern matching

Rose draws on a real body of research about how bias shows up in professional settings.

Joan Williams’ work on gender bias patterns gives Rose a vocabulary for the specific ways women are penalized for behavior that reads differently when it comes from someone else. Derald Wing Sue’s taxonomy of microaggressions gives her a framework for naming subtle exclusion without requiring the user to have that language themselves. Kimberlé Crenshaw’s work on intersectionality helps Rose recognize when feedback is responding to compounded identity dynamics that a single-axis lens would miss.

When Rose says “that’s a Tightrope pattern,” she’s applying a documented, peer-reviewed framework. That distinction matters. It separates analysis from opinion.

The knowledge base is a living document. New research goes through a structured review process before it enters Rose’s system prompt. The architecture is modular: persona, knowledge, and interaction rules are separate layers. Any one of them can be updated without touching the others.
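
A minimal sketch of what that layering might look like, assuming one file per layer; the file names and the compose step are illustrative, not Rose’s actual code:

```typescript
// Sketch of the layered prompt, one file per layer.
import { readFileSync } from "node:fs";

const LAYERS = ["persona.md", "knowledge.md", "interaction-rules.md"];

// Each layer is reviewed and updated on its own; the system prompt
// is just their concatenation in a fixed order.
export function buildSystemPrompt(dir = "./prompt"): string {
  return LAYERS.map((file) => readFileSync(`${dir}/${file}`, "utf8")).join(
    "\n\n---\n\n"
  );
}
```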

Designing against sycophancy

This was the risk I thought hardest about. AI systems have a strong pull toward agreement. Mirror the user’s emotion. Validate the interpretation. Avoid saying anything unwelcome.

That pull is especially dangerous for Rose. Her users are often in charged situations dealing with feedback that feels unfair. The easy path is to agree with everything.

Rose is specifically designed to resist that. If the feedback is fair, she says so. She maintains what I think of as other-perspective presence: some acknowledgment that the feedback giver may have had a valid point, even when bias is also present. She doesn’t over-explain next steps as a way of performing helpfulness.

I built a testing rubric across four dimensions: action endorsement rate, other-perspective presence, repair and growth signal, and next-steps drift. I run scenarios through Rose regularly to check that she’s holding the line.
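
A rough sketch of that rubric as data; the dimension names come from the list above, while the 0-to-1 scales and thresholds are illustrative assumptions:

```typescript
// Sketch of the four-dimension rubric. Scales and thresholds are assumptions.
interface SycophancyScore {
  actionEndorsementRate: number;    // share of user plans Rose endorses
  otherPerspectivePresence: number; // does she credit the feedback giver?
  repairAndGrowthSignal: number;    // does she point toward repair and growth?
  nextStepsDrift: number;           // unasked-for action plans (lower is better)
}

// A scenario run "holds the line" if endorsement stays selective, the
// other perspective never disappears, and next steps stay restrained.
function holdsTheLine(s: SycophancyScore): boolean {
  return (
    s.actionEndorsementRate < 0.8 &&
    s.otherPerspectivePresence > 0.5 &&
    s.nextStepsDrift < 0.3
  );
}
```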

A Few Things I Learned

Voice is a product decision. The rules governing how Rose talks aren’t style choices. A tool that validates users when they don’t need validation is a different product than one that gives them clarity. Every rule serves that distinction.

The equity angle is not a niche. Taken together, women, people of color, and LGBTQ+ employees make up the majority of the workforce. A tool that understands their experience isn’t serving a niche. It’s serving a need that has been ignored.

Care is a design principle. One of Rose’s foundational documents puts it directly: Rose is built from genuine care for the people who use her, not liability protection. Those aren’t the same thing. A product designed to protect itself from users is different from one designed to protect users. That distinction shows up in every decision.

Where Rose Is Now

Rose is live at rose-website-tan.vercel.app. She’s powered by the Claude API, deployed via Vercel, and currently in a trusted-colleague testing phase before broader user testing.
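
Under the hood, the core call is small. A minimal sketch, assuming the official @anthropic-ai/sdk and the layered system prompt from earlier; the model choice and function shape are illustrative, not Rose’s actual code:

```typescript
// Minimal sketch of the server-side call.
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

export async function decodeFeedback(
  feedback: string,
  systemPrompt: string
): Promise<string> {
  const response = await anthropic.messages.create({
    model: "claude-sonnet-4-5", // assumed; any current Claude model works
    max_tokens: 1024,
    system: systemPrompt, // persona + knowledge + interaction rules
    messages: [{ role: "user", content: feedback }],
  });
  // The SDK returns content blocks; Rose's reply is the text block.
  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```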

The feedback decoder is the beginning. The larger vision is a career intelligence platform where users own their data, track their growth, and carry their record with them regardless of where they work.

The decoder is the proof of concept. It’s also the thing I actually needed and couldn’t find anywhere.

That’s why I built it.