AI but do my makeup

From eyeshadow to epistemic risk and back again

There’s a difference, my development economics professor Ricardo Hausmann said, between twenty years of experience and one year of experience repeated twenty times.

Hausmann was talking about the experience of low-skilled workers. But we all have domains where we muddle through.

For me, makeup is one of these. At some point I settled into a routine. It was one of those accidental defaults we all set in life, where we just start uncritically repeating what we did the last time. My makeup wasn’t the best makeup I could do, it was just the makeup I did.

So I consulted AI.

Seeing pictures of me, and seeing pictures of my makeup, it quickly identified a number of mismatches. I have an eyeshadow palette where I’ve heavily used just two shades, neither of which AI thought looked particularly good on me.

Instead, it quickly assembled a four-shadow look for me (lid, crease, corner, liner) out of colors I’d never used. I took out a Sharpie, labeled the four ‘good’ ones so I could find them again, and then tried out the look. It was better. Not I’m-the-heroine-of-a-Disney-teen-transformation better, but a bit more like I was wearing makeup than it was wearing me.

More than that, it felt better. When you don’t know what you’re doing, there’s this low level chatter in your brain that never entirely goes away. Am I doing this right? Here we go again. Perfunctory is the best it can feel.

As I shared photos beyond eyeshadow, AI started proactively putting together a complete makeup look for me. An entire look is a bit much for my daily wear, so I asked it to rank the highest-impact things I could do. Of its top five recommendations, two were products I didn’t even own (cream blush and brow gel), and one, concealer, I had almost used up.

So I asked AI for help finding products.

When I clicked over to the concealer it recommended, I noticed it had only a four-star rating. “Surely I could do better?” I thought. So I took a screenshot and my complaint back to the chat window.

AI pushed back, defending its recommendation. It summarized why other people ding the product (it’s lightweight and creases under powder) and pointed out that I don’t wear powder and that a lightweight product suits my skin.

Four-star ratings have long been a problem with internet reviews. The issue isn’t that they’re wrong; it’s that they collapse plural truths into a single ambiguous signal. But AI has more makeup knowledge than any person could hold, and it’s trained on more faces than any makeup artist will ever see. It doesn’t just know the product; it knows the use cases. It knows the art. It knows me. And that lets it disambiguate the four-star review.

So I bought the concealer.

The AI-suggested concealer is better than what I was using. Again, not life-changingly so, but maybe 20-30% better: a better color match, easier to apply, and, as promised, just enough coverage. As with the eyeshadow, the bigger change is that it feels better to use. Its selection felt more intentional than my past choices, which were informed by a mix of friends, sales, and inertia. And because it feels better, it goes on more quietly.

Right now, some of you are feeling uneasy. A machine is influencing me! And I am influencing you to be influenced by machines!

I have worries too. Not that OpenAI is going to add a ‘Buy Now’ button (which it is), but that influence doesn’t always feel like influence. It can feel like clarity, rationality, or help. That’s what makes AI powerful and what makes AI dangerous.

With makeup, the stakes are low: it’s inexpensive, reversible, and the effects are immediately visible. Even someone like me, without a lot of makeup knowledge, can follow simple instructions and judge the difference.

But democracy doesn’t work that way. AI can shift what feels familiar, what feels reasonable, what feels true. And because it often presents itself in the language of help—streamlining, summarizing, clarifying—it’s easy to trust, even when it’s behaving in untrustworthy ways.

Just recently, Reddit users discovered they’d unknowingly been part of an AI experiment. Researchers had chatbots infiltrate debates on the subreddit r/ChangeMyView, and the bots were six times more persuasive than human commenters. AI can reshape how we think without us realizing we’ve been molded.

The answer isn’t to ignore AI or refuse to engage with it. This just leaves you out of the conversation. It’s to engage–thoughtfully, critically, and with enough knowledge to push back.

For me, as a school founder, this means making sure students learn enough content knowledge that they can actually assess AI’s outputs. It means adding more unreliable-narrator novels to English class so they can practice interpreting authoritative-sounding ambiguity. And it means teaching how large language models work so students can see through the gloss to the statistical weights, biases, and cognitive science principles that power their outputs.

But in the meantime, twenty years after I first wandered into a department store to get a professional makeup look in college, I’m finally ready for Year 2 of makeup experience. AI has given me just enough knowledge for learning to begin, replacing a low-level hum of confusion with a quiet sense of direction.

This was me with makeup. Maybe it’s you with gardening, your parents with weight training, or your team with learning AI. When learning, we all need a place to start, and the quiet confidence of believing it’s the right path. Unreliably, AI is here to help.

Co-authored Lifestyle pieces explore the question, What happens when we use AI not to automate our choices, but to understand them?

Co-authored Index for this piece: 85% authored, 15% clarified by AI.
