Condemned to Decide


Part I: Bharat Mandapam

There’s a particular kind of person who is so good at their job that they have no time to waste on talking about it to the general public. I’ve built a small career out of being useful to that person. 

I write for some incredible founders, philanthropists, and scientists. Some of them are working on India’s AI ecosystem, which means I get to see a lot of the actual progress happening in our universities, labs, industries, and wherever else progress happens.

This is how I ended up at the AI Impact Summit in Delhi last month. It was the largest gathering I’ve ever been to.
250 thousand people.

The summit was held at Bharat Mandapam, the biggest venue I’ve ever set foot in. I must’ve done 250 thousand steps that week.

The scale of the whole thing is very hard to describe, because every description sounds awful, but actually being in there was exhilarating. I wonder if people feel this way about mosh pits.

ai impact is fun bro just trust me bro

Unlike mosh pits, there was high security everywhere.

There were tons of government officials and dignitaries from countries around the world. So many of the most powerful people in tech. CEOs of everything. PM Modi inaugurating proceedings.

At every single booth, and in every single room, the main focus was demonstration. Here’s what we built, here’s how it works, here’s the gap it fixed, here’s what the outcomes are. Everything from AI assisting doctors, to AI helping farmers with soil optimization, to local governments using AI to shorten response times in grievance redressal systems, to AI tailoring math lessons for students who need extra help in village schools where teachers are in short supply.

The future looks like AI-powered everything. Decide how you wanna feel about that.

I expected to be overwhelmed, and I was.
What I didn’t expect was to feel moved or affected, and I was. There was a real earnestness to all of this. People cared deeply about trying to solve these problems.

I ended up talking to a Delhi police inspector on the first day, in the margins of all this madness. It was slightly intimidating to speak to an actual senior police guy along with his team, and the conversation naturally steered toward whether AI could help with their work. Within one second, all of them were like: the paperwork. They fill out thousands of pages for a single case. We talked about how there were tools being built for this kind of thing.

Something was said about it freeing up time for them, and they all laughed. Ma’am, there will be no free time. The crime in Delhi is unlimited. At least this way we can spend time on solving cases.

He was identifying the difference between a productivity problem and a systemic one. A tool can fix the first, but neither of us had anything to say about the second.

If you made an online payment today, or cleared airport security with your Aadhaar card, you have Nandan Nilekani to thank for the infrastructure that made that work. He helped build some of the key pieces of technology that have transformed how a billion people live, including the world’s largest payment interface. There were many government representatives at the summit figuring out a model of this for their own countries. I’m fortunate to write for him and his nonprofit that’s doing a lot of incredible work using AI for good, which is where I get a lot of perspective on this subject.

He described the current moment as two races running simultaneously. A race to the top, and a race to the bottom.

The race to the bottom is AI being used to make life worse: deepfakes, AI-generated slop flooding every corner of the internet, disinformation. The race to the top is AI being deployed to serve people. Using AI to improve their lives in whatever capacity it can.

Right now, the race to the bottom is winning, and the honest question for anyone working in this space is what they’re actually doing about it.

The best answer I got to that question came from the least expected place: a priest.

This next part sounds like a ‘walked into a bar’ joke but it really happened. There was a panel discussion with government officials, people from the Gates Foundation and other research think tanks, people from academia, and a priest.

After all the predictable questions had been asked of the usual people, someone asked the priest: Do you think AI is good? I rolled my eyes. The man flew in from Rome to answer this?!!! But then he started talking and I promise I nearly dropped my phone trying to note down what he was saying.

He said: if you ask a thirsty man what is good, he will say water. Good is not a property of the thing; it depends entirely on the need it’s addressing. What is the need? What do people want AI to do for them?

From there, he drew a distinction between values and norms. A value is the thing you’re actually trying to protect, the thing that you want to hold onto because it’s what makes you who you are.

A norm is the structure you build around the value: the rules, the restrictions, the behaviours designed to keep the value intact. They look related (and they are) but they are not the same thing. If you treat them as the same, all your good intentions go wrong because when the norm starts to feel outdated or inconvenient, people abandon it. When they do that, they also end up abandoning the value they thought they were protecting.

He then gave an example: the Catholic Church holds marriage as an important value, and yet priests don’t marry. This looks like a contradiction until you understand what the norm is actually doing. Marriage demands everything. So does a vocation to the priesthood. A priest wouldn’t be able to give his life fully to both a spouse and the vocation, and so the norm of priests not marrying protects the value.

What he was really describing, to a room full of people thinking about AI, was the difference between asking:

Are we using this technology responsibly? vs asking,

What are we actually trying to protect, and is this the right structure to protect it?

The first question will have a thousand conferences dedicated to it. The second question is what you’re going to have to figure out for yourself. I have a three-year-old at home who did not choose to grow up in this moment. She will, for now, inherit whatever values I decide are important.

I need to know what I value before someone else’s norms become the default for us.


Part II: Are you agentic or mimetic?

Silicon Valley has a theory about AI, and the theory is this: AI is about to divide humanity into two groups, those who are agentic and those who are mimetic.

The mimetic person copies what others do, follows instructions, and executes well once they’ve been told what to do (sounds like a good employee on paper). The agentic person doesn’t wait to be managed or instructed. They see the gaps, decide what needs doing, and go do it. Every team in your company has both, every friend group has both.

When it comes to the future of work and AI, here’s what’s happening. Agentic people are using AI to massively expand what they can do. There are companies where each person’s job is basically to manage a team of AI agents that run around and do the grunt work. And before you assume this is only happening in Silicon Valley: no, it’s happening everywhere.

I work for a company in Bangalore exactly like this. It’s genuinely fascinating and also a little insane to watch in real time. As a human, it’s your job to tell AI what to do.

So what about the mimetic people? They’re now in an uncomfortable position because, according to Silicon Valley, they’re functionally indistinguishable from the AI agents they work alongside. If you can be replaced by a sufficiently good prompt, the argument goes, you will be.

As crappy as this sounds, I think the split is real. I also think it’s worth noticing that the people most excited about this weird future of work have already decided *they’re the agentic ones.*

It’s very us-agentic-people vs. people-not-like-us-mimetic-copycats. That’s a comfortable place to argue from.

This is what all the smart people have to say about the future of work.

I’m gonna quickly tell you the story of a 21-year-old guy named Roy Lee because it’s ridiculous and also interesting.

He went to Columbia with one clear, explicit goal: find a co-founder, start a company. His initial plan, like that of most tech bros, was to join one of the tech biggies (Meta, Amazon, etc.). But he got frustrated with their long, tedious technical interviews, where they expect people to live-code stuff. So he built a tool called Interview Coder.

It’s an invisible screen overlay that listens to your technical interview in real time and feeds you AI-generated answers that only you can see. He used it to ace an Amazon interview, and got the offer.

He then turned down the offer and instead posted the video of the interview online, and it got millions of views. Amazon was SO cranky and pissed because someone had managed to cheat during their interview, and they threw a whole tantrum over it.

They complained to his university and got him kicked out, but that was kind of the point for him. By then, he was famous, he’d found his co-founders, moved to San Francisco, renamed his company Cluely, and launched with the stated mission to cheat on everything. Their manifesto was this: the future won’t reward effort. It’ll reward leverage.

Close your eyes and imagine their founding team.

Open your eyes. Here they are.

jump scare lol

I didn’t share this story with you because I think you or I are anything like Roy Lee. Most of us aren’t, for better or worse. 83,000 people signed up for Cluely, and I’m not sure whether ALL of them want to ‘cheat on everything’ (but maybe I’m being very charitable here).

I think a lot of us just like not having to do hard work or decide things. We are constantly, always, without a single break, having to DECIDE. THINGS.

And this isn’t a modern problem, free will has always been the burden of humanity. What should I study? What do I get good at? How do I care for this body I’ve been given? How do I love the people I’ve got? What do I do with my one precious life?

Which is why, when Silicon Valley says the future belongs to the agentic people, I think they’re not wrong. Being someone with agency is hard, that’s why most of us just copy what others want, and copy what others do.

Sartre called it being condemned to be free, which is a very French way of saying: nobody is coming to save you.

I think highly agentic people have sort of made their peace with that. They’ve learnt to have a good and healthy working relationship with their own judgment.

Knowing what you actually think vs. what everyone around you thinks. Knowing what you truly want vs. what everyone else wants. Getting in touch with your lizard brain.

Part III: The lizard brain

I’ve been learning MMA for about a year. This might lead you to believe I’m good at it, which would be incorrect. What I am is a big fan of the sport. I never expected to like it, and I didn’t sign up because I did. I was mainly just curious, and curiosity has gotten me into lots of fun situations.

A few weeks in, when I was still an absolute beginner, my class did something called Iron Man. One person stays in the center the whole time while everyone else takes turns coming in 1-on-1 to spar them.

The first couple of rounds I was giggling my way through it, very much in the energy of “this is fun, we’re just learning and having fun.” But by the third round or so, I was so exhausted I could collapse. Sparring is so much more tiring than it looks.

There’s someone coming at you, and you’re constantly having to move, anticipate what they’re gonna do, make decisions, manage your own panic, keep your eyes open, and punch, kick, and whatever.

And in the midst of that exhaustion, something suddenly hit me (wait, wrong wording, because EVERYONE was hitting me). But it felt like I suddenly got it. When you’re fighting, you’re participating in one of the most essential and ancient acts of survival, which (whether we like it or not) has always involved violence. And fighting activates a part of your brain that doesn’t get much use if you live a regular life.

I know this sounds like the part where some epic music starts playing, and I, on that mat with my gloves on, discovered some hidden well of fighting spirit and began to move in some ancient, powerful way. This isn’t a sports movie. None of that happened.

Marvel called me after they saw this

Being a good fighter requires a shocking amount of skill, training, and talent, all of which I am slowly and humbly in the process of acquiring. But I continued round after round to the point of nearly passing out, and in that moment I began to appreciate what makes this sport so different, and so addictive, even for a pansy girl like me.

I’m going somewhere with this next part, I promise.

In college, I had to spend a whole year studying comparative physiology. Basically what you learn in that class is what you have in common with the rest of the world. And I mean animals, plants, down to bacteria. You’re like, they’ve got mitochondria? Us too, check. Cell membrane? Girl same hi 5.

It gets particularly fascinating when it comes to your brain.

Some parts of your brain are so critical from an evolutionary perspective that they’re present in creatures like lizards and fish. Your brain is a collection of systems with different jobs. The part that handles threat detection, movement, the instant stress response, the gut feeling of knowing whether something is safe or not: let’s call that the lizard brain. Because you share that part of your brain with a bunch of creatures, and one of them is, in fact, a lizard.

(Side note: Neuroscientists are arguing about the validity of this analogy. Some call it the triune brain, but that’s not catchy at all, so we’re sticking w lizard.)

Needless to say, you obviously are NOT like a lizard, because on top of your lizard brain, you now have additional compartments.

Your cortex is the part responsible for language, analysis, reasoning. It takes everything you’ve learnt and experienced and helps you use it to do stuff. Everything I write in my blog is an exercise in wrestling my cortex into submission.

Your cortex is beautiful and remarkable but also quite unreliable. If you fall in love, this part of your brain gets stupid. It forgets that you’re hungry or tired, it doesn’t disagree when it should, it overrides all its judgement and doesn’t think straight. You will never find a lizard doing this dumb stuff.

The lizard is embarrassed for you right now.

he said so himself

Now let’s treat AI like a brain for a second, like so many of you already do (hehe burn).

AI is all cortex, no lizard.

It has read more information than you could read in your lifetime, and it does the same thing your cortex does: it recalls information and pattern-matches, at a scale yours will never match. Great.

But what it cannot do is the lizardly stuff: sensing danger in a split second, trusting a gut feeling instead of overthinking, basically keeping you alive.


A guy named Scott Alexander wrote a story called The Whispering Earring in 2012, before AI existed in its current form. Back then, Call Me Maybe was a hit song, and everyone thought the world was ending. Cute.

His story is about a little jewel that tells you what to do, and it’s never wrong. The very first thing it whispers to the wearer is: it would be better for you if you took me off.

The guy doesn’t, because he is a hopeful idiot and also because it’s ✨sparkly and fun✨ and also, if it’s never wrong, what exactly is there to worry about?

The earring starts out with advice on the big stuff: career, relationships, where to live. Everything goes brilliantly. Then it works its way down to smaller things. What to eat. When to sleep. Eventually it goes all the way down to individual muscle movements. The guy stops thinking entirely. He just does what the earring says. He lives an extraordinarily successful, completely optimised life.

When he dies and the priests prepare the body for burial, they find his brain has almost entirely wasted away. The cortex was gone. But the older parts of the brain (the lizard brain, if you will) were completely fine.

Alexander’s point is simple: if something is very good at processing information, let it process information. What you should NOT do is hand over what comes next: deciding what to do with the information you have, making the judgement call, and then taking responsibility for the outcome. HOW DO YOU HAVE AGENCY?

The story ends with a line that took me a while to understand.

It is well that we are so foolish, or what little freedom we have would be wasted on us.

What he means is THANK GOD you’re not purely rational. Thank God there’s a part of you that’s ancient and stubborn that overrides what everyone says and thinks, and just acts based on gut feeling. That’s the lizard.

He says hi

Here is the thing about AI that is also true of politics, of your health, and of every large and inconvenient fact about the world: it does not care whether you are paying attention. It is happening to you either way. It will show up in your life, at your doctor’s office, in your kid’s classroom, in your next job application, in the news.

You don’t get to opt out; there is no opting out. If you asked me whether AI is ‘good or not’, I honestly don’t have great answers. And I’m suspicious of anyone who seems certain about theirs.

This is what I’m aiming to do:

  • Decide what I actually value
  • Decide what norms I’m setting up for my life to protect those values
  • Decide what I will NOT hand over (independent thinking, imagination, etc. whatever it is for you)

Here’s who isn’t gonna decide that for you.

BOO jumpscare again

Here’s who should.

hi it’s me, I’ll take care of u

What I think we need, in giant quantities, is curiosity and humility and the ability to hold two things at once. AI can be good, but we’re gonna have to be fierce about how we protect what matters.

Put your gloves on.


Appendix: So what do you actually value? A little exercise.

Before we start: this isn’t an anti-AI manifesto (but you are free to develop one of your own if you want).

My perspective is that AI can be genuinely useful, and most of us are already starting to use it for some tasks. This is just about knowing where you stand on the more important stuff.

Step 1: Pick your values

This is a tentative list, feel free to make your own.

Creativity: The ability to come up with ideas that are your own, that came from how you specifically see the world.

Relationships: The connection you have with the people you love.

Autonomy: Making your own decisions after assessing the information you have, and then owning the outcomes.

Mastery: Getting genuinely good at things through practice and struggle.

Taste: Having your own take and opinion about art, travel, music, food, writing. Basically knowing what you actually like and why.

Curiosity: Knowing what questions you care about, and what answers matter to you.

Originality: Having ideas that start with you, even if AI helps you develop them.

Step 2: Figure out your norm

This is the important part. Once you know what the value is, you can arrive at the norm.

For each value you choose, ask yourself: am I looking for an outcome here, or is the process the whole point? I’ve written about this before: there are many things in life where the process really DOES matter. Those are the things you can’t outsource to AI.

“For me, [value] means I won’t use AI to ___________.”

For me, relationships matter, which means I won’t use AI to resolve a conflict with someone I love.

Here’s an example of what my list looks like:

  • Creativity: I use AI for work with very stringent quality control, and my clients know. My blog, my nonsense Instagram posting, anything with my actual voice on it, is mine!!! (because AI does a really bad job with sounding like me. I’ve tried.)
  • Relationships: I won’t use AI to communicate with my friends, family, or anyone I actually care about. I won’t use it to sort out relational dynamics or reply to difficult messages. I won’t use AI to apologize for me. If it matters, I’m writing it.
  • Autonomy: I won’t use AI to decide whether to take a job, end a relationship, change my treatment plan, or figure out how to parent.
  • Mastery: I can’t use AI for mastery anyway, sigh. If AI could make me good at MMA, you’d best believe I’d be on it.
  • Taste: I will never ask AI what it thinks about things. I won’t ask AI for opinions on books, films, music. I will think about that myself, or use communities like Reddit, or ask my friends. Human judgement matters here.
  • Curiosity: If something surprised me or confused me or made me want to know more, I’m going to sit with my own thoughts before scrambling to find out what others think, or use AI to decide how to think about it.
  • Originality: If I need to come up with ideas, I’m going to sit and think through problems, solutions, context etc. on my own before I begin to involve AI in the process.

I fully appreciate that your list will look different. My list is evolving and I feel quite vulnerable sharing this publicly. But here’s where I am for now.

Step 3: For parents

Kids who learn to think, write, argue, and reason on their own will be dramatically better at using AI than those who don’t.

You can’t evaluate good thinking if you’ve never done it yourself. You can’t prompt well if you don’t know how to pick between many answers that all look *correct* or *good*. You’ll be much, much, much better at being in charge of a tool if you know how to do the thing it’s doing for you.

Someone who has written enough essays knows when AI has produced a bad argument. Someone who has had to wrestle with their own opinions will be better at spotting when AI is giving them BS. That’s what I’m building towards: someone who can use AI along with their own judgement.

For that, they’ll need to go through the hard, annoying, mentally exhausting, mistake-making journey of developing a sense of judgement. This foundation is what keeps them in charge. What I’m protecting against is the shortcut that removes the skill entirely.

Fine to use AI:

  • To explain a concept in a different way when the textbook explanation isn’t making sense
  • To come up with mnemonics or memory devices to remember stuff
  • To simplify a complex topic before going deeper
  • To come up with questions or quiz questions to test their understanding after studying

Not a good idea:

  • Asking AI to write for you before you’ve tried to construct your own argument
  • Asking AI what to think about something before you’ve thought about it on your own
  • Using AI to figure out social dynamics and friend situations 
  • Using AI to process emotions and discomfort
  • Asking AI to solve a problem instead of figuring out where you’re stuck 

If you’ve made it this far, thank you for reading and for taking any of this seriously. Thanks for spending this time with me.

If you have questions about the good AI work happening in India (believe me, there’s lots), or if you want to know more about my little values-norms framework, or just want to talk about any of this, email me at soniarebeccamenezes@gmail.com.

I love these conversations. I’m always learning, and I’m always happy to chat.
