AI expertise for consumer insights

"There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy."
Hamlet says this to his friend after seeing a ghost. Horatio is a scholar. He has a framework for understanding the world, and it is a good one, but it does not cover everything. The ghost is real and Horatio's philosophy has no room for it.
I think about this line a lot when I listen to AI leaders talk about the future of work.
The people building these systems are, overwhelmingly, coders. They live in a world of text, logic, and digital computation. When they look at the economy and ask "what can AI replace?", they see the economy through the lens of their own experience. They see tasks that look like coding: information processing, document generation, data analysis, pattern recognition. And from where they sit, it does look like AI can do most of it.
They're right about the part they can see. They're missing the rest. The coder's view of the economy mistakes the slice visible from a screen for the whole thing.
Dario Amodei, the CEO of Anthropic (the company that makes Claude, the AI I use daily), published an essay called "Machines of Loving Grace" that imagines AI solving biology, curing disease, lifting the developing world, and compressing a century of progress into a decade. It is thoughtful and worth reading. But there is a conceit underneath it that I find troubling: the implicit belief that intelligence is computation, and that enough computation can therefore solve everything. Taken to its logical end, this is an attempt to remake God in the image of an engineer.
Here is something computation cannot do.
You walk into a house and smell cinnamon and old wood. Before you form a single conscious thought, you are six years old, standing in your grandmother's kitchen on a Saturday morning. The feeling arrives whole: warmth, safety, loss, love, the specific quality of winter light through a window you haven't seen in thirty years. You didn't choose to retrieve this memory. It grabbed you. It came through your nose and went straight to a part of your brain that doesn't trade in logic or language.
No language model will ever have that experience. Not because the engineering isn't good enough yet. Because the machine has no grandmother. It has no childhood. It has no body that spent years building the associations between a molecule and a feeling. It processes the word "cinnamon" as a token with statistical relationships to other tokens. You process cinnamon as a life.
That observation sounds sentimental. It is actually technical. And it points to a blind spot in how the AI industry thinks about intelligence, work, and what it means to be human.
What the machine actually does
Modern AI runs on a loop. You give it a prompt. It predicts the most statistically likely next token. It chains those predictions together into something that looks like thought. Then you evaluate the result, adjust, and run the loop again.
I have written about this loop before. It's called a Ralph Loop, after Ralph Wiggum from The Simpsons, because Ralph is not brilliant but he keeps trying, and the trying is the point. The loop is: try, evaluate, feed back, try again. It is the core mechanic behind every useful AI workflow I have seen.
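As a concrete sketch, the loop looks something like this. Everything here is hypothetical scaffolding, not a real API: `generate` stands in for a model call, `evaluate` for whatever quality check you apply, and the toy demonstration at the bottom exists only to make the mechanic visible.

```python
def ralph_loop(prompt, generate, evaluate, threshold=0.9, max_tries=10):
    """Try, evaluate, feed back, try again -- keeping the best attempt."""
    feedback = ""
    best_draft, best_score = None, 0.0
    for _ in range(max_tries):
        draft = generate(prompt + feedback)   # try
        score = evaluate(draft)               # evaluate
        if score > best_score:
            best_draft, best_score = draft, score
        if score >= threshold:                # good enough: stop
            break
        # feed back: tell the next attempt how the last one did
        feedback = f"\nLast attempt scored {score:.2f}; improve it."
    return best_draft, best_score

# Toy demonstration: each "generation" adds one exclamation mark,
# and the evaluator rewards up to three of them, so the loop converges.
attempts = []
def toy_generate(p):
    attempts.append(p)
    return "hello" + "!" * len(attempts)

draft, score = ralph_loop("hello", toy_generate,
                          lambda d: min(1.0, d.count("!") / 3))
# draft is "hello!!!" with a score of 1.0 after three passes
```

The point of the sketch is the shape, not the details: a generator, an evaluator, and a feedback channel, run until the evaluator is satisfied.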
The loop is powerful. It can write code, generate reports, classify images, predict consumer preference from molecular structure, and produce first drafts of nearly anything. The people who harness it well are getting extraordinary results. I run these loops myself, often overnight, and wake up to finished work.
But the loop has a hole in it. A big one.
The loop cannot decide what to work on.
The question the machine cannot answer
Life is a giant multi-objective optimization problem. You are always optimizing for multiple things at once, and those things conflict. Career and family. Speed and quality. Profit and ethics. Health and pleasure. Freedom and security.
The hard part is not solving any one of those problems. The hard part is deciding which ones matter, how much, and to whom. That decision is subjective. It depends on who you are, where you come from, what you have lost, what you love.
AI can optimize. It is the best optimizer we have ever built. But it cannot choose the objective function. It does not care about anything. It has no preferences, no losses, no skin in the game. It is engineered, and it is cold.
Someone will object here: of course the machine can optimize for whatever you want. You just tell it. Maximize revenue. Minimize cost. Maximize user engagement. The machine will do exactly that, and it will do it better than you can.
This is true and it misses the point. You can always give the machine an objective. But where did that objective come from? You chose it. And why did you choose it? Because of some higher-level goal. And why does that goal matter? Because of something above it. Follow the chain far enough and you always arrive at a place where the answer is not logical. It is felt. You care about your kids' future. You want your work to mean something. You are afraid of being irrelevant. You have a gut sense that this path is right and that one is wrong, and you cannot fully explain why.
That top-level choice, the one at the root of the whole tree, is a value. And values don't come from data. They come from living a life in a body that feels things.
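The mechanical half of that argument fits in a few lines. In this toy sketch (the plans, scores, and objectives below are all invented for illustration), the optimizer happily maximizes whatever objective it is handed, and nothing inside it can choose which objective to hand it:

```python
def optimize(options, objective):
    """Return whichever option scores highest under the given objective."""
    return max(options, key=objective)

# Invented options with invented scores along conflicting dimensions.
plans = [
    {"name": "ship fast", "revenue": 9, "quality": 3, "family_time": 2},
    {"name": "ship well", "revenue": 6, "quality": 9, "family_time": 5},
    {"name": "ship less", "revenue": 3, "quality": 7, "family_time": 9},
]

# Same optimizer, three different value systems, three different answers.
# Nothing in this code decides which objective is the right one to pass in.
by_revenue = optimize(plans, lambda p: p["revenue"])
by_quality = optimize(plans, lambda p: p["quality"])
by_balance = optimize(plans, lambda p: min(p["revenue"], p["quality"],
                                           p["family_time"]))
```

The choice between `by_revenue`, `by_quality`, and `by_balance` is exactly the choice the machine cannot make. It sits in the caller, which is to say, in you.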
Humans are not engineered. We evolved. And evolution gave us something that no architecture diagram includes: emotion.
Why emotion is not a bug
Technical culture treats emotion as noise. Something to filter out so the signal comes through clean. The neuroscientist Antonio Damasio showed decades ago that this is backwards. Patients with damage to the emotional centers of their brains can reason perfectly well in the abstract, but they can't make decisions. They can list pros and cons all day, but they can't choose, because choosing requires caring, and caring is a feeling.
Emotion is how evolution solved the objective function problem. Over hundreds of millions of years, organisms that cared about the right things (food, safety, offspring, social bonds) survived. The ones that didn't, didn't. Your sense of what matters has been tested by natural selection for longer than multicellular life has existed. It earned its place.
This is why the "AI will replace humans" narrative gets the situation backwards. The machine handles the middle of the workflow. The human handles both ends: deciding what to work on and judging whether the result is actually good. Both of those tasks require a sense of what matters. The machine doesn't have one.
The nose knows something the model doesn't
This is where my own field becomes relevant, and where the coder-centric worldview breaks down most visibly.
I have spent my career in sensory science. My PhD is in mathematics, but my work is measuring human experience of products through the senses: taste, smell, touch, sight, sound. I started at age 17, writing Fortran code for my father's sensory analysis company. Clients would mail us floppy disks with data, I would run the analysis, and we would mail the results back. I have watched this field evolve for decades, and I can tell you that the chemical senses are where AI's limitations become impossible to ignore.
Smell and taste are the phylogenetically oldest senses. They evolved first because they solve the most fundamental problem any organism faces: is this thing food or poison? Should I approach or avoid? The chemical senses are wired directly into the limbic system, the emotional brain. When you smell something, you feel something before you think anything. The cognitive processing comes later, if it comes at all. That wiring is the deepest and most battle-tested circuit in your nervous system.
Vision and hearing, by contrast, are newer senses. They are more cortical, more abstract, more amenable to the kind of pattern recognition that neural networks do well. You can build an image classifier. You can build a speech-to-text model. These are impressive accomplishments and they work because vision and hearing operate in domains that map relatively well onto digital representation.
But building an artificial nose that captures what a human nose captures is a different kind of problem. The human nose detects molecules, yes. It also triggers memories, emotions, and survival responses that are deeply tied to the body it lives in. A model trained on every wine review ever written can generate a new review that reads beautifully. It has never tasted wine. It doesn't know what it's talking about.
This matters beyond wine criticism. If you are a coder, your work lives in the visual and logical domains where AI is strongest. So when you look at AI capabilities, you see a machine that can do most of what you do. If you are a perfumer, a chef, a brewer, a physical therapist, a midwife, a farmer, or anyone whose work depends on embodied judgment, you see something different. You see a tool that is useful for some things and irrelevant for others.
The coder's mistake is assuming that the world runs on code. Much of the economy does not. It runs on bodies doing things in physical space, making judgments that depend on senses and feelings that evolved long before the cortex existed.
Hans Moravec noticed this pattern in 1988. The things that are hard for machines (sensorimotor skills, contextual interpretation, gut feeling) are easy for humans because evolution spent a long time on them. The things that are easy for machines (arithmetic, search, logical deduction) are hard for humans because we have only been doing them for a few thousand years. The oldest capacities are the deepest, and the deepest are the hardest to replicate.
John Searle made the complementary point with his Chinese Room: a system can manipulate symbols perfectly and still understand nothing. Syntax isn't semantics. Processing isn't experience. The things that can't be digitized aren't edge cases. They are most of life.
More problems, not fewer
Here is the part that most job-loss predictions miss entirely.
When you solve a problem, you do not reduce the number of remaining problems. You increase it. Every solution unlocks new problems that were invisible before. Antibiotics solved bacterial infection and created antibiotic resistance. The internet solved information access and created misinformation, attention economics, and cybersecurity as entire fields. The car solved transportation and created traffic engineering, urban planning, insurance law, and emissions regulation.
The pattern accelerates. It is combinatorial. As AI solves more problems faster, the space of new problems will expand faster than it ever has. We aren't heading toward a world with less work. We're heading toward a world with different work, and a lot more of it.
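A toy model makes the dynamic concrete. All the numbers below are invented: assume a fixed capacity to solve problems each year, and assume every solution reveals some number of new problems that were invisible before. Any spawn rate above 1.0 means the backlog grows despite steady solving.

```python
def backlog_over(years, solve_rate=100, spawn=1.5, start=1000):
    """Backlog trajectory when each solved problem spawns new ones."""
    backlog, history = start, []
    for _ in range(years):
        solved = min(solve_rate, backlog)
        # each solution removes one problem and reveals `spawn` new ones
        backlog = backlog - solved + solved * spawn
        history.append(round(backlog))
    return history

print(backlog_over(5))
# with spawn > 1 the backlog climbs: [1050, 1100, 1150, 1200, 1250]
```

With `spawn` below 1.0 the backlog would shrink, which is the world the job-loss predictions implicitly assume. The antibiotics, internet, and car examples above all suggest the rate runs well above 1.0.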
You don't have to take this on theory alone. Ask anyone who has been working closely with AI for the past year. Everyone I know, myself included, is working more than ever. Not because the tools don't work. Because they work. Suddenly a million problems that used to be impractical to solve are solvable, and once you see that, you can't stop. The backlog doesn't shrink. It explodes. Tuesday's solution creates Wednesday's three new projects. The tool that was supposed to save you time has instead revealed how much there is to do.
This is what the job-loss predictions get backwards. They model AI as a fixed quantity of labor being transferred from humans to machines. The actual experience is the opposite. AI is a lever that makes the accessible problem space vastly larger, and humans rush in to fill it because that is what humans do.
Productivity will go through the roof, and that will be genuinely good. The price of goods and services will drop. Things that are expensive now will become cheap. Things that are impossible now will become routine. It will be wonderful.
But it will not feel like paradise. It will feel like a new normal.
We already live in a world of miracles
A person from the year 1300 transported to the present would think they had arrived in a post-scarcity utopia. Abundant food, clean water on demand, warm shelter, medicine that cures infections, machines that fly through the air, a device in your pocket that contains most of human knowledge. By any medieval standard, we already live in heaven.
It doesn't feel like heaven. It feels like Tuesday. There are bills to pay, kids to raise, health problems, political anxiety, existential dread. The material conditions improved beyond recognition, but the human experience of life recalibrated to the new baseline and found new things to worry about.
The same thing will happen with AI. Twenty years from now, people will live in a world that would seem miraculous to us today. And they will experience it as a new kind of normal, with its own pressures, its own trade-offs, and its own problems that need solving.
That sounds cynical, but it's actually how human progress works. We solve a set of problems, recalibrate, and start working on the next set. The fact that the problems change doesn't mean progress is an illusion. The problems really do get better over time. Medieval problems were worse than modern ones by almost any measure.
But they do not disappear. And solving them requires something that a statistical prediction engine does not have: a sense of which new problems actually matter to the people living in that world.
The job that remains
The word "intelligence" is doing a lot of damage in this conversation. When people hear "artificial intelligence," they hear "artificial version of what I have." That is wrong. What the machine has is a specific and powerful kind of capability: fast pattern matching, statistical prediction, and optimization across large search spaces. What you have is something else: a body, a history, emotions, relationships, mortality, and the ability to care about things.
Those are different kinds of capability, and confusing them is the source of most of the panic about jobs.
The weaver who judged the power loom on its first yard of cloth made an understandable mistake. The loom was crude. It broke threads. The cloth was worse than what a skilled hand could produce. But the loom got better, and the weaver's job changed. It stopped being about producing cloth and started being about deciding what cloth to make, running thirty looms at once, and judging whether the output was good.
The same shift is happening now, to all of us, across every field. The machine handles the production. The human handles the judgment. And judgment, real judgment about what matters, is not something you can automate. It comes from having a life.
Horatio was a brilliant man. His philosophy was good. It just wasn't big enough.
The people building AI are brilliant too. Their tools are extraordinary. But there are more things in heaven and earth than can be captured in a token prediction loop. The chemical senses knew that before the cortex existed. The body knows it still.

John Ennis is a leading expert in sensory science and consumer research, with extensive experience in statistical analysis and product development methodologies.