Introduction: The Bottleneck Shifts from Measurement to Meaning
Life is lived through the senses. Our field exists to make that lived experience measurable, comparable, and actionable. AI now sits inside nearly every step of that pipeline. The usual storyline says the “human sensor” will be replaced by algorithms that can define flavor and predict preference with disembodied precision.
That storyline is wrong.
As AI automates what is easy to formalize, the value of what only humans can do—embodied sensing, cultural interpretation, ethical judgment, and meaning-making—rises. This stance is consistent with long-standing insights: Moravec's paradox reminds us that what is hard for machines (sensorimotor nuance) is often what evolution made easy for animals and humans.
Embodied cognition shows that intelligence is enacted by a living body in an environment. Sensory science—the science of lived experience—is therefore not sidelined by AI; it is elevated by it.
It is worth noting, however, that the Moravec Gap itself is narrowing faster than many predicted. The 2025–2026 period has seen the emergence of Vision-Language-Action (VLA) models—such as Google DeepMind's Gemini Robotics 1.5—that unify perception, language-based reasoning, and physical action in a single architecture. These advances do not invalidate Moravec's insight—the computational cost of sensorimotor skill remains immense—but they do compress the timeline and sharpen the question: as machines close the gap on embodied action, the uniquely human contributions of cultural interpretation and meaning-making become the irreducible differentiator.
What AI Is—And What It Is Not
Modern AI systems are extraordinarily capable statistical engines. Large language and vision models learn regularities from vast corpora and generalize within distribution. They are not perceivers. They have no bodies, interoception, or developmental histories; they possess no first-person point of view.
Searle's Chinese Room reminds us that syntax is not semantics. Damasio shows that feeling and reason are integrated through the body's somatic states. In short: AI can model perception and predict preference; it cannot have experience.
A deeper architectural critique reinforces the experiential distinction. Yann LeCun argues that scaling Large Language Models will not yield human-level intelligence because language describes only a fraction of the world; a four-year-old has absorbed more information about physical reality through vision than the largest LLMs have through text.
Where to Lean on Models
- Search and retrieval
- Prediction and screening
- Triage and prioritization
- Pattern recognition
Where to Insist on Human Arbitration
- Construct validity
- Context and culture
- Ethics and trade-offs
- Meaning-making
Simulating the Senses: Progress and Reality Checks
The past decade has seen remarkable progress in “sensing with silicon.”
From Chemometrics to Deep Learning
Our tradition of multivariate modeling now includes neural networks that relate chemistry, instrumental measures, panel data, and consumer response.
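As a concrete illustration of that shift, the sketch below shows the basic shape of such a model: a small scikit-learn neural network mapping instrumental measures plus trained-panel descriptors onto mean consumer liking. The data are synthetic and the column groupings are illustrative assumptions; this is a minimal sketch of the pattern, not a recommended architecture.

```python
# A minimal sketch, not a production model: a small neural network that maps
# instrumental measures plus trained-panel descriptors to mean consumer liking.
# All data here are synthetic; the column groupings are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
X_instrumental = rng.normal(size=(n, 5))   # e.g., volatiles, texture-analyzer readings
X_panel = rng.normal(size=(n, 8))          # e.g., QDA descriptor intensities
X = np.hstack([X_instrumental, X_panel])
liking = X @ rng.normal(size=13) * 0.3 + rng.normal(scale=0.5, size=n) + 6.0  # roughly a 9-pt scale

X_train, X_test, y_train, y_test = train_test_split(X, liking, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```

The design point is the joining of data streams, not the network itself; a gradient-boosted model or a plain regression would slot into the same pipeline.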
The Principal Odor Map
Developed by a team that began at Google Brain, the POM is a 256-dimensional embedding trained on over 5,000 molecules with perceptual labels. In a controlled “Odor Turing Test” using a 138-word fragrance-wheel lexicon and 320 unique compounds, the POM predicted odor profiles more accurately than the average trained human panelist in 53% of cases.
By early 2026, the POM has moved decisively from laboratory curiosity to industrial platform. Osmo, the Google Research spin-off that developed the map, closed a $70 million Series B and launched “Generation,” a B2B fragrance house powered by what the company calls Olfactory Intelligence.
Electronic Taste
In January 2025, researchers at Penn State University unveiled a graphene-based electronic tongue that surpasses human sensitivity in specific quality-control tasks. When the AI was allowed to define its own assessment parameters directly from raw sensor data—rather than using 20 human-selected metrics—accuracy rose from 80% to over 95%.
Industry Consolidation
NielsenIQ's April 2025 acquisition of Gastrograph AI signals the definitive end of the “experimental pilot” era for sensory AI, integrating predictive flavor modeling into NIQ's BASES Creative Product AI suite across 95+ countries.
Reality Checks
End-to-end “AI-only” systems stumble in messy contexts. High correlation is not comprehension. Demos are not deployments. The lesson is consistent: AI is a potent partner when humans remain firmly in the loop.
AI as Amplifier: What Changes in Practice
The most productive framing is AI as an amplifier of human expertise.
From Static Reports to Living Insight
Cloud pipelines and APIs cut cycle time from weeks to hours; models pre-screen “silicon samples,” prioritize formulations, and surface latent structure across studies.
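A hedged sketch of that pre-screening step follows: a surrogate model fitted to past studies scores a large grid of candidate formulations, and only a small fraction advances to a physical panel. The formulation factors, sample sizes, and 2% cut-off are illustrative assumptions.

```python
# A hedged sketch of "silicon sample" pre-screening: a surrogate model trained
# on past studies scores a large grid of candidate formulations, and only the
# most promising fraction is advanced to a physical panel. All data and
# thresholds here are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Historical formulations (compositional factors) and their panel-measured liking.
X_hist = rng.uniform(size=(150, 4))   # e.g., sweetener, acid, aroma dose, thickener
y_hist = 7 - 3 * (X_hist[:, 0] - 0.6) ** 2 + rng.normal(scale=0.4, size=150)

surrogate = GradientBoostingRegressor(random_state=0).fit(X_hist, y_hist)

# Enumerate a candidate space far larger than any panel could evaluate.
candidates = rng.uniform(size=(5000, 4))
scores = surrogate.predict(candidates)

# Advance only the top 2% to a confirmatory panel study.
top_k = np.argsort(scores)[::-1][: int(0.02 * len(candidates))]
print(f"{len(top_k)} of {len(candidates)} candidates advance to panel")
```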
Biometric Integration
Biometric sensors are becoming standard practice in sensory research: galvanic skin response (GSR), heart-rate variability, and EEG provide objective signals of attention and emotion that self-reports often miss. A logistic regression model developed in early 2026 achieved 84.1% accuracy in predicting olfactory preference from changes in heart rate and respiratory features alone.
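The sketch below shows the general shape of such a model, not the cited study's implementation: a cross-validated logistic regression predicting binary odor preference from physiological change scores. The feature names and data are synthetic placeholders.

```python
# A minimal sketch of the general approach (not the cited study's model): a
# logistic regression that predicts binary odor preference from physiological
# change scores. Feature names and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 120
features = np.column_stack([
    rng.normal(size=n),   # change in heart rate (bpm) during sniff vs. baseline
    rng.normal(size=n),   # heart-rate variability change
    rng.normal(size=n),   # respiration-rate change
    rng.normal(size=n),   # inhalation-depth change
])
preferred = (features @ np.array([0.8, -0.3, 0.5, 0.6]) + rng.normal(scale=1.0, size=n)) > 0

clf = make_pipeline(StandardScaler(), LogisticRegression())
acc = cross_val_score(clf, features, preferred.astype(int), cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```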
Twelve-Month Starter Plan
1. Audit data assets and consent language
2. Build a lightweight pipeline that joins panel, instrumental, and consumer data
3. Add uncertainty estimates (a minimal sketch follows this list)
4. Pilot model-guided pre-screening in one category with pre-registered decision rules
5. Set an internal interpretability standard (drivers, error taxonomy, counterfactuals)
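For step 3, one minimal way to attach uncertainty to every prediction is quantile gradient boosting, sketched below on synthetic data; bootstraps, Bayesian models, or conformal prediction would serve the same purpose.

```python
# A hedged sketch of step 3: pairing each point prediction with an interval via
# quantile gradient boosting. Any method that carries uncertainty to the
# decision works; the data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(300, 3))
y = 6 + 2 * X[:, 0] + rng.normal(scale=0.5 + X[:, 1], size=300)  # heteroscedastic noise

lower = GradientBoostingRegressor(loss="quantile", alpha=0.1, random_state=0).fit(X, y)
point = GradientBoostingRegressor(loss="squared_error", random_state=0).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.9, random_state=0).fit(X, y)

x_new = rng.uniform(size=(1, 3))
print(
    f"Predicted liking {point.predict(x_new)[0]:.2f} "
    f"(80% interval {lower.predict(x_new)[0]:.2f} to {upper.predict(x_new)[0]:.2f})"
)
```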
Narrative-Driven Analysis: Making Meaning Measurable
Humans think, remember, and decide through stories. Decades of work show that narratives—actors, goals, obstacles, turning points—organize memory, transport attention, and shape judgment. If sensory science is the science of lived experience, then narrative is a first-class data type, not an anecdotal garnish.
What's New
Large language models (LLMs) make interactive, scalable, and grounded narrative analysis feasible. With retrieval-augmented generation (RAG), a researcher can ask the model to structure transcripts, diaries, and open-ends into explicit story grammars—with evidence links to the original text—and to propose rival narratives that can then be tested experimentally.
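A minimal sketch of that structuring step appears below: a story-grammar schema plus a prompt that obliges the model to cite verbatim IDs for every tag. The schema fields and the example verbatim are illustrative assumptions, and the LLM call itself (any provider) is deliberately left out.

```python
# A minimal sketch of the story-grammar schema and an evidence-linked prompt,
# assuming transcripts are already split into ID'd verbatims. The LLM call
# (any provider) is intentionally omitted; parse its output into the records
# below for downstream analysis.
from dataclasses import dataclass, field

@dataclass
class StoryTag:
    element: str           # e.g., "setting", "goal", "obstacle", "turning_point"
    text: str               # the tagged content, quoted or paraphrased
    verbatim_ids: list      # IDs of source verbatims that support the tag

@dataclass
class MiniStory:
    participant_id: str
    tags: list = field(default_factory=list)

def build_structuring_prompt(verbatims: dict) -> str:
    """Assemble a prompt that requires a cited verbatim ID for every tag."""
    corpus = "\n".join(f"[{vid}] {text}" for vid, text in verbatims.items())
    return (
        "Tag the following diary excerpts with story-grammar elements "
        "(setting, characters, goals, obstacles, turning points, outcomes, "
        "affect, sensory imagery). Every tag must cite the bracketed verbatim "
        "IDs that support it. If no evidence exists, omit the tag.\n\n" + corpus
    )

# Usage: send the prompt to the LLM of your choice, then parse the response
# into MiniStory/StoryTag records so every claim stays linked to its evidence.
verbatims = {"P07-003": "I opened it on the train, mostly to stay awake before the meeting."}
print(build_structuring_prompt(verbatims))
```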
A Practical Workflow
1. Elicit mini-stories, not just opinions. In home-use tests or EMA, ask participants for the short story of their last use (setting, companions, goal, obstacles, outcome, feelings).
2. Structure with story grammar. Use the LLM (constrained by your corpus) to tag setting, characters, goals, obstacles, turning points, outcomes, affect, sensory imagery—and to cite verbatim IDs for every tag.
3. Surface rival narratives. Request 3–6 data-backed narratives with prevalence estimates, signature cues, enabling contexts, and contradictory evidence.
4. Quantify without flattening. Link narratives to hedonic/JAR/CATA and to instrumental data via mixed models (see the sketch after this list).
5. Design counterfactuals. Turn narratives into testable changes and pre-register criteria; let experiments arbitrate.
6. Report transparently. Summaries should pair a concise narrative synopsis with 2–3 anchor quotes, the enabling cues and contexts, recommended design moves, and an uncertainty band. No evidence, no claim.
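For step 4, a minimal mixed-model sketch follows, assuming each consumer occasion has already been assigned to one of the rival narratives; the data are synthetic and the narrative labels are placeholders.

```python
# A minimal sketch of step 4: a mixed model linking narrative membership to
# liking with a random intercept per consumer. Data and labels are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
consumers = np.repeat(np.arange(60), 4)   # 60 consumers, 4 occasions each
narrative = rng.choice(["reward_ritual", "quick_refuel", "shared_moment"], size=consumers.size)
consumer_effect = rng.normal(scale=0.7, size=60)[consumers]
liking = (6 + 0.6 * (narrative == "reward_ritual")
          + consumer_effect + rng.normal(scale=0.8, size=consumers.size))

df = pd.DataFrame({"consumer": consumers, "narrative": narrative, "liking": liking})
model = smf.mixedlm("liking ~ C(narrative)", data=df, groups=df["consumer"]).fit()
print(model.summary())
```

The same structure extends to JAR, CATA, or instrumental covariates by adding fixed effects; the random intercept is what keeps individual consumers from being flattened into a single mean.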
Design for Causality, Not Just Correlation
Models are best at proposing hypotheses; experiments should arbitrate. Build pipelines that enable counterfactual testing:
- Use target-trial thinking and factory-floor A/B pilots to measure causal effects (e.g., +10% volatile X while texture held constant → Δ freshness).
- Carry uncertainty through to decisions (Bayesian decision analysis; a minimal sketch follows this list).
- Treat context as a manipulated factor: usage setting, co-consumption, time-of-day, and social frame often beat composition in explaining outcomes.
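The sketch below illustrates the second point on a hypothetical factory-floor pilot (+10% volatile X versus control, texture held constant): a simple normal-approximation posterior on the freshness difference feeds a pre-registered decision rule. The data, flat prior, and launch threshold are illustrative assumptions.

```python
# A hedged sketch of carrying uncertainty into the decision: a small A/B pilot
# analyzed with a normal-approximation posterior on the freshness difference.
# Data, prior, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
control = rng.normal(loc=5.8, scale=1.1, size=40)   # freshness ratings, control line
test = rng.normal(loc=6.2, scale=1.1, size=40)      # freshness ratings, +10% volatile X

diff = test.mean() - control.mean()
se = np.sqrt(test.var(ddof=1) / test.size + control.var(ddof=1) / control.size)

# Posterior draws for the mean difference (flat prior, normal approximation).
posterior = rng.normal(loc=diff, scale=se, size=20_000)

# Pre-registered rule: reformulate only if there is at least an 80% chance the
# freshness gain exceeds 0.2 scale points.
prob_worthwhile = (posterior > 0.2).mean()
decision = "reformulate" if prob_worthwhile > 0.8 else "hold"
print(f"P(freshness gain > 0.2) = {prob_worthwhile:.2f} -> {decision}")
```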
Governance for Sensory AI: Treat Models Like Instruments
Sensory AI should be governed as carefully as any lab instrument:
Intended-Use Statement
Where the model may be trusted; where it may not.
Data Sheet
Sources, representativeness, known gaps; explicit handling of perceptual minorities.
Drift Monitors
Seasonality, supply-chain shifts, demographic changes.
Interpretability Pack
Drivers, localized effects, counterfactuals, error taxonomy.
Human Stop-Rules
Thresholds that automatically trigger panel/consumer re-checks; a minimal drift-and-stop-rule sketch follows.
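As one concrete shape for the last two items, the sketch below pairs a population stability index (PSI) drift monitor with an automatic stop-rule; the 0.2 threshold and the data are illustrative assumptions, not a standard.

```python
# A minimal sketch combining a drift monitor with a human stop-rule: the
# population stability index (PSI) compares this quarter's incoming descriptor
# distribution to the training baseline, and crossing a pre-agreed threshold
# queues a panel re-check. Threshold and data are illustrative.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0] -= 1e9   # widen outer edges so out-of-range current values still bin
    edges[-1] += 1e9
    p = np.histogram(baseline, bins=edges)[0] / baseline.size + 1e-6
    q = np.histogram(current, bins=edges)[0] / current.size + 1e-6
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(6)
baseline_sweetness = rng.normal(loc=5.0, scale=1.0, size=1000)   # training-era panel scores
current_sweetness = rng.normal(loc=5.6, scale=1.2, size=300)     # this quarter's scores

drift = psi(baseline_sweetness, current_sweetness)
STOP_RULE = 0.2   # threshold agreed in advance with the sensory lead
if drift > STOP_RULE:
    print(f"PSI = {drift:.2f} > {STOP_RULE}: trigger panel/consumer re-check")
else:
    print(f"PSI = {drift:.2f}: within tolerance")
```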
EU AI Act Implications
The EU AI Act, entering full implementation on August 2, 2026, introduces a four-tier risk classification system with direct consequences for sensory AI. Systems that use biometric data (including physiological responses to olfactory or gustatory stimuli) for identification or categorization fall into the “High-Risk” category, requiring pre-market conformity assessments, mandatory registration, and continuous post-market monitoring.
Responsibilities and Risks: Design for Flourishing
Optimization has a dark side. If we optimize only for short-run liking, we risk hyper-palatable cul-de-sacs and homogenization of taste, eroding cultural variety. Sensory scientists should:
- Broaden objectives. Include durability, nutrition, sustainability, and cultural relevance alongside acceptance.
- Protect attention and well-being. Personalization should respect thresholds and avoid sensory overload.
- Preserve provenance and ritual. The stories around products shape perception; steward them responsibly.
- Champion open ontologies. Shared descriptors and data hygiene will outcompete proprietary vagueness.
A Five-Point Framework for Practitioners
Reassert Domain Authority
You are the expert on human perception. Use AI to widen the search, not to narrow your role.
Be a Data Skeptic
Curate training data; probe bias, coverage, and drift. “Garbage in, gospel out” is the new danger.
Insist on Interpretability
If it can't show its work, it isn't a scientific instrument.
Keep Humans in the Loop
Panels, ethnography, and expert judgment don't go away; they get sharper and more targeted.
Track the Edge
Pilot neuromorphic sensors, event-based data, and embodied evaluation setups now to understand their affordances.
A Research Agenda for the Next Five Years
Model–Human Alignment
When do model-predicted similarities match human perceptual similarity across contexts?
Cross-Modal Binding
How do AI-learned multimodal embeddings relate to human cross-modal correspondences?
Uncertainty for Decision-Making
Which metrics most usefully triage formulations before panel exposure?
Adaptive Panels
Can model-guided, sequential designs reduce N while increasing learning without bias inflation?
Cultural Generalization
Sampling strategies that preserve diversity; transfer testing across cultural palates.
Human–AI Co-Creativity
Interfaces that let experts express tacit constraints and steer generative ingredient/aroma spaces.
Neuromorphic Sensing
Real-world advantages (latency, robustness, power) for quality control and safety.
Tactile Sensing
Robotic tactile systems for texture and mouthfeel characterization.
Conclusion: When Machines Predict, Humans Must Decide
AI can classify, cluster, and forecast with superhuman speed. It cannot care. It has no palate, no memory of a grandmother's kitchen, no stake in whether a product becomes part of a family ritual. That is precisely why sensory science becomes more important as AI spreads. We hold the methods and the judgment needed to link signals to meaning and to steward trade-offs that are technical, cultural, and ethical all at once.
The convergence now underway—latent world models that build physical intuition from observation, neuromorphic hardware that enables milliwatt-level always-on sensing, tactile systems that resolve forces lighter than a paperclip, and persistent AI agents that require new governance frameworks—makes the amplifier relationship between AI and sensory science not just a hopeful metaphor but an operational reality.
The future is not “AI versus panels.” It is AI-elevated sensory science: models do the combinatorics; humans do the sense-making. If we embrace that division of labor—critical, curious, and unapologetically human—our field will shape technologies that deepen, rather than dilute, lived experience.

Dr. John Ennis
President & AI Pioneer, Aigora
With over 30 years in sensory science and a postdoctoral focus on AI, Dr. Ennis is regarded by many as the world's foremost authority on applying artificial intelligence to sensory and consumer science. Author of 50+ publications and 4 books.