

What AI Is—And What It Is Not in Sensory Science

AI can model perception and predict preference; it cannot have experience. Understanding this boundary is essential for every sensory scientist deploying AI tools.

By Dr. John Ennis, PhD — Aigora

Modern AI systems are extraordinarily capable statistical engines. Large language and vision models learn regularities from vast corpora and generalize within distribution. They are not perceivers. They have no bodies, interoception, or developmental histories; they possess no first‐person point of view.

Searle's Chinese Room reminds us that syntax is not semantics (Searle, 1980; Bender & Koller, 2020). Damasio (1994) shows that feeling and reason are integrated through the body's somatic states. My own postdoctoral work on spiking neural networks (SNNs) explored biologically inspired temporal coding (e.g., time‐to‐first‐spike) and neuromodulatory mechanisms to model efficient sensory processing (Maass, 1997). Such models are superb for explaining or predicting aspects of human performance, and for building faster systems, but they do not collapse the gap between simulated spikes and the felt pang of hunger (Levine, 1983). In short: AI can model perception and predict preference; it cannot have experience.
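To make the idea of temporal coding concrete, here is a minimal sketch of time-to-first-spike encoding in Python. The linear latency rule, threshold, and parameter values are illustrative assumptions, not a faithful reproduction of any published SNN model.

```python
# A minimal sketch of time-to-first-spike (TTFS) coding: stronger inputs fire
# earlier, so information is carried by spike latency rather than firing rate.
# The linear latency rule and parameters below are illustrative assumptions.
import numpy as np

def ttfs_encode(intensities, t_max=100.0, theta=0.05):
    """Map normalized stimulus intensities (0..1) to first-spike times (ms).

    Inputs below the threshold `theta` never fire (returned as np.inf);
    stronger inputs fire earlier.
    """
    intensities = np.asarray(intensities, dtype=float)
    times = t_max * (1.0 - intensities)      # high intensity -> short latency
    times[intensities < theta] = np.inf      # sub-threshold: no spike
    return times

# Example: three hypothetical "odorant concentration" channels of increasing strength
print(ttfs_encode([0.02, 0.4, 0.9]))         # [inf, 60., 10.]
```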

The Practical Governance Challenge

This philosophical boundary is now meeting a practical governance challenge. As AI systems transition from stateless, one‐off tools toward persistent agents that maintain state, remember past interactions, and plan across extended horizons, the traditional binary of “natural person” versus “mere thing” proves insufficient. A pragmatic framework proposed by Leibo et al. (2025) draws on property‐law theory (Schlager & Ostrom, 1992) to “unbundle” legal personhood into a flexible set of rights and obligations—contractual capacity, persistent identity, liability, and transparency duties—that can be assigned à la carte to match an AI's actual societal role. Under this view, the question shifts from what an AI is to how it can be identified and which obligations are useful to assign it in a given context. Maritime law, which has long personified the vessel itself to resolve jurisdictional disputes, offers a working precedent: a judgment against an “ownerless” AI could seize its operational capital or arrest its runtime, much as a court can arrest a ship (Leibo et al., 2025). For sensory science, this matters because AI tools increasingly operate as persistent, adaptive agents inside formulation pipelines and consumer‐insight platforms; clear governance of their identity, accountability, and decision boundaries is essential to maintaining scientific trust.
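To make the "unbundling" idea concrete for practitioners, the sketch below represents an agent's bundle of rights and obligations as a simple, auditable configuration object. The field names and example values are hypothetical illustrations chosen for this article; they are not terms drawn from Leibo et al. (2025).

```python
# A toy illustration of "unbundled" personhood: rather than a binary
# person/thing status, each agent carries an explicit bundle of rights and
# obligations that can be assigned a la carte. All fields and values here are
# hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentBundle:
    agent_id: str                          # persistent, verifiable identity
    contractual_capacity: bool = False     # may the agent enter contracts itself?
    liability_holder: str = "operator"     # who answers for the agent's actions
    transparency_duties: list = field(default_factory=list)

# Example: a persistent formulation-screening agent inside an R&D pipeline
screening_agent = AgentBundle(
    agent_id="formulation-screener-v2",
    contractual_capacity=False,
    liability_holder="sponsoring organization",
    transparency_duties=["log all recommendations", "disclose model version"],
)
print(screening_agent)
```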

A Deeper Architectural Critique: JEPA

A deeper architectural critique reinforces the experiential distinction. Yann LeCun argues that scaling Large Language Models will not yield human‐level intelligence because language describes only a fraction of the world; a four‐year‐old has absorbed more information about physical reality through vision than the largest LLMs have through text (LeCun, 2022). His Joint Embedding Predictive Architecture (JEPA) program proposes an alternative: rather than generating tokens or pixels, JEPA learns abstract representations of world dynamics in a continuous latent space, predicting the “essence” of physical states rather than their surface details. This is not merely an algorithmic refinement; it is a move toward systems that build causal world models—closer to how biological organisms learn—yet still without subjective experience. The lesson for sensory scientists is twofold: AI is gaining genuine physical intuition (which makes it a more powerful partner for modeling perception), but it remains without a first‐person point of view (which makes human judgment irreplaceable for meaning and ethics).
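The contrast between generating surface details and predicting in latent space can be shown with a toy sketch. The untrained random linear encoders and predictor below exist only to show where the loss is computed; this is a schematic of the joint-embedding principle, not LeCun's actual architecture.

```python
# A numpy-only sketch of the joint-embedding idea: instead of reconstructing
# the raw target signal, we compare a *predicted latent* with the encoded
# latent of the target. Encoders and predictor are untrained random linear
# maps, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LATENT = 64, 8                           # raw signal size, latent size

W_ctx  = rng.normal(size=(D_LATENT, D_IN))       # context encoder
W_tgt  = rng.normal(size=(D_LATENT, D_IN))       # target encoder
W_pred = rng.normal(size=(D_LATENT, D_LATENT))   # predictor in latent space

x = rng.normal(size=D_IN)    # observed context (e.g., current state)
y = rng.normal(size=D_IN)    # future / masked target

z_ctx = W_ctx @ x            # embed the context
z_tgt = W_tgt @ y            # embed the target
z_hat = W_pred @ z_ctx       # predict the target's *embedding*

# A generative loss would compare a reconstruction to y element-by-element;
# the joint-embedding loss compares abstractions instead.
latent_loss = np.mean((z_hat - z_tgt) ** 2)
print(f"latent prediction error: {latent_loss:.3f}")
```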


The Hauntological Dimension

There is also a hauntological dimension worth acknowledging. Every response generated by a large language model is, in a sense, a séance—summoning fragments of the dead: the words of authors long gone, obsolete ideologies, and cultural ephemera compressed into parameter weights (Derrida, 1994; Fisher, 2014). The AI training corpus is not a neutral database but a “hyper‐rhizome” linking disparate ideas in a decentralized tangle of parameters. McLuhan's “rear‐view mirror effect” applies: we interpret new technologies through the lens of old ones, and AI, as currently constituted, treats the past as the most reliable guide to the future. This recursive dynamic can reinforce existing biases and suppress true novelty. Sensory scientists should be alert to this when deploying LLMs for narrative analysis or consumer insight—the model's fluency can mask the fact that it is remixing the archive, not encountering the world anew.

The Practical Distinction

This distinction matters practically. It tells us where to lean on models (search, prediction, triage) and where to insist on human arbitration (construct validity, context, culture, and ethics); a brief sketch of this division of labor follows the lists below.

Where to Lean on Models

  • Search and retrieval
  • Prediction and screening
  • Triage and prioritization
  • Pattern recognition at scale

Where to Insist on Human Arbitration

  • Construct validity
  • Context and culture
  • Ethics and trade‐offs
  • Meaning‐making
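As a concrete illustration, the sketch below uses a simple regression model to screen hypothetical candidate formulations and triage the most promising ones to a human panel. The data, features, and shortlist size are invented for illustration; any real pipeline would differ.

```python
# A minimal sketch of the division of labor above: a model handles prediction
# and triage across many candidates, while final judgment is explicitly routed
# to human reviewers. All data and parameters are invented for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

# Hypothetical training data: formulation variables -> panel liking scores
X_train = rng.uniform(size=(50, 3))            # e.g., sweetness, acidity, aroma dose
y_train = 5 + X_train @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.3, 50)

model = Ridge(alpha=1.0).fit(X_train, y_train)

# Triage: screen 200 new candidates, surface the top few for human evaluation
X_candidates = rng.uniform(size=(200, 3))
predicted_liking = model.predict(X_candidates)
shortlist = np.argsort(predicted_liking)[::-1][:5]

for idx in shortlist:
    # The model prioritizes; a human panel still judges meaning, context, ethics.
    print(f"candidate {idx}: predicted liking {predicted_liking[idx]:.2f} -> send to panel")
```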

Key Concepts Referenced

Searle's Chinese Room (1980): A thought experiment demonstrating that syntactic manipulation of symbols does not constitute semantic understanding.

Damasio's Somatic Marker Hypothesis (1994): Feeling and reason are integrated through the body's somatic states—you cannot separate cognition from embodiment.

LeCun's JEPA (2022): Joint Embedding Predictive Architecture—learning abstract world dynamics in continuous latent space rather than generating tokens.

Pragmatic AI Personhood (Leibo et al., 2025): A framework for unbundling legal personhood into flexible rights and obligations for autonomous AI agents.

Hauntology (Derrida, 1994; Fisher, 2014): LLMs recombine archived cultural fragments—they remix the past, not encounter the world anew.

Dr. John Ennis

President & AI Pioneer, Aigora

With over 30 years in sensory science and a postdoctoral focus on AI, Dr. Ennis is regarded by many as the world's foremost authority on applying artificial intelligence to sensory and consumer science. Author of 50+ publications and 4 books.

Frequently Asked Questions

Can AI replace human sensory panels?

No. AI systems are statistical engines that learn regularities from data and generalize within distribution. They have no bodies, interoception, or developmental histories and possess no first-person point of view. AI can model perception and predict preference, but it cannot have experience. Human sensory panels remain essential for construct validity, contextual judgment, cultural interpretation, and meaning-making.

What is the difference between AI prediction and human perception?

AI prediction is based on statistical pattern recognition across large datasets—it correlates inputs with outputs. Human perception, by contrast, is embodied: it integrates somatic states, emotional context, developmental history, and cultural meaning through a lived, first-person experience. As Searle’s Chinese Room argument illustrates, syntax (what AI does) is not semantics (what humans do). AI can predict what you might prefer; it cannot understand why you prefer it.

What is the Chinese Room argument and why does it matter for sensory science?

The Chinese Room is a thought experiment by philosopher John Searle (1980) showing that a system can manipulate symbols according to rules without understanding their meaning—syntax is not semantics. For sensory science, this means that even the most sophisticated AI model that predicts flavor preference or odor descriptors does not understand taste or smell. It processes data patterns, not experiences. This distinction tells practitioners where to trust AI (search, prediction, triage) and where to insist on human arbitration (construct validity, context, culture, ethics).

What is JEPA and how does it relate to sensory science?

JEPA (Joint Embedding Predictive Architecture) is Yann LeCun’s proposed alternative to large language models. Rather than generating tokens or pixels, JEPA learns abstract representations of world dynamics in a continuous latent space, predicting the “essence” of physical states rather than surface details. For sensory science, JEPA is significant because it moves AI toward systems that build causal world models—closer to how biological organisms learn—making AI a more powerful partner for modeling perception. However, JEPA systems still lack subjective experience, which keeps human judgment irreplaceable for meaning and ethics.

How should sensory scientists think about AI governance and legal personhood?

As AI systems transition from stateless, one-off tools toward persistent agents that maintain state, remember past interactions, and plan across extended horizons, the traditional binary of “natural person” versus “mere thing” proves insufficient. A pragmatic framework proposes “unbundling” legal personhood into flexible sets of rights and obligations—contractual capacity, persistent identity, liability, and transparency duties—assigned to match an AI’s actual societal role. For sensory science, this matters because AI tools increasingly operate as persistent, adaptive agents inside formulation pipelines; clear governance of their identity, accountability, and decision boundaries is essential to maintaining scientific trust.

Ready to Put AI to Work?

Now that you understand what AI can and cannot do, see how Aigora's THEUS platform puts these principles into practice—amplifying your sensory science expertise with responsible, interpretable AI.