

Simulating the Senses: E‐Noses, E‐Tongues & the Principal Odor Map

The past decade has seen remarkable progress in “sensing with silicon.” From chemometrics to neuromorphic hardware, AI is digitizing smell, taste, and touch—with transformative results and important caveats.

By Dr. John Ennis — Aigora

From Chemometrics to Deep Learning

The past decade has seen remarkable progress in “sensing with silicon.”

Our tradition of multivariate modeling (Wold, 1995) now includes neural networks that relate chemistry, instrumental measures, panel data, and consumer response (Chen et al., 2020; Nunes et al., 2023).
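For readers who like to see the progression in code, here is a minimal sketch: a classic partial least squares model alongside a small neural network, both mapping instrumental measurements to a panel score. The data are synthetic placeholders, not results from any real study.

```python
# Sketch with synthetic data: the multivariate tradition (PLS) next to a small
# neural network, both relating instrumental measures to a panel score.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))  # e.g., GC-MS peak areas or texture readings (synthetic)
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.2, size=200)  # panel mean score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("PLS R^2:", round(pls.score(X_te, y_te), 3))
print("MLP R^2:", round(mlp.score(X_te, y_te), 3))
```

On well-behaved, near-linear data the two approaches perform similarly; the neural network earns its keep when the chemistry-to-perception relationship is strongly nonlinear or multimodal.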

E‐Noses, E‐Tongues, and Olfactory Maps

Sensor arrays coupled with pattern recognition classify mixtures; a “principal odor map” can predict odor descriptors from molecular structure with human‐level accuracy on targeted tasks (Tan & Xu, 2020; Lee et al., 2023). Start‐ups leverage these mappings to design novel fragrances (Mullin, 2023).

The Principal Odor Map (POM): A Closer Examination

The Principal Odor Map (POM) deserves closer examination, as it represents the first scalable computational bridge between molecular structure and perceived smell. Developed by a team originating from Google Brain, the POM is a 256‐dimensional embedding trained on over 5,000 molecules with perceptual labels—far surpassing legacy representations such as the Morgan Fingerprint, which relied on binary vectors. In a controlled “Odor Turing Test” using a 138‐word fragrance‐wheel lexicon and 320 unique compounds, the POM predicted odor profiles more accurately than the average trained human panelist in 53% of cases. Critically, the model's power appears to derive from recognizing metabolic similarities—clustering molecules by the minimal enzymatic steps required for biological conversion—rather than surface‐level chemical structure alone (Lee et al., 2023). Current research continues to address important limitations: the POM initially struggled with enantiomers (chiral mirror‐image molecules that produce distinct scents, e.g., the orange versus lemon aromas of limonene) and with concentration‐dependent nonlinearities, where a compound that smells “roasty” at low levels may be perceived as “urine‐like” at high levels.

“The POM predicted odor profiles more accurately than the average trained human panelist in 53% of cases—its power derives from recognizing metabolic similarities rather than surface‐level chemical structure alone.”
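To make the idea concrete, the sketch below treats the POM the way a practitioner would consume it: as a fixed molecular embedding feeding a multi-label head that scores descriptors from a fragrance-wheel lexicon. The embeddings, labels, and five-word lexicon here are random stand-ins; the real map is a trained graph neural network and is not reproduced here.

```python
# Illustrative sketch only: a 256-dimensional molecular embedding (random stand-in
# for POM output) feeding a multi-label head that scores odor descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(1)
lexicon = ["floral", "citrus", "roasty", "green", "musky"]   # stand-in for the 138-word wheel
emb = rng.normal(size=(500, 256))                            # POM-style embeddings (random here)
labels = (rng.random(size=(500, len(lexicon))) < 0.3).astype(int)  # panel descriptor labels

head = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(emb, labels)
probs = np.column_stack([p[:, 1] for p in head.predict_proba(emb[:3])])
for row in probs:
    print({word: round(float(p), 2) for word, p in zip(lexicon, row)})
```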

Osmo: From Laboratory Curiosity to Industrial Platform

By early 2026, the POM has moved decisively from laboratory curiosity to industrial platform. Osmo, the Google Research spin‐off that developed the map, closed a $70 million Series B (led by Two Sigma Ventures, total capital now $130 million) and launched “Generation,” a B2B fragrance house powered by what the company calls Olfactory Intelligence (OI). Generation spans the entire product lifecycle—molecular design, formula creation, manufacturing, and packaging—significantly reducing time‐to‐market and, crucially, democratizing custom fragrance development for indie brands that previously needed millions in investment and months of lead time (Osmo, 2026). Alongside the software platform, Osmo has announced a specialized manufacturing plant in Elizabeth, New Jersey, designed to operationalize “molecular printing,” where digital scent recipes are translated into physical liquid samples. The company has also demonstrated “scent teleportation”—a hardware–software loop in which a GC‐MS sensing unit captures the volatile profile of a sample, uploads it to the cloud for POM mapping, and transmits a precise digital recipe to a remote molecular printer that reconstructs the aroma. The long‐term roadmap includes miniaturizing these sensors to smartphone scale, and researchers are exploring focused‐ultrasound stimulation of the brain's olfactory regions to create scent experiences without physical odorants—a development with profound implications for VR/AR immersion and for individuals with anosmia (Osmo, 2024; ultrasound olfaction research, 2025).
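Osmo has not published an API for this loop, so the following is only a schematic of the capture, map, transmit, and reprint sequence as described publicly; every function and value below is a hypothetical placeholder.

```python
# Schematic only: the publicly described "scent teleportation" loop.
from dataclasses import dataclass

@dataclass
class VolatileProfile:            # what the GC-MS sensing unit measures
    peaks: dict[str, float]       # compound -> relative abundance

@dataclass
class ScentRecipe:                # what the remote molecular printer needs
    components: dict[str, float]  # odorant -> dose

def capture(sample_id: str) -> VolatileProfile:
    """Stand-in for the GC-MS capture step (hypothetical values)."""
    return VolatileProfile(peaks={"limonene": 0.62, "linalool": 0.21, "vanillin": 0.17})

def map_to_pom(profile: VolatileProfile) -> ScentRecipe:
    """Stand-in for cloud-side POM mapping: find a printable mix that lands at the
    same point in odor space. Here the profile is simply passed through."""
    return ScentRecipe(components=dict(profile.peaks))

def print_scent(recipe: ScentRecipe) -> None:
    """Stand-in for the remote molecular printer."""
    for odorant, dose in recipe.components.items():
        print(f"dispense {dose:.2f} parts {odorant}")

print_scent(map_to_pom(capture("coffee_sample_042")))
```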

Electronic Taste: The Graphene E‐Tongue

Parallel progress in gustation is equally striking. In January 2025, researchers at Penn State University unveiled a graphene‐based electronic tongue that surpasses human sensitivity in specific quality‐control tasks. The system uses graphene chemitransistors as artificial taste buds and molybdenum disulfide (MoS₂) memtransistors as an “electronic gustatory cortex” capable of mimicking hunger, appetite, and feeding circuits. When the AI was allowed to define its own assessment parameters directly from raw sensor data—rather than using 20 human‐selected metrics—accuracy rose from 80% to over 95%, enabling the detection of minute differences in coffee blends, milk freshness, and water quality that are often indistinguishable to human tasters (Penn State, 2025). Because the artificial neural network compensates for device‐to‐device variation much as the human brain adjusts to slight differences among taste buds, manufacturing tolerances are relaxed and production costs are significantly lowered.
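The design choice behind that accuracy jump, letting the model learn features from raw sensor output rather than a short list of hand-picked metrics, can be illustrated with a toy experiment. The data and numbers below are synthetic and are not the Penn State results.

```python
# Synthetic illustration: a classifier on hand-picked summary metrics versus the
# same classifier given the raw sensor traces, where the signal is buried in a
# narrow window that the summary metrics wash out.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, t = 400, 200
raw = rng.normal(size=(n, t))                       # raw conductance traces (synthetic)
y = (raw[:, 50:60].mean(axis=1) > 0).astype(int)    # class signal hidden in a narrow window

handpicked = np.column_stack([raw.mean(axis=1), raw.std(axis=1), raw.max(axis=1)])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)

print("hand-picked features:", round(cross_val_score(clf, handpicked, y, cv=5).mean(), 3))
print("raw traces          :", round(cross_val_score(clf, raw, y, cv=5).mean(), 3))
```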

Medical Diagnostics: Scentprints for Disease Detection

The digitization of the chemical senses has found its most socially impactful application in medical diagnostics. AI‐powered electronic noses can now identify disease‐specific “scentprints” in human breath—panels of volatile organic compounds (VOCs) that serve as metabolic fingerprints. Platforms such as OneBreath™ have reduced analysis time from 60 minutes to under 10, and clinical trials by early 2026 report 80–85% accuracy in detecting early‐stage lung cancer, up to 100% sensitivity for asymptomatic COVID‐19, and 85–100% accuracy for breast cancer using a 10‐VOC panel (OneBreath, 2025; various clinical studies, 2024–2026). Osmo, leveraging Gates Foundation funding, has also used olfactory screening to identify over a dozen alternatives to the mosquito repellent DEET. The global digital scent technology market is projected to reach approximately $1.43 billion by the end of 2026, growing at a CAGR of roughly 10.3%, with some forecasts predicting $3.23 billion by 2034—driven by entertainment/VR adoption, healthcare diagnostics, and industrial quality control (Fortune Business Insights, 2026; Precedence Research, 2026).
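At its core, screening from a VOC panel is a classification problem over a low-dimensional “scentprint.” The sketch below shows the shape of that problem on synthetic data; the concentration shifts and the resulting sensitivity and specificity are placeholders, not clinical findings.

```python
# Synthetic sketch of breath screening as classification over a 10-VOC panel.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(3)
n_voc = 10
healthy = rng.normal(0.0, 1.0, size=(300, n_voc))
disease = rng.normal(0.6, 1.0, size=(300, n_voc))   # shifted VOC concentrations (made up)
X = np.vstack([healthy, disease])
y = np.array([0] * 300 + [1] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

print("sensitivity:", round(recall_score(y_te, pred), 3))               # true-positive rate
print("specificity:", round(recall_score(y_te, pred, pos_label=0), 3))  # true-negative rate
```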

Digital Scent Market at a Glance

$1.43B

Projected market by end of 2026

$3.23B

Forecast by 2034

~10.3% CAGR

Growth rate through 2034

80–85%

Lung cancer detection accuracy

Multimodal Representation Learning

Models that bind modalities (text, audio, vision, motion) enable cross‐modal inference and retrieval (Meta AI, 2023; Assran et al., 2023; Bardes et al., 2024; Shu et al., 2024).

A pivotal development in this space is Meta's V‐JEPA 2 (June 2025), which scales the Joint Embedding Predictive Architecture to video understanding with a 1‐billion‐parameter Vision Transformer encoder trained on over 1 million hours of unlabeled internet video. Unlike generative models that waste compute predicting every pixel, V‐JEPA 2 predicts in a continuous latent space, learning the “essence” of physical dynamics rather than surface details. The results are striking: state‐of‐the‐art on motion‐centric benchmarks such as Something‐Something v2 (+10.9% over InternVideo2) and a 44% improvement in action anticipation on Epic‐Kitchens‐100 (Meta AI, 2025). Most relevant for sensory science, the action‐conditioned variant (V‐JEPA 2‐AC) learns to forecast future visual states given specific robotic actions using fewer than 62 hours of unlabeled robot trajectories, achieving zero‐shot pick‐and‐place success rates of 65–80% on unseen hardware—a concrete demonstration that latent world models can translate passive observation into physical manipulation (Meta AI, 2025). The vision‐language extension VL‐JEPA further unifies these modalities, outperforming GPT‐4o and Gemini‐2.0 on world‐prediction benchmarks while reducing decoding operations by roughly 2.85× through a “selective decoding” mechanism that only invokes a text decoder when a significant semantic event occurs (Meta AI, 2025).
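The core JEPA idea, predicting the latent representation of a future frame rather than its pixels, is simple enough to sketch. The toy below is not Meta's code: the encoders are single linear layers, the “EMA target” is just a frozen copy, and the frames are random tensors.

```python
# Toy JEPA-style objective: predict the *latent* of the next frame, not its pixels.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))      # context encoder
target_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
target_enc.load_state_dict(enc.state_dict())                         # stand-in for an EMA target
predictor = nn.Linear(128, 128)                                      # predicts the future latent

frames_t  = torch.randn(8, 3, 32, 32)   # current frames (random stand-ins for video)
frames_t1 = torch.randn(8, 3, 32, 32)   # next frames

z_t = enc(frames_t)
with torch.no_grad():                    # no gradient through the target branch
    z_t1 = target_enc(frames_t1)

loss = nn.functional.mse_loss(predictor(z_t), z_t1)   # prediction happens in latent space
loss.backward()
print("JEPA-style latent prediction loss:", float(loss))
```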

Neuromorphic and Embodied AI

Event‐based sensors and SNN‐like hardware promise low‐latency, low‐power perception closer to biological efficiency (Lee et al., 2021; Schuman et al., 2022), while embodied training emphasizes learning through action (Mishra et al., 2024; Barrett & Stout, 2024).

The neuromorphic landscape has advanced rapidly toward commercial scale. Intel's Hala Point system—packaging 1,152 Loihi 2 processors, 1.15 billion neurons, and 128 billion synapses (roughly owl‐brain complexity)—delivers 20 petaops at up to 15 TOPS/W by exploiting event‐driven processing where neurons consume power only when they spike (Intel, 2024). A critical 2025 breakthrough adapted large language models to “MatMul‐free” architectures (using bit shifts, ternary weights, and element‐wise additions), yielding 3× higher throughput and 10× less energy per token than comparable edge GPUs on multi‐chip Loihi 2 deployments (Intel Labs, 2025). At the consumer edge, SynSense's Speck 2.0 integrates a dynamic vision sensor and spiking neural network on a single chip at under 5 mW and a projected cost below $7 per unit, enabling always‐on perception in AR/VR headsets, smart locks, and healthcare wearables. SynSense reports deployment in millions of IoT devices globally, with automotive partnerships (BMW smart cockpits) and healthcare applications that have improved real‐time sensory feedback in prosthetics by 30% (SynSense, 2025). The neuromorphic computing market reached approximately $9 billion in 2025, with a projected CAGR exceeding 50% through 2034—led by consumer electronics (48% market share), automotive (27%), and industrial IoT (15%) (various market reports, 2025). For sensory science, these chips offer a path to always‐on, milliwatt‐level chemical and tactile sensing at the point of use, fundamentally changing how and where data can be collected.
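The “MatMul-free” idea is easiest to see with ternary weights: once every weight is -1, 0, or +1, a matrix product collapses into signed additions, which is part of what makes the workload a natural fit for event-driven hardware. A simplified illustration, not Intel's implementation:

```python
# Toy illustration: with weights restricted to {-1, 0, +1}, a matrix-vector product
# reduces to additions and subtractions (no multiplications).
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=16)                                          # input activations
W = rng.choice([-1, 0, 1], size=(8, 16), p=[0.25, 0.5, 0.25])    # ternary weights

def ternary_matvec(W, x):
    out = np.zeros(W.shape[0])
    for i, row in enumerate(W):
        out[i] = x[row == 1].sum() - x[row == -1].sum()          # signed additions only
    return out

print(np.allclose(ternary_matvec(W, x), W @ x))                  # matches the dense product
```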

Neuromorphic Hardware at a Glance

1.15 Billion

Neurons in Intel Hala Point

15 TOPS/W

Peak energy efficiency

< 5 mW

SynSense Speck 2.0 power draw

$9B+

Neuromorphic market (2025)

Industrial Cases

Industrial cases echo the same pattern. NotCo's “Giuseppe” uses AI to search vast ingredient spaces to approximate target sensory profiles, with humans steering and validating (Daniels, 2019; Kraft Heinz, 2022). dsm-firmenich reports AI-assisted ideation and selection across millions of formulas, accelerating time-to-market while expanding creative breadth (IMD, 2021). Wine perception modeling shows strong statistical links between chemistry, expert ratings, and consumer outcomes (Capone et al., 2021), and industry reports claim high correlations between model predictions and crowd ratings (Grape & Wine Magazine, 2024). These are powerful tools for screening and hypothesis generation—not replacements for tasting.
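The general pattern, searching a large ingredient space for a blend that approximates a target sensory profile, can be sketched as a constrained least-squares problem. Giuseppe's actual method is proprietary; the attribute vectors below are random placeholders.

```python
# Sketch: find a non-negative blend of candidate ingredients whose weighted sensory
# profile best approximates a target product profile (all numbers hypothetical).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
attributes = ["sweet", "umami", "creamy", "bitter", "roasted"]
ingredients = rng.random(size=(len(attributes), 12))   # columns: 12 plant-based candidates
target = rng.random(size=len(attributes))              # profile of the original product

weights, residual = nnls(ingredients, target)          # non-negative blend proportions
print("blend:", np.round(weights / weights.sum(), 3))
print("profile gap:", round(residual, 3))
```

In practice the human role described above sits on top of this loop: experts review the shortlisted blends, taste them, and feed corrections back into the search.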

Industry Consolidation

The consolidation trend has accelerated. NielsenIQ's April 2025 acquisition of Gastrograph AI signals the definitive end of the “experimental pilot” era for sensory AI, integrating predictive flavor modeling into NIQ's BASES Creative Product AI suite across 95+ countries. The platform simulates aroma, texture, and flavor interactions before a single physical prototype is produced, cutting traditional research time by an estimated 65% and accelerating market entry by up to six months (NielsenIQ, 2025). Meanwhile, Tastry has emerged as a hyper-specialized disruptor in viticulture, using proprietary chemical analysis that captures thousands of data points per wine sample to create digital “palate fingerprints” cross-referenced against a database of 248 million drinking-age palates in the United States. Partnering with Republic National Distributing Company (RNDC), Tastry's Sales Accelerator program matches specific bottles to individual consumer preferences, with some retailers reporting a 200% increase in online sales (Tastry, 2025). The methodology is now expanding into spirits, beer, cannabis, and fragrance—anywhere molecular chemistry can be mapped to human perception.
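Conceptually, once both products and consumers are described in the same “palate” feature space, matching reduces to a similarity lookup. The sketch below uses random vectors; it illustrates the pattern, not Tastry's model.

```python
# Toy recommendation step: score every bottle against each shopper's palate vector.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(6)
wine_fingerprints = rng.random(size=(50, 40))    # 50 bottles x 40 palate features (random)
consumer_palates = rng.random(size=(3, 40))      # 3 shoppers in the same feature space

scores = cosine_similarity(consumer_palates, wine_fingerprints)   # shape (3, 50)
for shopper, row in enumerate(scores):
    print(f"shopper {shopper}: top bottles -> {np.argsort(row)[::-1][:3]}")
```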

Reality Checks

Reality checks abound. End-to-end “AI-only” systems stumble in messy contexts (Kelso, 2024). High correlation is not comprehension. Demos are not deployments. The lesson is consistent: AI is a potent partner when humans remain firmly in the loop.

Despite rapid advances, digital sensing systems face unique challenges. Sensor drift from temperature and humidity requires frequent recalibration; emerging self-calibrating algorithms have boosted correlation coefficients to 0.9 against reference instruments, but maintenance costs can reach 30% of initial hardware cost annually—a barrier for smaller operations (various industry reports, 2026). And the comparison with traditional methods remains nuanced: human panels offer extreme sensitivity and contextual judgment; GC-MS delivers objective but slow laboratory analysis; AI-powered e-noses and e-tongues provide real-time, low-cost objectivity but with variable sensitivity that is still improving. The three approaches are complementary, not interchangeable.

“Human panels offer extreme sensitivity and contextual judgment; GC-MS delivers objective but slow laboratory analysis; AI-powered e-noses and e-tongues provide real-time, low-cost objectivity but with variable sensitivity. The three approaches are complementary, not interchangeable.”
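One common self-calibration pattern, not any specific vendor's algorithm, is to periodically regress drifted sensor readings onto a reference instrument using a handful of shared samples and then apply that correction to new readings. A minimal sketch on synthetic data:

```python
# Synthetic illustration of reference-based recalibration for a drifting sensor.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
true_signal = rng.random(100) * 10                         # reference instrument values
drift = 0.08 * np.arange(100)                              # slow baseline drift over the run
drifted = true_signal + drift + rng.normal(scale=0.3, size=100)

print("correlation before:", round(np.corrcoef(drifted, true_signal)[0, 1], 3))

# Recalibrate using 10 samples that were also measured on the reference instrument.
idx = rng.choice(100, size=10, replace=False)
X = np.column_stack([drifted, np.arange(100)])             # reading + time since calibration
cal = LinearRegression().fit(X[idx], true_signal[idx])
corrected = cal.predict(X)

print("correlation after :", round(np.corrcoef(corrected, true_signal)[0, 1], 3))
```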

Sensing Approaches Compared

Human Panels

Extreme sensitivity, contextual judgment, cultural interpretation. Gold standard for meaning-making but slow and expensive.

GC-MS

Objective chemical analysis with high precision. Slow, laboratory-bound, and cannot assess perception directly.

AI E-Noses/Tongues

Real-time, low-cost, scalable objectivity. Variable sensitivity, requires recalibration, cannot interpret context.

Dr. John Ennis

President & AI Pioneer, Aigora

With over 30 years in sensory science and a postdoctoral focus on AI, Dr. Ennis is regarded by many as the world's foremost authority on applying artificial intelligence to sensory and consumer science. Author of 50+ publications and 4 books.

Frequently Asked Questions

What is the Principal Odor Map?

The Principal Odor Map (POM) is a 256-dimensional embedding developed by a team originating from Google Brain, trained on over 5,000 molecules with perceptual labels. It represents the first scalable computational bridge between molecular structure and perceived smell. In a controlled “Odor Turing Test” using a 138-word fragrance-wheel lexicon and 320 unique compounds, the POM predicted odor profiles more accurately than the average trained human panelist in 53% of cases. Its power derives from recognizing metabolic similarities—clustering molecules by minimal enzymatic steps required for biological conversion—rather than surface-level chemical structure alone.

How accurate are electronic noses?

AI-powered electronic noses have demonstrated impressive accuracy in targeted domains: 80–85% accuracy in detecting early-stage lung cancer, up to 100% sensitivity for asymptomatic COVID-19, and 85–100% accuracy for breast cancer using a 10-VOC panel. However, sensor drift from temperature and humidity requires frequent recalibration. Emerging self-calibrating algorithms have boosted correlation coefficients to 0.9 against reference instruments, but maintenance costs can reach 30% of initial hardware cost annually—a barrier for smaller operations.

Can AI replace human taste panels?

Not entirely. While Penn State’s graphene-based electronic tongue surpasses human sensitivity in specific quality-control tasks (achieving over 95% accuracy when AI defines its own parameters from raw sensor data), human panels offer extreme sensitivity and contextual judgment that digital systems cannot match. The three approaches—human panels, GC-MS laboratory analysis, and AI-powered e-noses/e-tongues—are complementary, not interchangeable. AI is a potent partner when humans remain firmly in the loop.

What is Osmo and how does it use the Principal Odor Map?

Osmo is a Google Research spin-off that developed the Principal Odor Map. By early 2026, Osmo closed a $70 million Series B (total capital $130 million) and launched “Generation,” a B2B fragrance house powered by “Olfactory Intelligence.” The platform enables the entire product lifecycle—molecular design, formula creation, manufacturing, and packaging—significantly reducing time-to-market and democratizing custom fragrance for indie brands. Osmo has also demonstrated “scent teleportation,” a hardware–software loop that captures, digitizes, transmits, and reconstructs aromas remotely.

What is neuromorphic sensing and why does it matter for sensory science?

Neuromorphic sensing uses event-driven processors and spiking neural networks (SNNs) that mimic biological neural processing. Intel’s Hala Point system delivers 20 petaops at up to 15 TOPS/W, while SynSense’s Speck 2.0 integrates a dynamic vision sensor and SNN on a single chip at under 5 mW. For sensory science, these chips offer a path to always-on, milliwatt-level chemical and tactile sensing at the point of use—fundamentally changing how and where sensory data can be collected. The neuromorphic computing market reached approximately $9 billion in 2025 with a projected CAGR exceeding 50% through 2034.

Ready to Put AI to Work?

From e-noses to neuromorphic chips, the sensing landscape is transforming fast. See how Aigora's THEUS platform integrates these advances into a practical research workflow for your team.