
Hi, I'm Dr. John Ennis, CEO at Aigora. In this episode, I had a wonderful conversation with Dr. Damir Torrico, an Assistant Professor at the University of Illinois Urbana-Champaign. I particularly enjoyed our discussion on why curiosity is so vital, and specifically how "always asking" and pushing back on AI outputs is the key to achieving real fluency with these new tools. I hope you enjoy this conversation as much as I did, and remember to subscribe to AigoraCast to hear more conversations like this one in the future!
Dr. Damir Torrico is an Assistant Professor at the University of Illinois Urbana-Champaign, specializing in the intersection of food properties, human physiology, and consumer behavior. His research focuses on flavor perception, emotional responses to food, and salt/sugar reduction strategies.
With a global academic background spanning Louisiana State University, the University of Melbourne, and the University of Costa Rica, Damir utilizes emerging technologies like virtual reality and biometrics to capture authentic consumer insights. His work aims to bridge the gap between laboratory testing and real-world eating experiences to create foods that are both nutritious and highly palatable.
Remember to subscribe and leave a positive review if you like what you hear!
John Ennis: Okay, welcome back everyone to another episode of AigoraCast. Today I'm very happy to have Dr. Damir Torrico on the show. Dr. Torrico is an assistant professor at the University of Illinois Urbana-Champaign, specializing in the intersection of food properties, human physiology, and consumer behavior. His research focuses on flavor perception, emotional responses to food, and salt/sugar reduction strategies. With a global academic background spanning Louisiana State University, the University of Melbourne, and the University of Costa Rica, Damir utilizes emerging technologies like virtual reality and biometrics to capture authentic consumer insights. His work aims to bridge the gap between laboratory testing and real-world eating experiences to create foods that are both nutritious and highly palatable. So, Damir, welcome to the show.
Damir Torrico: Hi John, thank you for having me here, and I'm glad to have this conversation with you.
John Ennis: No, yeah, definitely I think it'll be great. We had a really nice conversation in our pre-call, so that was great. All right, well Damir, maybe we can start with a little bit of your background. You know, it's funny, but you were at Louisiana State and I collaborated with Witoon a fair amount actually. But maybe you can take us through the course of your career and then we can get into some of the research that you presented at Pangborn that I thought was quite interesting as well.
Damir Torrico: Yeah John, I'm happy to talk about that. Just to mention that I'm originally from Bolivia, a country in South America. My first language is Spanish, so I've been learning English maybe since I was 10, but if something is not clear you can tell me to repeat it, I'm happy to repeat that.
I actually moved to different places when I was studying. For example, I did my undergrad in Honduras at a university called Zamorano. My background, my BS, was focused heavily on agriculture, but also food science, and that's where I was inspired to try to improve foods and see if I could develop this as a career. But then I moved to the United States. I got an opportunity to do a master's and then a PhD with Dr. Witoon Prinyawiwatkul. Karen was there — she was the senior student in the lab, and I was working with her on the tetrad project. I remember going to the kids' classes and asking them which of these three is the sweetest. So I think that's the paper you wrote with her, right? The tetrad and its validation. But yeah, I think that was a great experience.
I got my PhD. My research was focused on taste perception. I did a lot of experiments in understanding bitterness and saltiness in oil-in-water emulsion systems. The good thing about my lab is that we do kind of a sequence type of assessment — we did discrimination tests, descriptive analysis, consumer tests — and we got some understanding about how these physical properties of foods affect saltiness and bitterness.
But then I moved to Australia to do a postdoc at the University of Melbourne. When I was there I was working mainly on some consumer tests, but then we got into a group that was interested in understanding physiological responses in consumers — for example, facial expressions, body temperature, and those kinds of cues that sometimes we don't assess in traditional sensory setups. We developed a system in which we could actually measure this. Nowadays we find different software that can give you that. But in Melbourne we were trying to investigate if we could find some kind of correlation between these biometrics and consumer liking or other types of consumer assessments.
And in Melbourne I was also exposed to machine learning. We worked with a group that was really invested in trying to apply machine learning algorithms — mainly artificial neural networks — to predicting sensory responses based on consumer assessments. I think that was the starting point where I understood maybe we can use novel techniques to see and to understand consumers.
I moved to New Zealand, and I continued with this work there. And then recently I moved to Illinois. I'm an assistant professor here at the University of Illinois. And yeah, the AI project about the brownies, which is now published in a journal — it was an idea I had at the time ChatGPT was really exploding, right, in terms of people using it and finding ways to apply it. I realized that there is potential there — and for sure your company is invested in this as well — but what I'm interested in right now, and we can continue this conversation, is understanding the connections or the differences you can find between what these large language models can give you compared to consumers. If you want I can give you more details about the paper, but I think that's kind of a brief overview of what I've been doing so far. Sorry for talking a lot.
John Ennis: No, it's good, yeah, talk all you want, it's great. We'll get to your ChatGPT research, which was really interesting. But first, it's kind of a neat life you've had, that you've gone all around the world. What have you noticed as you have traveled the world in differences in both the different populations you've been testing with, attitudes towards food, and then attitudes towards research also? How are people, as consumers, approaching the experience of food? And then how are researchers maybe differing in their approaches to understanding people's experiences of food? What have you noticed from a global perspective?
Damir Torrico: Yeah, that's a great question. I think we still kind of have a common goal, if we talk about pure consumer research. Understanding consumers is a complex task. We know that there are different layers in consumers — we talked about this in our previous conversation about emotions. That is something different from machines or other instruments you can use to assess food quality: consumers have these layers of complexity. I think all the groups I've been working with were trying to understand what is in these layers that we can measure. And maybe there is something that we cannot measure. At the end of the day we're still trying to understand the human brain, right? How it works. I think the idea is that all of these groups want to approximate something that can give you an idea of how consumers behave.
Having these goals is good — it gives you a vision of what you want to do in the future with your research. But it was also fine because I've been working with people from different parts of the world, and I know that can enrich a little bit of the discussions you can have when you want to solve these types of issues that sometimes don't have a really good solution. Sometimes you need to break things apart and try to understand individual pieces that may be connected to this bigger goal, which is understanding consumers. That was really interesting for me — approaching these different goals and approaching some solutions working with these different groups.
And in terms of consumers, yes — one of the goals, for example, in the research I was doing in Melbourne related to the increasing Asian population in Melbourne right now, and we were working with a company, Mondelez, that was trying to assess this as well in terms of chocolate products. We published a couple of papers about the differences you can find between local Australians and the Asian population there. I think these are issues you need to address. Changing demographics are something that is going to be part of these assessments. I know we're adding more complexity to this topic, but at the end of the day, understanding the little pieces and then putting everything together and trying to understand the big picture of what a consumer is — that's really the work.
John Ennis: Yeah, it's really interesting. I grew up with a kind of — I came from Canada, I was born in Canada, then we moved to Ireland, then I moved to the United States. So I grew up basically as an American from the time I was five. And it was really only after my grad school — I took a seven-week trip to Europe — that I started to realize there are other approaches to life. I was very much indoctrinated in the American way of living.
I think American attitudes toward food are different from European attitudes toward food, and I would say the same for Australian and Asian attitudes. And I think that has an impact on the research that happens too. If consumers in America are not particularly concerned with the sensory experience compared to, say, European consumers, then that affects American research. And it affects European research.
The older I get, the more I realize that human beings are actually very special. We're all different, we all grew up in different cultures, but then we have individual differences. And I think that there's a tendency right now for people to look for silver bullet solutions — that there's AI and it's just going to give you answers. It's way more complicated than that. Humans are very complex. And I think AI is a really powerful tool. But it's only a tool. It's something that can help us do our research, but it can't replace our research.
So maybe that's a good springboard into the next topic, which is your research on brownies with ChatGPT. What kind of inspired that, what did you find? I thought that was quite interesting research. So please, if you could tell us about it.
Damir Torrico: Yes, yes. Well, everything started in 2020, '21, '22, when we had the release of ChatGPT and people using it and finding ways to understand these large language models and prediction. With my base that I have in Melbourne about machine learning, I understood that — I mean, I know artificial intelligence as a term by itself is not new, we've been trying to figure this out for several years now — but the way that we are implementing this is increasing right now. I don't have the statistics, but this is kind of an exponential increase that we're seeing. And we talked about this before: similar to the arrival of the internet or any new technology, there's going to be an adoption phase where everybody will be adopting this and finding ways how to use it.
The connection with sensory sometimes can make sense because, at the end of the day, artificial intelligence is trying to mimic human intelligence based on algorithms, based on processes. But understanding differences as well, I think, is important because, similar to any traditional modeling system — let's say a simple regression analysis — you're still trying to predict something, but not all models are correct; some models are more useful than others. We know there's going to be some limitations among those models as well.
My idea with this paper was to understand the responses of this GPT release, which I think at that point was 3.5 — whether, if we prompted it with different food systems (in this case I chose brownies, because I think it's a good product to work with; it's sweet, and it's a well-known product), the GPT could tell us something about the product in terms of description. It was a simple type of thought experiment that I had. Because everybody's learning right now how to use it, at that time I was thinking about using a kind of prompt-engineering approach — repeating the prompt again and again to get results that I could measure.
So what I did was create these recipes with different ingredients for brownies. Some of them were very common — like flour, eggs — but some others had changes. For example, instead of oil, we used fish oil, and other types of changes that for real product development sometimes don't make sense because you're going to affect the flavor. But I was trying to see whether the GPT would still give an answer if you gave it these uncommon ingredients.
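The setup Damir describes — building one prompt per recipe variant and collecting the model's free-text descriptions for later analysis — can be sketched roughly like this. The ingredient lists, prompt wording, and use of the OpenAI Python client are illustrative assumptions, not the paper's actual protocol:

```python
# Sketch of a repeated-prompting experiment: build one prompt per recipe
# variant, then (in a real run) send each prompt to a chat model several times
# and save the replies. Ingredients and wording here are illustrative only.

def build_prompt(ingredients):
    """Turn a list of brownie ingredients into a single description prompt."""
    listing = ", ".join(ingredients)
    return (
        f"A brownie is made with the following ingredients: {listing}. "
        "Describe the expected sensory characteristics of this brownie."
    )

# One common recipe and one uncommon variant (fish oil replacing vegetable oil).
recipes = {
    "control": ["flour", "sugar", "eggs", "cocoa", "vegetable oil"],
    "fish_oil": ["flour", "sugar", "eggs", "cocoa", "fish oil"],
}

prompts = {name: build_prompt(ings) for name, ings in recipes.items()}

# In a real experiment, each prompt would be sent to the model repeatedly, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompts["fish_oil"]}],
#   )
# and the reply texts collected for sentiment analysis.

for name, prompt in prompts.items():
    print(name, "->", prompt)
```

The point of fixing the prompt template is that the only thing varying between runs is the ingredient list, so differences in the model's descriptions can be attributed to the ingredients rather than the phrasing.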
I ran the experiment. I got some good results. The key takeaway message — and this is something that may change depending on how you change the system — was that most of the responses were positive even if we had some uncommon and untraditional ingredients. This is something I teach in my sensory analysis classes: humans are sometimes affected by what is called hedonic asymmetry. With foods in general — and this is something that King and Meiselman in 2010 were discussing when they were trying to understand emotions in foods — we tend to have positive emotions. By evolution, I mean, we tend to have a bias more toward the positive than the negative, because food is kind of a utilitarian type of product. So whatever we feed these generalist models, they will tend to have this as well — more positive responses than negative. The paper was discussing that we may get some really positive responses for uncommon ingredients because humans also act like this.
That actually gave me more questions than answers, because you have this first idea and then you think about it — okay, maybe humans act in a different way, or maybe we can change the systems and try to see if these two systems are related or not. That's why I'm continuing to do research. In the university I have one student right now working on a brownie project, and we are doing this with humans first, and then getting the data of descriptive analysis will give us insights into how we can compare this with GPT or other types of systems that maybe are more fine-tuned to what we need. But yeah, that was the message and the idea behind that project. And I know there are some other groups doing really good research in this area as well. I think it's an implementation phase that we are facing right now.
John Ennis: Well, first off, you're doing the right thing, which is what everybody should be doing — experimenting with the tools. Because this is brand new technology, and it clearly is going to have a big impact on society. I mean, it already is. But this is what I tell the students in the course — I just finished the first cohort of the AI course — that "only the doer learneth." That's a quote from Nietzsche. "Only the doer learneth." You can sit around and think about what you think LLMs are going to do, or you can try, and you can learn, and you can experiment.
I think this is an almost alien sort of intelligence that's come along. It's incredibly useful, and sometimes it's magical, and sometimes it's bafflingly wrong, right? And I think you found that with the fish oil. So maybe you can talk a little bit about the fish oil, because I thought that was really interesting. What happened with the ChatGPT when you put fish oil in the brownies?
Damir Torrico: So yeah, fish oil — and we also did something more extreme: we added larvae meal, an insect-based type of material. With these uncommon ingredients, the responses were not that different from those for the common ingredients alone. So we ran what is called a sentiment analysis on the responses that ChatGPT was giving us. Sentiment analysis relies on the descriptions ChatGPT gives you, and it produces an overall polarity score — positive or negative, depending on the words used to describe the products.
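Lexicon-based sentiment analysis of the kind Damir mentions can be illustrated with a toy scorer. The word lists below are a made-up miniature lexicon for illustration; real studies would use an established tool such as NLTK's VADER or TextBlob:

```python
# Toy lexicon-based sentiment scorer: count positive and negative words in a
# description and report a net polarity score. The tiny lexicons below are
# illustrative only; real work uses established lexicons (e.g. VADER).

POSITIVE = {"rich", "fudgy", "moist", "delicious", "pleasant", "sweet"}
NEGATIVE = {"fishy", "off-putting", "bitter", "rancid", "unpleasant"}

def polarity(text):
    """Net sentiment: +1 per positive word, -1 per negative word, normalized
    by the total word count so longer texts are comparable to shorter ones."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

print(polarity("A rich, fudgy, delicious brownie with a moist texture."))  # > 0
print(polarity("A fishy, unpleasant brownie with a rancid aftertaste."))   # < 0
```

A description dominated by words like "rich" and "fudgy" scores positive even if one or two off-notes appear, which is one mechanical reason a positively skewed vocabulary in the model's output pushes sentiment scores upward.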
Surprisingly, what we found is that uncommon or non-traditional ingredients — like fish oil or larvae or other kinds of gross ingredients that humans can find off-putting — were actually rated positively by the machine. Again, this comes back to the idea that sometimes the data is skewed to the positive side. It may also be an imbalance between positive foods and negative foods in the data. If you think about the way we work in product development, we work to create better foods, not foods that give negative experiences, so most of the time we're trying to see that positive side. But negative types of foods are sometimes difficult to find, right? Naturally, food experiences tend to skew positive rather than negative. That's also something we're trying to do with this new study — vary this, include some really uncommon experiences, and see if those are connected to what the machine can give us.
John Ennis: Yeah, I think it was really good research, because an obvious criticism might be, "Well sure, it was GPT 3.5 or whatever" — someone might say something like that. But my view on that is it doesn't matter. Because mechanically these models — maybe they become more complex and they get better at reasoning and they can go deeper into the space of patterns because they're bigger — but at some point you're always going to have this problem that you ran into, I think.
For one thing, the training data is going to have bias in it. I have the same thing working with perfumes. If you start to get descriptions of perfumes, there is a huge over-representation of advertising material in the corpus that these models are trained on, and you will get descriptions of perfumes that sound like magazine catalogs. You have to do a lot of work to get the magazine-catalog language out of perfume descriptions if you're trying to get realistic ones, because that's just what the models are trained on.
And it's the same way with ChatGPT or Gemini or whatever. They're trained on recipes that are good, for the most part. You've got blog posts where people are talking about how great the recipe is, you've got YouTube comments, you've got books. But generally speaking, like you said, humans talk about food in a positive way, and there's an extra bias that this sort of data the language models are trained on is going to be selected for positive experiences. Then you have the reinforcement learning, where these models are tweaked so that they'll behave better in common situations — and that again takes you further away from these unusual situations.
So I do think it's just the nature of the beast. It's very important that when people work with language models, they don't see them as crystal balls, and they don't see them as search engines. They're pattern-matching machines, and they're very useful, but they should not be trusted just as they are. You always need to use them as part of a cycle where you get a result, you evaluate it, you revise, you change, you rebalance, whatever. But I really don't see any way to get humans out of the loop fully — especially in the work we do.
I think your research was a really good cautionary tale of people overestimating how the models will do on unusual situations. So for that reason, I thought it was really very good research, and I can't wait for part two. I think part two will be also very interesting. Looking forward now, you said you're planning follow-up research. What are some other topics that you're interested in these days? What else is on your mind?
Damir Torrico: Yes, I'm continuing with my research in sodium reduction and sugar reduction. I mean, that is challenging, right? As humans, we crave sugar, we crave salt, fat. And you still have to understand humans. Even though we know that health is important and the way that we eat affects our health and our activities, humans behave in a very complex way, and we still don't fully understand how, for example, healthy products can be assimilated by us.
A simple example is reducing sugar in foods. I have a student working right now on taste perception, understanding gradual reductions of sugar in solutions and emulsions. We're using pancakes — I don't know why my lab is always working with baked products; I think because we like to bake, maybe — but we're trying to see, across different model systems, whether the sugar reduction affects the perception of the products from a descriptive analysis point of view. At the same time we're going to run a consumer test and understand thresholds of reduction — what is acceptable and what is not. Into this mix, again, the layers of emotions and consumer behavior are still at play and are something we need to think about when we use this type of simple model.
I've been working a little bit in plant-based products — not much, but again, my idea is using these models just as a way to understand taste perception. I'm really fascinated about what is behind that, that is really changing the way that we eat foods. The connection to food and human processing is something we still need to think about how to discover, right?
John Ennis: No, I completely agree. I think it's not a coincidence that AI has the most trouble with the chemical senses. It does very well with text. It actually does very well with vision. Audio does well, video does well. But when you start to get into the chemical senses, the models start to struggle. Because it's such a complex area. And I actually think that's one of the defining differences between human and machine intelligence — it's tied up with the emotion and meaning and the chemical senses. The things that evolution over a long period of time has been developing and refining and responding to.
The chemical senses are the oldest, right? As soon as you had organisms on planet Earth, they were swimming toward or away from chemical gradients because they were poisonous or nutritious or whatever. That's in there really deeply. So I do think that actually our field is very well positioned right now.
Take coding, for example — people who write code for a living. That's basically over. They don't want to hear it, and they argue with me on LinkedIn, but it is over. They can still do things, they can do engineering, they can go up a level, they can become builders, build systems. But the job of getting paid money to write code is now not a job. It just doesn't exist anymore, at least — and it'll be fully gone within just a very short period of time.
But I think our field is going to become more important. That we are the ones who are studying the thing that makes humans different from machines. The machines can support us in that task, but I think a machine can't really understand what it is to be a human. So I think it's really good work you're doing. There's a certain amount of mental manual labor in our jobs that will be done by AI. And that's great, because I didn't like filling out forms anyway. I didn't like going through huge files of reports anyway. So great. But I do think it's a really exciting time for sensory. And I'm happy that we've met, because I feel like you're a fellow traveler on this journey that we're all on.
Damir Torrico: Yes. Yeah, we've been going on different paths, but it's good to understand this from different points of view, and it's good to have this conversation. Anyway, we will try to see the best ways to improve our understanding of humans, and hopefully we can have some tools that we can use as a way to understand this.
John Ennis: Yeah, I think one of the challenges AI has is it is simultaneously over-hyped and under-hyped. There are people out there making all sorts of grand promises — especially the CEOs of these big companies. They're trying to raise money, and they're also trying to scare people into agreeing to regulations so that they'll have monopolies. They have their own motives, right? They're out there making promises that I think AI really can't deliver on. But at the same time, AI really is great at other things. And you get this weird tension where people are reacting to the promises, and so they are underestimating the value of AI in some ways as a reaction to the over-promising that's happening. I think the only way to form your own opinion about AI is to use it and to be in the trenches, like you are, getting your own experience and making up your own mind. So that's what I would encourage people to do — experiment.
Damir Torrico: Yeah, that's true. Once you use it, you know how it works, you know the results. At the end of the day, you have to compare the results that you have with what you were expecting at the beginning. I know that right now there's a lot of noise, but I think this will disappear over time. We will try to learn more about how to better use these systems.
John Ennis: Yeah, that's really good. Okay, great. Let's wrap up with the usual question, which is: what advice do you have now for younger researchers? Someone's coming into food science, coming into sensory — what are the important topics in your mind, what would you encourage them to study, what sort of professional activities do you think they would be well served by doing? What advice would you have for a younger researcher?
Damir Torrico: Yes, that's a good question. I would say curiosity is important in the research field. If you have questions — I think everybody has questions about anything — when I was starting to use these large language models, I had the question of how we can compare this with humans. And curiosity is driven by your desire to understand things.
I know that sometimes it's difficult to find ways to really achieve that — understanding things that are very complex. But I would say you just need to prepare yourself to face some challenges. If you really want to understand something, it's better to be fully involved in that piece of the thing you want to understand. For me, for example — I mean, I have a little bit of background in statistics, but I had that idea, and I said, maybe it's going to take me some time to train myself to use these systems. But again, I was very curious about how these things work, and I spent some time trying to learn about this.
If you have that motivation, curiosity, and you want to learn, that's a good way to motivate yourself to maybe take trainings, courses, pursue a degree in that particular area, or also work independently. Many people do this nowadays — they can learn and they can be self-taught in these issues. If you learn something and then at the end of the day you get what you want, that's very rewarding.
So yeah, curiosity — don't stop asking questions. It's important to ask questions. It's important to try to see what are the pathways you can use to answer those questions. And also motivation — you have to be motivated if you want to continue something or understand something that is interesting to you. That's my take, and I always recommend to my students to always ask. Not only just read and say, "this is okay." There are always things that can be improved. And this is kind of the aim of research, right? Improving things that can be better for society.
John Ennis: Yes, no, I completely agree. You've got to go out and make up your own mind. You've got to ask, like you said. I really like that — I think I'm going to make that the show title: "Always Ask." Because that is really true. You should not just accept things as received wisdom. You should ask and challenge.
When you're working with AI — I don't know if you saw this study, but Anthropic did a big study on AI fluency. They found the number one trait of people who are AI fluent is they push back on the model. They challenge the model. They don't just give up, they don't just accept what the model says — they say, "Hey, wait a minute, I don't agree with that," and they engage with the model. And I think that habit of engaging, like you said, being motivated, being curious, pushing back — that's where you'll make real progress. You have to be involved.
I think our position is safe. Our jobs will change, but there's going to be lots for us to do. So I'm actually very optimistic.
Damir Torrico: Yeah, it's true. You have to just ask and try to push things to the limit, and maybe there's going to be more questions on the other side of that. But you have to be prepared for that as well.
John Ennis: Yes, every answer unlocks a whole bunch of new questions. And every technological advance suddenly creates all sorts of new problems that can now be solved. The sphere just grows. Inside the sphere are the problems that are easily solved, the frontier is where things are interesting, and beyond the frontier are the problems that are still impossible. As problems get solved, the frontier just gets bigger and there are more problems to work on. So yeah, it's great.
Okay, well Damir, this has been a real pleasure. How can people get in touch with you? What's the best way for them to find you if they've got questions?
Damir Torrico: They can find me on LinkedIn, they can find me on my university website, and by email — dtorrico@illinois.edu. I'm happy to collaborate on any project, and I'm happy to answer any questions as well.
John Ennis: Sometimes people are afraid to reach out to famous people or people in positions of high status. I think you'll find that people who have accomplished things like it when younger people who are trying to do things reach out, because they see themselves in that person. They see that this is someone trying to do something. Like, I'm always happy when I get questions from people. That's what makes the world go round — younger people moving things forward.
Damir Torrico: Yeah, I think we can accomplish that, and it's going to be younger generations that have to build on that. We stand on the shoulders of giants, right? I think Newton said that. We build from that, and we keep having layers and layers of knowledge, and that is important.
John Ennis: Yeah, 100%. That's great. All right, Damir, this has been a really nice conversation. Thanks for being on the show.
Damir Torrico: Thank you, John. Thank you for having me.