John Ennis - Embrace the Machines
- Tian Yu
- Mar 18
- 24 min read
Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!
AigoraCast is available on Apple Podcasts, Spotify, Podcast Republic, Pandora, and Amazon Music. Remember to subscribe, and please leave a positive review if you like what you hear!
John Ennis is the CEO of Tulle, a company that builds economic efficiency tools for businesses operating on blockchains, and the Founder and Chairman of Aigora. He’s also a market research consultant trained in both mathematics and computational neuroscience. He is the author of 50+ peer-reviewed articles and 4 books in sensory and consumer science, and the recipient of the 2013 Food Quality and Preference "Researcher of the Future" Award.
Transcript (Semi-automated, forgive typos!)
Tian: Welcome back to AigoraCast, John.
John: Okay. Well, thanks for having me. I'm happy to be back.
Vanessa: Yeah. Thanks, John. So I will kick it off. And my first question to you is, well, for our new listeners who might not know the full story, could you give us a quick recap of how you founded Aigora, why you moved on to Tulle, and more importantly, why you're coming back to Aigora now?
John: I see. Well, first off, I'm still at Tulle, just to be clear on that. And I'm overseeing Aigora, and I'm going to be a little more involved here for the next few months. I think it's a good time for me to come back and help refresh some of the things happening in Aigora, because I've learned a lot being in the AI space, not just broadly, but specifically with Tulle for three years. For the people who don't know the history of Aigora, I started Aigora in 2019 because it became clear to me, starting in about 2017, that there was this big change coming. I did my PhD in math in 2004, and then I did my post-doc in computational neuroscience from 2004 through 2007, where we were writing spiking neural network models in Fortran. And the goal there actually was to understand humans better. We were writing these neural networks to generate hypotheses which we would test with humans. These were neurobiologically motivated neural networks. So it's interesting how, back in those days, AI was really a tool in neuroscience, because we wanted to understand something about how the brain worked. We would program impoverished models that we could use to test hypotheses, and when we found interesting things, we could go and test them in humans. We had different hypotheses about how humans learn categories that we wanted to understand better, and the neural networks were kind of a tool there. And over time, it's gone the other way: now in AI, neural networks are inspired by how the brain is structured, but they're only loosely inspired. It's not really a very tight coupling. And now, as people are reaching the limit of LLMs and what they can do, they're going back to the neuroscience literature to try to get ideas. It's an interesting time, in that if you were to show anybody from 15 years ago something like ChatGPT and ask them, "Is this general intelligence?"
They would say yes. But we work with ChatGPT every day, and we have the feeling that, while extremely useful, it's still not intelligent in the same way that humans are intelligent. Humans have creativity. Now, planning and reasoning have come a long way with the LLMs, but there still is something that humans do that really isn't being captured by the models. And so researchers are going back to the neuroscience literature, trying to get ideas about what other avenues there are to explore, because the LLMs are plateauing, and it's clear that we're going to need some new unlock. So anyway, that was my background in AI. And then I went to the Institute for Perception in 2008, where I worked with my father, Daniel Ennis, whose work a lot of listeners probably know. And I worked with him all the way until 2019, so for 12 years. During that time, I became very familiar with sensory and consumer science. When I was an undergraduate, I actually did analytic work, also in Fortran, for my dad. I published my first paper in sensory in, I think, 1998. It was the psychometric model of the tetrad test. That was my first first-author paper, and it turned out to be really important for my career. So I've always been in touch with sensory, and I grew up in the world of sensory, with my dad taking me to conferences and that kind of thing. But during those 12 years working for him, I got to know the field very well, and I did a lot of work on the tetrad test. People are probably familiar with my work there; that's why I won the prize in 2013, for the work on the tetrad test. I also did a lot with combinatorial methods, and I still feel like that's an underutilized area. There's more that can be done with combinations of flavors.
I think, actually, it'd be interesting to look at that with LLMs, because the LLMs know which flavors tend to appear together, at least in text, in recipes and that kind of thing. Michael Nestrud actually used to do a lot of work on that back in the day. I had the feeling in about 2017 that AI was coming and that we needed to really upskill ourselves in sensory. It was around that time I started to study data science pretty intensely. I think I first started really studying data science in about 2015, but in 2017, I got really into scripting all the analysis that I was doing at IFP. I got interested in reproducible research, the idea that you should be able to start with the data set and have a script that takes you start to finish. Once upon a time, and some of this stuff is probably funny for the younger listeners, you'd have a report, and you'd have no real idea how you got the answers that are in that report. Someone did some analysis somewhere, and it ended up in the report, but there was no real documentation of what happened. Nowadays, you've got your data set and you have all your scripts, and if you get another similar data set, you can run the same scripts and actually generate a report using the same code. That was something I saw in 2017, 2018: that automated reporting was going to be a big deal. And that turned out to be a cornerstone of some of the stuff we do at Aigora. I also felt like machine learning was going to matter. Now, back in those days, we weren't really using neural networks very much. We didn't have enough data, but we definitely were interested in training models to predict individual responses. One of the things that's unique about our field is that people are different, and the experiences that different people have of the same products are probably actually different.
If you don't capture those individual differences, then you're going to end up with a really impoverished view of what's going on in the market. So I was thinking about machine learning and how we could use it in probably 2017, 2018. Finally, I founded Aigora in 2019. And along the way, I figured out the four areas. The mission of Aigora is to help sensory and consumer science teams implement artificial intelligence, and I realized the four areas that were going to be needed: organizing historical data, so knowledge management; automating processes, including automated reporting; modeling, including machine learning; and using technology to collect new types of data. Those were the four areas, and that's what Aigora has really thrived on over the past six years, doing those kinds of projects. So I started off by myself and then gradually grew. Actually, Tian, you were one of the first hires, I think, maybe 2020, something like that. I don't know how long you've been with Aigora, but-
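The reproducible-research pattern John describes, one script that runs from the raw data all the way to the report, can be sketched in a few lines. This is a toy illustration only; the column names, products, and liking scores below are made up:

```python
# A toy end-to-end "data to report" script illustrating reproducible research:
# every step from data to summary lives in code, so rerunning the script on a
# similar data set regenerates the report with no undocumented manual steps.
# Column and product names here are hypothetical.
import csv
import io
import statistics

# In real use this would be something like open("panel_data.csv");
# here we inline a tiny made-up data set instead.
raw = io.StringIO("product,liking\nA,7\nA,6\nB,4\nB,5\n")
rows = list(csv.DictReader(raw))

# Group liking scores by product.
by_product = {}
for r in rows:
    by_product.setdefault(r["product"], []).append(float(r["liking"]))

# Generate the "report" from the data, fully scripted.
report_lines = ["Mean liking by product:"]
for product, scores in sorted(by_product.items()):
    report_lines.append(f"  {product}: {statistics.mean(scores):.2f}")

report = "\n".join(report_lines)
print(report)
```

The point is not the analysis itself but the shape: data in, report out, nothing in between that isn't captured in the script.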
Tian: 2020. It was, yeah, it was 2020 that we first talked.
John: Yeah, and you joined. And then Vanessa, you joined in 2021. Is that right?
Vanessa: Yeah, a year later, 2021.
John: Yeah, that's right. Really lucky to have you join. And Danielle also joined, I think, right around the time that Tian joined. And the reason I'm back is that Danielle is having to step back from the President role for some personal reasons, and so I'm stepping back in here. I think it's a very good time for me to get involved again. I've learned a tremendous amount over the last three years about chatbot engineering and about the use of LLMs, not just as user-facing tools, obviously chatbots are very useful, but under the hood: LLMs for analysis and LLMs for reporting. That's large language models, just for the people who maybe aren't as familiar, but I would guess most people know LLM means Large Language Model. They're incredibly useful tools, and we need to be using them everywhere. We've already been using them internally at Aigora to manage things. You have to look everywhere you can: how can we leverage this technology? Because if you don't, you can totally get left behind. The world is changing, and we have to change with it. We need to adopt these tools. So that's what I'm back to work with you guys on, and I'm excited to talk to you about it and talk to our clients about some of the new ideas I have.
Tian: We'll talk about lots of LLM- and AI-related topics later. But the first thing I'm always curious about is: how is your life in Web3? What differences do you find compared to your consulting experience at Aigora, now that you're an entrepreneur in a Web3 startup?
John: Yeah, it's been a real adventure. It's been three years now running a venture-backed startup at Tulle. Tulle is the fabric that tutus are made of; it's that interwoven fabric, a strong, flexible fabric. And the reason the company is called Tulle is because of an idea I've been working on for a long time: the problem of multi-way matchmaking within groups. You have a group of people, and people own all sorts of things. If you think about all the stuff you own, there's probably a lot of it that is valuable but that you don't really care about. What if it could somehow get reassigned to somebody who cared about it? Imagine, by magic, there were an inventory of all the stuff that each of you guys own and all the stuff that I own, and God came along, looked down into our hearts, and knew how much we each value all the items. God could probably reassign the ownership and we'd all be happier. I'd give up a bunch of books I don't care about, and maybe I'd get some skis that you don't care about, Vanessa, but that are good for my wife, or whatever. If we could just reassign ownership all at once in an intelligent way, we could all become wealthier without any of us actually having to come up with a lot of money. That's the core idea of Tulle. It turns out to be very good to do that on a blockchain, and that's why I got interested in blockchain. So I've been in the Web3 and AI space now, doing that as a startup. It's been a very different experience. Startups are very different from bootstrapped companies. Aigora is a company that my wife and I just started ourselves. Well, my wife's a co-founder; she doesn't really work in it very much, but we started it. We didn't have any backing. We just had to try to get work, and I actually had to go into debt, then get clients, and then gradually get out of debt. I own Aigora entirely.
Tulle was the opposite, where we had an idea, and then we went out and got funding, and then we had to do all sorts of experiments, and we're still experimenting. We finally found that Web3 gaming is a very good use case for us, and we are building tools to help do matchmaking within Web3 gaming. But Web3 is an interesting space. Bitcoin is the first blockchain, and most people know Bitcoin. The US government just announced that they're going to have a Bitcoin strategic reserve, so the US government is now accumulating Bitcoin. Other nation states are buying Bitcoin too; the UAE, I think, is buying it, and El Salvador is buying it. So Bitcoin, I think, is going to be a cornerstone of the financial world. Bitcoin has a lot of nice properties when it comes to capturing value in a way that everybody can agree on, and then allowing the value to move around the network in a way that everyone can agree on and no one controls. Bitcoin is almost a type of perfect money. It has some issues with day-to-day transactions, but for transferring large amounts of value around a network, Bitcoin is very good. Now, other blockchains have come along, and there are really, so far, two main use cases for blockchain. One is what are called stablecoins, where you've got something like the dollar, and you'd like to send somebody dollars. But sending international wires is a huge hassle. You guys have dealt with international wires; it's a real pain. We get payments from clients, and it's like, you're supposed to get paid from France or whatever, and the time comes to finally get paid, and some little thing is wrong, and where's the money? No one knows, and you have to call people. Two weeks later, maybe you finally get the money, or six weeks later, or whatever. International wires are a big mess. Stablecoins are like international wires, except they settle on the blockchain, and so they settle almost instantly.
I could send you guys USDC on Solana right now. Solana is a blockchain, a high-performance blockchain. I could send it to you right now, push the button, and you'd have it. And what's cool about it is that the ownership there is not recorded on any one computer. There's not some computer that somebody could hack into. You've got computers all around the world that have the record of what got sent where, and that makes it extremely secure. That's the one big use case, stablecoins, and that is a real innovation and a great use case for blockchain. The other one, of course, is gambling: meme coins, speculation. You know all this stuff that goes on in Web3, which is just a lot of speculation. A lot of people have gotten wealthy on the speculation, and a lot of people have lost money on it. I think that's happened mainly because blockchain gives people a very quick way to do commerce with each other, and people being people, as soon as there's a way to do commerce, they're going to start speculating. That's what's happened to a large extent in Web3. That hasn't been great news for Tulle, because Tulle is a real technology that wants to reassign the ownership of things that people care about. When everybody's just speculating on meme coins, it's not really a great use case for us. We have been early, and part of why we've had to work so hard is that we've had to really search for use cases that are closer to real life, closer to real things. And we think games are actually a good use case. Now, you had Todd from Baxus on the show, I think, Tian, last year. And he's got another real use case, which is tokenizing whiskey. You've got whiskey in a vault, and the ownership of the whiskey is recorded on chain. There are some real advantages to tokenizing whiskey, and eventually, we will work with them to do multi-way trading of whiskey. We also have other things, like Pokémon cards in vaults.
One of our partners, Collector Crypt, is doing Pokémon card trading on chain. That's a good use case for us. So we've been doing our experiments, and there's been a lot of froth in terms of trading NFTs, which are basically JPEGs. They're status symbols, but they're not necessarily worth a lot more than the status. NFTs have largely been a tool of speculators, and meme coins have come and gone. We've been building and waiting and waiting and waiting for the world to catch up with us. The form of commerce that we support is really better than buying and selling: simultaneous reassignment of ownership is more efficient. And in 10 years, I would think 90% of the world economy will just run on AI agents trading in large networks. That's just what it's going to be. What we've had to do at Tulle is make sure we can stay alive and find the use cases, and eventually, we'll be in the right place to do things like matchmaking of commodities or actual real assets, and then I think we'll be successful. In the meantime, we just have to stay alive. So that's me and Hamza, the CTO of Tulle; we've been working on that. Yeah, it's been a lot of fun. Okay, I'll pause here if you've got other questions.
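The multi-way matchmaking John describes is related to classic results in matching theory. As a purely illustrative sketch, not Tulle's actual algorithm, here is the Top Trading Cycles idea on made-up owners and preferences: every owner points at the item they value most, and each cycle of pointers trades simultaneously, with no money changing hands:

```python
def top_trading_cycles(preferences):
    """Each agent owns one item (named after its owner); preferences[a]
    ranks all items for agent a. Returns agent -> item received.
    An illustrative Top Trading Cycles sketch, not a production matcher."""
    remaining = set(preferences)
    assignment = {}
    while remaining:
        # Each remaining agent points at their favorite still-available item.
        points_to = {a: next(i for i in preferences[a] if i in remaining)
                     for a in remaining}
        # Follow pointers from any agent until a node repeats: that's a cycle.
        a = next(iter(remaining))
        seen = []
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]
        # Everyone in the cycle receives the item they pointed at,
        # simultaneously, and leaves the market.
        for agent in cycle:
            assignment[agent] = points_to[agent]
            remaining.discard(agent)
    return assignment

# Hypothetical example: three owners who each prefer someone else's item.
prefs = {"john": ["vanessa", "john", "tian"],
         "vanessa": ["tian", "vanessa", "john"],
         "tian": ["john", "tian", "vanessa"]}
print(top_trading_cycles(prefs))
# A three-way cycle: john gets vanessa's item, vanessa gets tian's,
# and tian gets john's, all at once.
```

No pair here would trade bilaterally, yet the three-way reassignment makes everyone better off, which is exactly the "simultaneous reassignment beats buying and selling" point.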
Vanessa: Yeah, of course. Coming back to Aigora, John: what are the key AI-related learnings from this journey in Web3 and at Tulle that you want to bring back to Aigora, with this fresh view and all the knowledge you've gained?
John: Yeah. Well, for one thing, obviously, one area that's been highly impacted by LLMs is computer programming. And that affects us in a couple of ways. One is, I want to make sure that everybody who is creating computer software, or anything like computer software, internally is using LLMs. Honestly, nowadays, if you are writing code by hand, if you're pressing a single key to code something, you're probably making a mistake. It's like doing an analysis of variance by hand. It's good to know how to do that, but if you're actually taking out a piece of paper and doing an analysis of variance on it, you shouldn't be. It's like that with coding now: if you're writing code directly into the editor yourself, you probably are making a mistake. You shouldn't be writing code like that anymore. And so I do want to make sure that everybody at Aigora is leveraging AI coding tools. I also want to make sure that we're using AI everywhere else we can, and I've seen that happening. We're using different systems. Now we've got different modules for every project that we have, where we have a knowledge base for everything to do with the project. And if somebody's working on a project and they need to get caught up, they can just go to the knowledge base and ask their questions. So I think that these LLMs, when you have a healthy... We can talk about the chatbot systems and what they look like. But if you have a well-designed chatbot system, you should be able to take all the knowledge related to some topic, have it in one place, and get what you need from the chatbot, including potentially code, if you need code. You guys know, I've been back now for a week and a half, and we're pushing hard to make sure that every process internally at Aigora is running AI-first. We don't ever want to be doing things in a way that's out of date. Then, of course, related to that, we have to make sure our clients are also using AI wherever they can.
That's one of the things we already do, but I think we can push harder. Some of our clients are using chatbots, but I think most people are not using chatbots nearly as effectively as they could be. We want to make sure that, internally, our clients can easily get their hands on that historical knowledge through a nice natural language interface, so they can easily learn whatever they're trying to learn from the historical data, or run analyses, also from the chatbot.
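A chatbot knowledge base of the kind John describes typically pairs an LLM with retrieval over project documents. Here is a toy sketch of just the retrieval half; the word-overlap scoring is a naive stand-in for real embedding-based search, and the document snippets are hypothetical, not from any actual Aigora system:

```python
# Toy retrieval over a project knowledge base: score each document by word
# overlap with the question and return the best match. Real systems use
# embeddings plus an LLM on top; this only illustrates the retrieval idea.
def retrieve(question, docs):
    q_words = set(question.lower().split())

    def score(doc):
        # Count how many question words appear in the document.
        return len(q_words & set(doc.lower().split()))

    return max(docs, key=score)

# Hypothetical knowledge-base snippets for a project.
docs = [
    "The 2023 tetrad study used 48 panelists across three sessions.",
    "Automated reporting runs every Friday from the shared data folder.",
    "Panel screening criteria: age 18-65, no known taste disorders.",
]

print(retrieve("how many panelists were in the tetrad study", docs))
```

In a full system, the retrieved snippet would be handed to the LLM as context, so the answer is grounded in the team's own historical data rather than the model's general knowledge.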
Tian: I have a follow-up question on what you just said. You said we're now using LLMs to write code and it's much more efficient. In this new era of AI tools, what are the specific skills that a developer or scientist needs in order to use AI well?
John: Well, okay. Level zero, the base level, is just being proficient at getting the answers you need from the various chatbots. If someone's writing R code or whatever, they certainly 100% should be asking ChatGPT or Gemini or whatever you're using. Obviously, if you're working inside a company, make sure that you've got data security. You don't want to put data inside some generic ChatGPT and have them training on your data. You've got to watch out for that. But most companies have approved chatbots that can be used for coding internally. So level zero is asking questions. Now, we use Cursor at Aigora, and the next step is that it's actually a lot easier than you might think to build your own basic apps. I know, Tian, you've been playing with this. Vanessa, I think you've also started to get into using Cursor. Certainly, our whole tech team is using Cursor. You can get prototypes up and running very quickly. Now, of course, within a large company, you always have all the IT headaches, and so if you're trying to run something, that's its own challenge. That's actually a lot of the work we do with IT teams, getting things deployed. But it is easier than it has ever been to get a basic app running. So if you're inside a large company and you have some idea, the simplest case would be: you're using R and you want to make a little Shiny dashboard. You should be able to get the files you need; just describe what you're looking for. Now, prompting is its own skill, and we offer coaching on how to prompt properly. But you should be able to get a rudimentary app running much more easily now than you used to. Now, if you're coding in Python or JavaScript, you're in better shape. Unfortunately, R is not very well supported when it comes to complex AI coding tools, but Python is very well supported. So something some people may want to do is start learning Python.
I know, Tian, you've been on this journey yourself, where you're very good in R and you'd like to learn Python. Well, Cursor is great, because you can make your basic app using Cursor, and then you can take your R code and say, okay, here's the R code, I need to get this ported to Python and put into my app. That's not really that hard to do, especially if you work with a model to first make your R code what's called Pythonic, because the fact is, regular R doesn't necessarily port to Python very well. There's all sorts of syntax and whatnot. But what you can do is port the R code you have to R code that is more likely to port to Python correctly, and then you can test it in R. You can say to the chatbot, I need to make this R code Pythonic so that it can port to Python, and it'll rework it. Now you can test it and work on it in R, and then, when you're happy with it, you say, okay, great, now let's port this to Python. Then you can bring it to Cursor and say, okay, now put this analysis into the back end of my app. You would be surprised at how quickly you can get up and running in a new language using LLMs. One of the things that has been surprising to me is that this is almost a new way to learn. You can just learn on the fly, because any questions you have, any of the mysteries of computer science, like what does npm run dev do, or how do I structure this, or how do I do that, you can just ask the very bot that's helping you code, and it'll give you the answers.
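As a concrete toy example of the porting workflow John describes: suppose your R script computes a pooled two-sample t statistic with `t.test(x, y, var.equal = TRUE)`. A hand port to Python, using only the standard library and made-up data, might look like this:

```python
# Porting a simple R analysis to Python by translating the formula directly.
# The R original (hypothetical data):
#   x <- c(6, 7, 7, 8); y <- c(4, 5, 5, 6)
#   t.test(x, y, var.equal = TRUE)$statistic
from statistics import mean, variance

def pooled_t_statistic(x, y):
    """Two-sample t statistic with pooled variance,
    matching R's t.test(x, y, var.equal = TRUE)."""
    nx, ny = len(x), len(y)
    # Pooled variance: the two sample variances weighted by their
    # degrees of freedom.
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

x, y = [6, 7, 7, 8], [4, 5, 5, 6]
print(round(pooled_t_statistic(x, y), 3))  # prints 3.464
```

Because the Python version can be checked against the R output on the same data, this kind of port doubles as a way to learn the new language with a built-in correctness check.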
Tian: I can share a little bit about learning Python. Actually, it's very interesting. I feel like using Cursor to learn Python is like learning another language, a human language; say I'm learning Polish. It's like going to Poland to learn Polish. You're not starting from learning the grammar and all the words and everything. You just see what Cursor writes, and you write it down and start to understand what it's for. It's a totally different experience from how I learned data science in R and how R works. Learning Python right now is very different. It's like learning a new language. It's totally different.
John: That's a great analogy. It's like you're going to another country, but you also have a translator, and not just a translator, but someone who's really knowledgeable about the culture. They know everything about Poland, and they're by your side all the time. You're like, "Hey, what's this? What's that?" And they're like, "Oh, yeah, this is the special Polish dish, the stew," whatever. And you pretty quickly get up to speed on Polish culture and Polish life. That, I think, is one of the reasons I'm actually not saving for my children to go to college, because I really don't think they're going to go to college. I think that the way the world is headed, we will all have basically personalized tutors by our side, teaching us all the time. And it's much more important to be someone who is able to connect different disciplines, generate new ideas, and basically manage these tools than it is to be somebody who is extremely technically deep in one area, because the models are going to be more technically deep than anybody. Now, you still need to read the code, go through it, and understand it. But a lot of the things that used to be valuable are not going to be so valuable in the future. What the machines can't do is tell you what to work on. They don't have a sense for what's important and what's not. They'll just very enthusiastically do whatever it is you want. They need humans to guide them. I think it's more important than ever for humans to really think about big-picture goals and think of themselves more as managers. We already do this to some extent. Even with a regular programming language, we don't write machine code. That's the whole point: you already have layers of technology that you never see. You write R code, but the computer doesn't really run R code, it runs machine code. And there are all these layers...
You take a high-level language like R, and it's getting translated all the way down, eventually into machine code or assembly, and you never see it. We're just going up a level now: we operate one level above the high-level languages, at the level of ideas. And yes, you need some understanding of the languages to make sure you're not writing code that's going to fail in important ways. But the vision is becoming a lot more important than it used to be.
Vanessa: Yeah. John, I know that we need to start wrapping up, but I have one more question for you. With AI becoming more sophisticated, some fear that human expertise in sensory and consumer science may become less relevant. What are your views on that, on AI decision-making replacing subject matter experts in the field? If you can summarize in a quick answer.
John: Some of this is disturbing, for sure. If all you really have is knowledge by itself, well, a chatbot, especially one that's connected to the internet and can do searches, is going to have access to a lot of knowledge. Now, it's still true that, when it comes to detailed knowledge, there's a lot of wrong information on the internet, right? Having an expert who can sign off on things, who can go through and say this plan looks good or this plan looks bad, is important. It's also true that there are a lot of subtle details in our field, and the chatbot doesn't really have a good feel for those details. So I do think subject matter experts still matter. It's like chess: when the chess engines came along, the engines could beat the grandmasters, but grandmasters who had access to a chess engine could beat the chess engine. Maybe that's finally no longer true; maybe the chess engines just always win at this point, they've become so superhuman. But I think that when it comes to understanding the nuances of our field, someone with experience who's working with a chatbot is going to be a lot more capable than just a chatbot by itself, and also more capable than somebody who's not using a chatbot. So this is actually one of the things I want us to start to deliver at Aigora: chatbots that can provide low-level consulting and let you know, okay, now we need to get an expert involved. A good analogy is that doctors are going to start using chatbots more and more. There's no reason why, when we go to the doctor, the doctor shouldn't put all your symptoms into an LLM to get an opinion. Now, that doesn't mean the doctor has to blindly do whatever the chatbot says, but maybe the doctor missed something, maybe the doctor didn't think about something. An automatic second opinion is very valuable, very helpful. So I do think subject matter experts need to embrace these tools.
And if they have a snobby attitude, that, oh, these tools don't know what they're talking about, they're going to lose a lot of value. It's certainly going to save a lot of time. It's much easier to edit a draft of something, or a plan of something, than it is to create something from nothing. So if you get a preliminary plan for designing an experiment from an LLM, and then you go through it and adjust it yourself, you're going to be a lot better off than if you just try to do it from nothing. One piece of advice I would have for our listeners is: use chatbots everywhere you can. Be addicted to chatbots. We use Grok; I use ChatGPT a lot as well. It doesn't really matter what you use, Perplexity, whatever. But never use a plain search engine; always use some chat-enabled search, like Grok or Perplexity, when you ask your questions. When you write code, never write it by hand anymore; always go to a bot and get help. Whatever work you're doing, ask yourself, "How can an LLM help me?" and start to become proficient in these tools. When the Industrial Revolution happened, there were a lot of people who just fought the machines, and they said, "Oh, well, these socks that are made by the machine are not as good quality as the socks made by human weavers." Well, they were right, but the socks made by the machine were a tenth of the price, or a hundredth of the price, so you just couldn't compete. You had to embrace the machines. And the weavers who stayed in business were the ones who used the machines to turn out lots of socks, and then they'd do custom work on top. They would add value. They would also decide, big picture, what needs to be done, what's important. Humans are good at the value decisions, the value judgments. Bots can't really figure out what's important; they're just good at executing. But embrace the machines, for sure. Don't fight the machines. You've got to embrace them. This change is happening.
Tian: Right. I think that leads naturally to our last question, the one we ask all our guests; I know you know it. What are your recommendations or suggestions for new sensory and consumer scientists? Now that we are in this era of rapid evolution of AI, what do you think are the critical skills a sensory and consumer scientist needs to stay ahead and maximize their impact?
John: Yeah. Well, you definitely need to use LLMs wherever you can, for sure. It's good to also have awareness of other AI tools. There's a whole family of machine learning tools that are not language models. For tabular data, you've got random forests, traditional neural networks, XGBoost, gradient boosting, all sorts of models that are, basically, statistical models being trained. But work with an LLM to learn. When you're working on a problem, ask for advice, go through it, have it teach you things, cross-reference, and become very fluent in the use of LLMs. The challenge, when I first started doing AI coding, was that I started to feel like I was being locked out of the code. It would just generate stuff, and there would be so much of it that I felt like I didn't understand what was happening inside the code. Then I learned: okay, you've got to do this piece by piece. You still have to work like you normally would on a project, but in small pieces, using the tools to help you. You don't want to end up so reliant on the machine that you're not thinking at all. You need to use the machine as an extension of yourself, not as a replacement for yourself. You need to use it to empower yourself. And some of that just comes through experience. I remember when I first got into using Cursor, it definitely took me a while to get used to it. But my son and I are making a chess app in the evenings, just for fun. Just experiment, make things, and try things, and over time, you'll become proficient and you'll have a new set of skills. Use AI wherever you can; that would probably be the main bit of advice I would give people. There's always going to be a place for creative, enthusiastic people who are willing to work hard. You just have to embrace the technology.
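The tabular-model families John mentions (random forests, gradient boosting) share one core idea: combine many weak learners. As a toy illustration of that idea only, not a substitute for scikit-learn or XGBoost, here is bagging of one-feature decision stumps in plain Python, on made-up sensory-style data:

```python
# Toy "random forest" flavor: train many one-feature threshold rules (stumps)
# on bootstrap samples of the data, then predict by majority vote.
# The data, features, and labels below are entirely made up.
import random

def fit_stump(rows):
    """Pick the (feature, threshold, sign) rule with fewest training errors."""
    best = None
    for f in range(len(rows[0][0])):          # try each feature...
        for x, _ in rows:                     # ...each observed value as threshold...
            thr = x[f]
            for sign in (1, -1):              # ...and both rule directions
                pred = lambda x, f=f, thr=thr, s=sign: 1 if s * (x[f] - thr) > 0 else 0
                errors = sum(pred(x) != y for x, y in rows)
                if best is None or errors < best[0]:
                    best = (errors, pred)
    return best[1]

def fit_forest(rows, n_trees=25, seed=0):
    """Bag n_trees stumps, each fit on a bootstrap sample; vote to predict."""
    rng = random.Random(seed)
    stumps = [fit_stump([rng.choice(rows) for _ in rows]) for _ in range(n_trees)]
    return lambda x: int(sum(s(x) for s in stumps) > n_trees / 2)

# Hypothetical tabular data: (sweetness, bitterness) -> liked (1) or not (0).
data = [((7, 2), 1), ((6, 3), 1), ((8, 1), 1),
        ((2, 7), 0), ((3, 6), 0), ((1, 8), 0)]
model = fit_forest(data)
print([model(x) for x, _ in data])
```

Real libraries use full decision trees, random feature subsets, and far better split criteria; the sketch only shows why an ensemble of weak rules can be a strong predictor.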
Tian: Yeah, I think that's a very nice place to end this call. I wish we could keep going much, much longer. I've become a fan of the Lex Fridman podcast, where episodes last five hours. Maybe that's what it would take to ask all the questions we have.
John: I don't know if anyone ever watches all those.
Tian: It sometimes takes me a week to finish one of his episodes. But thank you very much, John, for coming back to AigoraCast and sharing all your AI learnings and your Web3 experience with us. And we'll see you next time; I guess you'll be back hosting AigoraCast again.
John: Well, you guys will be hosting, too; we'll take turns. But I'm happy to be back. I mean, Tulle will succeed, and I have to focus on Tulle and get it going. But I love Aigora, and I'm happy to be back, at least for the next little while, working with you guys and helping you through this transition. I feel like it's a very good time for me to come back, and I think the next year at Aigora will be very exciting, with you guys bringing all sorts of new tools to our clients. I think we're going to revamp the way we're working at Aigora. I'm really looking forward to it.
Vanessa: Yeah, that's exciting. Thank you very much, John.
John: Okay. Thanks, Tian. Thanks, Vanessa.
Vanessa: Thank you.
That's it. I hope you enjoyed this conversation. If you did, please help us grow our audience by telling your friend about AigoraCast and leaving us a positive review on iTunes. Thanks.
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!