Aigora means: "Now is the time for market researchers to prepare for the rise of artificial intelligence."

Greg Stucky - The Human Element

  • Writer: John Ennis
  • 15 minutes ago
  • 25 min read

Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!


AigoraCast is available on Apple Podcasts, Spotify, Podcast Republic, Pandora, and Amazon Music. Remember to subscribe, and please leave a positive review if you like what you hear!


 

Greg has focused his career on the development of new methods, techniques, and services for consumer product innovation. His deep experience in applying consumer behavior to product innovation has garnered industry attention and awards, with work featured in Harvard Business Review, The LA Times, ESOMAR World, and other industry publications. Greg has pioneered cue signals research, an approach in which identifying the product cues that motivate specific behaviors helps develop successful new products and brands. At InsightsNow, he is responsible for the growth of new business initiatives. Greg holds a master's degree in food science and technology from Oregon State University.




Transcript (Semi-automated, forgive typos!)



John: Greg, thanks for being on the show today.


Greg: Yeah, absolutely, love to be here. Big fan. A hundred-plus episodes that you've cranked through. That's beautiful.


John: Thank you. Yeah, I know it's a lot of work, but it's fun. It's fun to talk to people like you. That's one of the reasons, actually the main reason. I mean, it's nice to get some exposure, but honestly, I do it for the conversations. It's just great to have these conversations.


Greg: I agree. I love the conversations. It's fun to, you know, figure out where people really see things moving and how things are influencing folks.


John:  Yeah, we were doing our kind of pre-show warmup and we got into such an interesting conversation. I had to cut it off. So maybe we can get back to that.


Greg: That's right, that's right.


John: Yeah. So we were talking, I was asking you about the impact of LLMs, and you had some really interesting points you were making about the need for creativity and originality, a human-in-the-loop sort of thing. So maybe jump into that.


Greg: Sure, sure. I think the most exciting thing for me about the AI modeling that's available through all the LLMs, especially now that they're being made really simple for people (you don't have to have a stats degree to work with them), is really how much they enhance people's creativity. I mean, getting work done, speeding yourself up, that's kind of nice, but in my opinion that's actually not that much value overall. What excites me is the ability to throw patterns at you, or throw ways of looking at information at you, that you hadn't thought of before, and I don't think that's a bad thing.

I think that being able to make people more creative is a great thing.


John: I think there's a lot of talk about jobs being replaced. And the point you're making here is really kind of the opposite: the idea that you're just going to automate everything, I mean, that's the path to mediocrity, I think, in a lot of ways.


Greg: Oh, absolutely, yeah,


John: What are your thoughts? For example, suppose you're in product development and you're trying to come up with an idea, how would you see this process playing out with the use of AI?


Greg: Well, I think the great thing is that AI lets you do things faster, right? Typically, you and I having a conversation, we start coming up with ideas around a particular topic, and we're going to launch off each other, right? You're going to say something that sparks a memory, that pulls up some signal that I remember, some association that I have in my head. And that's great, and in small groups that has traditionally worked exceptionally well, and great facilitators can ask fantastic questions that really get people going. However, even in those sessions, you still end up with a fairly small number of ideas, and you're kind of handcuffed to whatever group you happen to be in. You throw yourself into another group and all of a sudden you come up with brand-new ideas that you didn't come up with in the first group. The LLMs can basically amplify that on steroids. It's like having a whole ton of other people in that session to spark your brain, to be able to see patterns, put things together. And for innovation and product development, I mean, there are two big facets there, right? There's the innovation side. In innovation, you really need that creative spark. You really want to be able to disrupt the market. You want to be able to push people to try something new to improve their lives, to really bring out new ways of doing work or new products. And I think in that scenario, these LLMs are fantastic at helping that creativity. So you run sessions where you're just incorporating that, right? You're telling it: hey, give me a bunch of ideas on this, but ground it in this moment, ground it in all this context, put it in this particular way of viewing the world, voice it in this particular way. All of a sudden you've got information coming at you that can really spark that conversation and really drive that thinking process forward.


John: Yeah, that's fascinating. I mean, I would guess you've probably done something like this as well, but in my explorations and different projects I've worked on, a very interesting thing to do is to ask the AI to generate a diverse panel of profiles: give me 20 different profiles of potential consumers of this product. OK, now you've got your profiles, and those are essentially predictions; the LLMs are just making predictions based on information. And once you have this diverse panel, you can use it. I've done this a lot with naming, for example, where you want to test names: you put a name out there and you get the feedback from the group. We actually had a case in my other company, Tulle. Hamza lives in Dubai quite a lot; he's Muslim, he's from Morocco, so he lives in Dubai. At one point we were going to open a branch with an Arabic name there in Dubai, but we were very worried that we would accidentally do something offensive. And so it was very helpful to create a diverse panel of 20 different Arabic speakers from around the world and run names by them, and we actually found we were about to make some mistakes. So that was a very interesting exercise. But maybe you can talk about how you've seen these ideas play out.
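
To make that exercise concrete, here is a minimal sketch using the OpenAI Python client; the model name, the candidate name "Zestara", and all prompt wording are illustrative assumptions, not what John's team actually ran.

```python
# Minimal sketch of John's synthetic-panel exercise. Model name, the
# candidate name "Zestara", and prompt wording are all illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return the text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate a diverse panel of consumer profiles.
panel = ask(
    "Generate 20 short profiles of Arabic speakers from around the world "
    "who might visit a new business branch in Dubai. Vary country, age, "
    "and dialect. Number them 1-20."
)

# Step 2: run a candidate name past the panel and ask for red flags.
feedback = ask(
    f"Here is a panel:\n{panel}\n\nFor each panelist, react to the "
    "business name 'Zestara': is it awkward, confusing, or offensive in "
    "their dialect or culture? Flag any risks."
)
print(feedback)
```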


Greg: Sure, sure. So let's take your component there and just flip it a little bit.

So one of the things we believe in wholeheartedly: in order to make a product really align with a person's life, it's got to fit with a specific moment, right? It's got to get used to be successful. A single-time purchase, great, it's a fad, whatever, you can make a little money; but if you really want to build a brand, you've got to get integrated into a person's life. So that means you've got to know the context, and the series of different contexts, that it fits in. So instead of necessarily creating, say, 20 different unique people, think about all the different possible ways a product like this could be used, or situations in which it could be used, with different motivations. And then, as you're building ideas, we say things like: OK, let me ideate moment by moment by moment. But then when you come up with ideas and you start aggregating them, you say: now think across all those different moments of use. Where is it going to get used most? Why would it fit better in one than another? That helps you target the story, it helps you hone the message, helps you pick imagery, messaging, all those pieces, and even then find people: you say, well, what kind of people frequent that moment? Now, you may very well not have all the information out there, so you do have to make sure that what you're feeding that model has that value. I would certainly say, for me, if I just go to any of the AIs out there (we use Copilot a lot) and I just say generically, give me this information, I guarantee 100% it's going to give me something completely wrong. Now, that may be OK for creative work, but when you're trying to filter down and hone, being able to clean the data and make sure it's really looking at the right source of information is valuable.
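
Greg's moment-by-moment flow could be sketched the same way, reusing the hypothetical ask() helper from the sketch above; the moments and prompt wording here are invented for illustration.

```python
# Sketch of moment-by-moment ideation followed by cross-moment aggregation.
# Reuses the hypothetical ask() helper above; moments are invented examples.
moments = [
    "a rushed weekday breakfast before the school run",
    "a mid-afternoon energy slump at a desk job",
    "winding down after an evening workout",
]

ideas_by_moment = {}
for moment in moments:
    ideas_by_moment[moment] = ask(
        "Ideate 5 concepts for a functional beverage, grounded in this "
        f"specific moment of use: {moment}. State the motivation driving "
        "the consumer in that moment."
    )

# Aggregate: where would each idea get used most, and why does it fit there?
summary = ask(
    "Here are ideas generated moment by moment:\n\n"
    + "\n\n".join(f"{m}:\n{i}" for m, i in ideas_by_moment.items())
    + "\n\nNow think across all these moments of use. For each idea, say "
    "where it would get used most and why it fits better there."
)
print(summary)
```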


John: Yeah, and that gets at a bigger point that I feel passionate about, which is that I think everybody needs to understand how these models actually work, at least at a big-picture level, because these are tools. And tools are great when you use them properly and not great when you don't. And for what you're saying here, I was thinking about context; you're talking about different contexts for the usage, right? I think everyone listening to the show should know this, and maybe this is education, maybe review, but the way these models work is they have some input, which is the context, right? You've got this thing called a context window; that's where the input goes in. And what the model is trying to do is predict, from the context that goes in, what is likely to come next. The technical process is that the input is turned into tokens, the tokens get embedded, you get a bunch of math, and at the end you get a new token back: the next predicted thing. Then that's fed back into the machine, and this process repeats. It's basically glorified autocomplete. So you might say: OK, here's a moment in time, here's a person, here's a beverage, what do you expect to happen based on your knowledge of basically digitized human history, which is more or less what ChatGPT knows, right? And these are the patterns. I think when people understand that that's what's going on, that it isn't an oracle, it's just making a prediction based on what has been digitized, then it's clear it's good for drafting, and it's probably good for screening: if something is a bad idea, it's very unlikely the model would say it's a good idea. So if you have a whole bunch of ideas, you feed them into the model, and you're told that something is a good idea, it's probably worth looking into more deeply, right? And if the model says this is a terrible idea, then probably you should avoid it. So I think it's good for screening.
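
For intuition, here is a toy, self-contained version of that loop, where a hand-written bigram table stands in for the trained network; a real LLM runs the same predict-append-repeat cycle with a neural net over a vocabulary of tens of thousands of tokens.

```python
# Toy sketch of the "glorified autocomplete" loop John describes.
# A tiny hand-written bigram table stands in for the trained network.
import random

BIGRAMS = {  # stand-in "model": which token tends to follow which
    "a": ["person"], "person": ["opens"], "opens": ["the"],
    "the": ["fridge", "beverage"], "fridge": ["<end>"], "beverage": ["<end>"],
}

def generate(context: str, max_new_tokens: int = 10) -> str:
    tokens = context.split()  # crude stand-in for a real tokenizer
    for _ in range(max_new_tokens):
        candidates = BIGRAMS.get(tokens[-1], ["<end>"])  # "a bunch of math"
        next_token = random.choice(candidates)           # sample a prediction
        if next_token == "<end>":
            break
        tokens.append(next_token)  # feed the prediction back in, repeat
    return " ".join(tokens)

print(generate("a person opens"))  # e.g. "a person opens the beverage"
```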


Greg: I think there's some pattern recognition that I really like about it, right? It throws out patterns that might not be intuitive to you, and you can actually have it suggest patterns that are non-intuitive, that don't fit the model, and that's part of creativity too. We do this quite a bit in our sessions, where we apply tools like SCAMPER and other types of creative assessment tools, forcing it to basically disrupt. While its normal mode is to fill things in that are a good match, you can also tell it: give me the exact opposite, right? Give me the things that shouldn't go together. And sometimes when you look at things and say, what doesn't go together today, that's your white space. Now, they might not go together because there's a really good reason they don't go together, but they might not go together because nobody's ever tried it or thought about it, or the tech wasn't available before to make it happen. All of those things are possible. So we use that component a lot. We focus on what could happen, and we usually also have it not just look at the world, but we feed it research information. A lot of times, going into working with these, we start by saying: OK, what do you already know? And then we look at that and go, OK, there are a lot of gaps. Let's go do research to fill in the gaps. And then we also take historical data. I think people forget that they probably have years of focus group transcripts sitting in their digital lockers and so forth; they probably have a ton of content available. You have to teach the models how to read that, how to pull it out, how to organize it, things like that. And I think using behavioral frameworks to organize that information, so you know what the pieces are (what's a need, what's a job, what's a motivation, what's an ideal or a goal), if you can help it understand how to define those pieces and organize that information, now you can make sure that you've got a complete picture of all the knowledge, right? And that really helps drive value as you go through the process and the honing.
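
A sketch of the disruption step Greg describes, again reusing the hypothetical ask() helper; the SCAMPER verbs are the standard ones, while the concept and prompt wording are invented.

```python
# Sketch of using SCAMPER verbs and forced opposites to push the model off
# its default best-match behavior. Concept and prompts are illustrative.
SCAMPER = ["Substitute", "Combine", "Adapt", "Modify",
           "Put to another use", "Eliminate", "Reverse"]

concept = "a protein-rich afternoon snack bar"

# Apply each SCAMPER operation to the concept.
for verb in SCAMPER:
    print(ask(f"Apply the SCAMPER operation '{verb}' to this concept and "
              f"propose one disruptive variant: {concept}."))

# Force the opposite: pairings that should NOT go together = white space.
white_space = ask(
    f"List 10 ingredient or benefit pairings that conventionally do NOT go "
    f"with {concept}. For each, say whether the mismatch reflects a real "
    "constraint or just that nobody has tried it yet."
)
print(white_space)
```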


John: Yeah, maybe I should clarify, because when I say digitized human history, I mean publicly available, right? And I think you're making an important point here that people need to understand: ChatGPT does know a lot of things, but what it knows is what's been on YouTube, what's been on Reddit, what's been on Facebook; it depends on what they've done over the years.

They probably have flagrantly violated copyright at every opportunity, but whatever they did, they have a lot of information, and it was generally speaking publicly available information. What companies have is much more nuanced information, hence your point about harvesting all of your historical data. And another nice thing here is that LLMs are great at processing text. So if you've got all these data sources and they're all a giant mess, well, LLMs are your friend for sure, because now you can process these old documents. And now you have your own knowledge that isn't available in ChatGPT; that knowledge is just not in there. Then it can be added to the context selectively, or you can train your own model: train another layer on top of one of the major models, and now you have your own model.
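
As one way to picture that harvesting step: a sketch of tagging old focus-group transcripts against a simple behavioral framework before they go into a private knowledge base, reusing the hypothetical ask() helper; the folder name, categories, and prompt are illustrative.

```python
# Sketch of mining historical transcripts into an organized knowledge base.
# Reuses the hypothetical ask() helper; paths and categories are illustrative.
import json
from pathlib import Path

FRAMEWORK = ["need", "job", "motivation", "goal"]  # behavioral categories

def tag_transcript(text: str) -> dict:
    """Ask the model to organize one transcript into framework categories."""
    reply = ask(
        "Read this focus-group transcript and return JSON with the keys "
        f"{FRAMEWORK}, each holding verbatim quotes that express it.\n\n{text}"
    )
    return json.loads(reply)  # in practice, validate the model's JSON

knowledge_base = [
    tag_transcript(path.read_text())
    for path in Path("transcripts/").glob("*.txt")  # your "digital locker"
]

# Entries can then be added selectively to a prompt's context, or used as
# training data for a custom layer on top of a base model.
```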

But I think what people really have to understand is that these are just tools, and they should be used like tools. They're really not human brains, right? And you were talking about drives and goals and feeling and that kind of thing.

I think that's another really important topic. So maybe you can share your thoughts on how humans differ from these machines.


Greg: Right, like I want to encourage another person, right? I want to get them excited about seeing possibilities. As a leader, you want to always encourage great behaviors, you want to get people going through hard times, through failures. We as humans learn from failure; that's part of our learning process. We learn from having problems. Think about your favorite songs you've ever heard: a lot of them come out of very emotional moments for people. Some of the most popular songs are breakup songs and other things that are just not real positive pieces of life, but they spur great creativity. I think that's a piece of that human element. We can get great ideas, we can see fantastic patterns in data, but, shoot, just because it's an LLM... I mean, statistics was that way. Going through school, I was always taught: your models are always wrong, sometimes useful. Great. Well, the same thing is true here. It's another model, so it's going to be useful, but only if you put a real human layer against it. So just because it says I should try something, or a certain thing is a great combination, I still have to put that human intuition against it. We still have to bounce it off real people to be able to say: tell me exactly where it fits and why it fits. And you've got to get to that deep, subconscious-level memory in order for it to become a repeat-use product. You're going to get trial by wowing people, by getting eye candy out there, but to get the repeat purchase, you've got to get those subconscious cues exactly right. You can get hints at them from the LLMs, and pretty darn good ones in many cases, but you're not going to get the full picture of how to communicate with that person.

Unless you're there. But it does help you across a lot of barriers, cultural barriers, you name it, right? You talked about that. Like, I don't know, if you said, hey, go to France and make a new product; I don't know that culture that well, right? I mean, I've visited many times, I've got friends who live there, but that's not the same as having grown up there. And even in the US, if you grew up on the West Coast like I did versus the South, you're going to have very different life experiences, and being able to bring those to bear... I can learn a lot, but the human emotional element that comes along with those ideas, the ways that text is put together, how those stories are brought to life, that's the part that we don't want to stop having added, if you will.


John: Now, you said two things here that I've been thinking about a lot lately. One is that I think a good way to think about an LLM is as a simulator, that it's a simulation. And actually, when you think about it that way, the problem of hallucinations, where it will give you nonsense: hallucinations are not actually a bug, they're a feature that you have in a simulation. Sometimes the simulation is not that close to reality, and sometimes the simulation matches reality very well. For coding, it's an objective question: does the code run or not, is the code correct, is the math problem correct or not? In those cases, you're basically simulating how a human programmer, a good one, would write the code, and that simulation is sometimes perfect.

It's excellent, right? But as you get into more and more of these human experiences, I think there is going to be a gap between the simulation and reality. And that's, I think, where the opportunity lies, because we all have access to the same models, yeah.


Greg: Yeah, the simulation piece is really a nice, simple way to think of it.

The hallucinations are actually super valuable. In fact, we kind of force them sometimes in that creative space. But if you're trying to create an optimization where your goal is efficiency, hitting the mark all the time, obviously we want to code those out. But that isn't really any different from any other efficiency play, when you're trying to build models that are really good at kicking the bad carrots off a production line, as an example.


John: And the other thing that you said here is a bit about feeling. I think that, for me, a difference that will always exist between humans and the machines, no matter how sophisticated they get, how advanced they get, is that humans are evolved organisms. That means that over a very long period of time, all of our ancestors have been trying to survive and reproduce, to take care of their families, to find food, and whatever. They had basic drives to survive that had to be present or they weren't gonna make it. So they were strongly selected for these drives, and the drives are deeply in there. The machines have no such history. They've been created; you can think of us as, like, a bootloader for the machines and for the intelligence, but their intelligence is missing that drive, and I don't see how it will ever get that drive. Sometimes we think about how many people ruin their lives by having an affair or whatever, you know what I mean? And why do they do it? They just destroy their lives, but they do it because they've got this drive in there that gets the better of them, or they have addictions, or whatever. Or they do great creative things. Humans have drives that are not always rational but are a deep part of the human experience, and the machines will never have those drives. They don't care about anything. They can pretend to care, they can simulate it very well, but deep down, they don't care, right?


Greg: Yeah, they don't have compassion. They don't have a sense of love. They can tell you the right words to say, but they don't have the actual intuition to be able to respond to all of the component pieces. And the other piece of the puzzle is that you have to remember that humans don't exist in the text world only, right? I'm sure eventually our technology will be full 3D and will have really good spatial awareness and things like that, but certainly today we don't have that at quite the level that's needed to be able to react well, not at the fingertips of most people anyway. And I think that level of understanding how to react in real time is something that is incredibly valuable, that quick intuition. I mean, we spend a lot of time at InsightsNow working on subconscious cueing, the implicit, which happens insanely fast in the brain. And emotions are that way, right? They zip through your brain super fast.

To be able to capture that... I mean, we're getting good associations and we can see a lot of patterns there, and every time we do it, it just opens your eyes and says: wow, there's a whole other level of knowledge that just does not exist out there in any of the data models. It just does not yet exist. Now, how does it get there? The only way to really get there, and this is my other big thing to remember about your LLMs and all that other stuff, is that a model is only as good as what's been put in there, and if you stop putting really great information in, it will stop evolving. So if it's going to learn, if it's actually going to get better, we have to have a thought process for putting stuff in as well. Think about it from the researcher's standpoint. I know there are a lot of people saying, oh my goodness, research is going away. Well, sort of; it's changing. I think it's pivoting, I don't think it's going away. You still have inputs and outputs, and the inputs are probably just going to shift and change in a little bit different way. But we've got to think seriously about what kind of information we want to feed these models, because there's stuff that they just don't have available to them yet.


John: I totally agree with that. Oh yeah, there are so many things to talk about here. Well, first off, on the topic of multimodal modeling, that's interesting, because it is now the case that you can get very good multimodal models: the Google models are multimodal, the OpenAI models are multimodal. I'm not entirely sure if Grok is multimodal on the input, but it is on the output; it will at least give you images, right? I'm not entirely sure how Grok is working. But with the multimodal models, I mean Google, you can share your screen with Gemini and get feedback in real time, and you can feed it video. I did this: we were applying for a grant for my startup, the grant was under review, and the meeting was recorded; it was on YouTube, OK? So I fed it to Gemini and I asked it questions like: when did the reviewers seem excited? When did they seem disappointed? And I got the topics that were being discussed at those times, right? It's all totally fascinating. But the chemical senses are entirely different; those senses are much more complex and much closer to that deeper level. I mean, taste and smell are like the earliest senses, right? You have small organisms that are just detecting chemical gradients, and that's like the origin of the chemical senses, and that information is very far away from these models, I think. I could be wrong, it could be some listener reaches out and says you're totally wrong, but my understanding is that there's still a big frontier here when it comes to modeling the chemical senses. So first off, let me ask you: do you have any particular knowledge in that area I should get out of you?
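
For anyone wanting to try the video workflow John describes, here is a rough sketch with the google-generativeai Python library; note that John used the Gemini app rather than the API, the file name is hypothetical, and model names and library details change quickly, so treat every line as an assumption.

```python
# Sketch of asking Gemini questions about a recorded meeting video.
# File name is hypothetical; model name and library surface may have changed.
import os
import time
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

video = genai.upload_file("grant_review_meeting.mp4")  # hypothetical file
while video.state.name == "PROCESSING":  # wait for the upload to be ingested
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
answer = model.generate_content([
    video,
    "At which timestamps do the reviewers seem excited, and at which do "
    "they seem disappointed? List the topics under discussion each time.",
])
print(answer.text)
```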


Greg: Yeah, I was gonna say, it's a big frontier; we don't really have a whole lot there yet.

What I can say is that we're certainly putting a lot of exciting databases together, because one of the things we're seeing is that those subconscious associations, a lot of them stay pretty true: once they're deep-rooted in a person's brain, they stay there for a really long time, because they're attached very heavily, especially when they're emotionally attached memories, both good and bad. And it often takes quite a while and quite a lot of repetition to change a person's mind about what something relates to. The simple example: the milk industry worked really hard to make sure that we knew calcium and milk went together and calcium was good for bones, right? Those associations, anybody our age growing up, that's what they know, and it would take a lot of work to change that subconscious association in people's minds. But we don't really have that out there and available, right? We have weak signals that maybe there are some possible associations. Some things that are super obvious, like calcium and milk, are so well published that, yeah, you can pretty well predict them no matter what kind of math you use. But the vast majority of things in the innovation space, where you're connecting specific ingredients, or specific keywords, or specific sensory experiences, to a specific benefit that a person gets in life, or a specific physiological thing that the product is going to do for you: those things are also there, and that, for us, is the real gem of putting together an innovation, because now you have that holistic picture. I don't have a team of marketers over here saying, hey, if I communicate it this way, I'll get a lot of trial, and a team of developers going, hey, I'm making something everybody likes. That's almost barely table stakes today, right? In order for you to really have a standout innovation today, you've got to be developing that product to deliver those in-the-moment motivations, which, quite honestly, often are not liking. A lot of times they're liking plus something else, for food products; for non-food products, almost everybody knows it's something else, right? Go into the perfume space: they've been working on not-liking for a long period of time. And I think the industry as a whole is just starting to really grasp that these subconscious cues are what's actually driving that huge amount of success, because you leverage things that are already deep-rooted in people's brains, or, if you are the forerunner, you create a new association that never was there before, and now your competitors can't touch that. So that's the piece I think is really fascinating about this particular space, because as we get that data and we start modeling against it, all of a sudden I think you're going to have new forms of information that teach you how to move innovation a lot faster.


John: Yeah, so much stuff here. Well, one thing I'm really getting out of this is that if we think of the LLM, or the neural network, as a simulator, then the publicly available data are not sufficient, and we're going to need to augment the data sources with historical data that have been pre-processed, and/or data that have been specifically collected for the purpose of filling in gaps. We do a lot of work on knowledge management at Aigora, and one thing that's actually very interesting is that LLMs are very good at planning. If you have a knowledge base and you have a desired state for the knowledge base, you can say: we would like our knowledge base to answer these sorts of questions, here's what we know, what are we missing? You will actually get very accurate answers to that, in terms of what sort of data are missing. So I can kind of see a project here where the desired end state is a more accurate simulator, which can then be used for ideation or for screening or whatever, OK? However, in order for that simulator to be a good simulator, you need more nuanced data than you can get from the publicly available data. So now you go to your historical data, you organize it, you ask what you're missing, that guides research, and you can fill in the blanks and have a better simulator. I think that's extremely interesting. Now, unfortunately, we are coming up on time; I could probably talk to you for hours. OK, just to wrap things up here: you were talking about research changing, and it's something I'm preaching to my team all the time. And there's coding; we're coding about 50 times faster now. The other night, we had this chatbot system that I'm making, and, I mean, I'm just a mathematician; I've done a lot of Fortran programming, I did a lot of R programming, but there are all these things I need to learn, and I built an excellent chatbot system in about 90 minutes that would have taken a team of people weeks, with unit tests and everything, beautiful actually. So I do think that coding is changing everything. I think the world is changing. But I think there's going to be a need for humans. So what would be your advice to a young person starting out? How do you see things evolving, and how should a young person set themselves up to succeed as this revolution happens?
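
A sketch of that gap-analysis step, reusing the hypothetical ask() helper from earlier; the target questions and the summary file are invented stand-ins.

```python
# Sketch of knowledge-base gap analysis: state the desired end state, show
# what exists, ask what's missing. Questions and file name are illustrative.
target_questions = [
    "Which usage moments drive repeat purchase for our beverage line?",
    "Which subconscious cues are associated with 'natural energy'?",
]

current_holdings = open("kb_summary.txt").read()  # summary of what we know

gaps = ask(
    "We would like our knowledge base to answer these questions:\n"
    + "\n".join(f"- {q}" for q in target_questions)
    + f"\n\nHere is what we currently know:\n{current_holdings}\n\n"
    "What data are we missing? Propose a specific study to fill each gap."
)
print(gaps)
```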


Greg: Yeah, don't think of it as a coding language. Think of it more as a communication language.

And if you were going to learn to communicate with a culture that you hadn't communicated with before, what would you have to do to get up to speed, right? It may be really hard; you may have to learn a completely different language. If that's the case, jump in and start learning it. You don't have to learn it all; the Duolingo level of learning will get you a long way. So get in, learn what the parameters are that it expects, right? Learn its keywords. What are the power words that you use? They're not necessarily the exact same words we use in the English language day in and day out; they're more custom words, words that have more value, more meaning.

So think of it as communication, think of it as learning; that's your starting point, so that you have a foundation. The second piece is: remember, you have the chance as a young person to completely change the way we've approached these problems and solved these problems in the past.

You're sitting at the forefront of the ability to do things that people used to need a very linear mindset to solve, and that now could be solved in seconds or minutes with an iterative mindset, or maybe a multimodal mindset, right? So I think the opportunity to ask, how might this be very powerful for me, is really where most people should start their journey if they haven't started yet.

Start with things that you love to do and just find out how you can enhance your own level. If I like to write short stories, how can it help me do that? Help me write better. I don't want it to write for me; if I tell it, write it for me, I'm going to go, man, that sucks. But if I say, I've got this idea and I want to make it even better than anything I've written before, then it's going to help me take my game to the next level.


John: OK, so prompting, I think: everyone should learn the basics of giving good prompts. And actually, a tip I would give everyone is to use a second model to formulate your prompts. If you have an important prompt, open Gemini or ChatGPT and say: look, write me an excellent prompt to accomplish this goal. It will write you a beautiful prompt, and then you go use that prompt. So that is learning to speak the language of the machines. And then the other thing is opening your mind, because I think so many things are possible now that were just dreams once upon a time. So try it, see what happens, allow yourself to be surprised. And then the third thing would be: use the models everywhere you can in your regular life, and just get used to them, get used to working with them.
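
John's second-model tip, sketched with the same hypothetical ask() helper (in practice the drafting call could go to a different model than the one that runs the prompt); the goal text is invented.

```python
# Sketch of meta-prompting: one call drafts the prompt, another runs it.
# Reuses the hypothetical ask() helper; the goal text is illustrative.
goal = "screen 30 new snack concepts for likely repeat-purchase appeal"

drafted_prompt = ask(
    f"Write me an excellent, detailed prompt to accomplish this goal: "
    f"{goal}. Return only the prompt itself."
)

result = ask(drafted_prompt)  # run the machine-written prompt
print(result)
```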


Greg: It's like having your cell phone on you, right? 20 years ago, nobody did, now everybody does.

This is the new set of tools that everybody's gonna have to learn.


John: Yeah, I mean, if you were trying to do business without a computer right now and you're writing letters, I mean, you can do it, but it's the same way. These are just tools, and we have to learn to use them.


Greg: The other thing I would say is: turn to the young, right? Remember that the younger the person is, the more creative they are. So don't be afraid to ask your kids, right? A kindergartener might have a better creative mind than you, because you've been so trained in life to try and accomplish certain goals, and you're trying to go, you know, paycheck to paycheck. So go talk to some young people, and you're going to find that their willingness to be creative is also very powerful.


John: Yeah, that's really true. My children are growing up in this crazy world where, you know, our Tesla is basically full self-driving at this point. My 5-year-old daughter's experience of life is that cars just drive themselves; that's just what she thinks, you know what I mean? And that's the world she's growing up in. I think there's this barrier we sometimes put on ourselves: we think something's impossible when actually it is possible now. And a young person doesn't know it's supposed to be impossible, so they're just gonna try it. So let us try. And I think that's very important.


Greg: Well, that's a great way to close. Let us try.


John: Yeah, that's great advice. All right, Greg, well, it's been a pleasure talking to you.


Greg: Yeah, you too.


John: Yeah, wonderful. OK, well, thanks so much.


Greg: Thank you.


John: Oh, but one more thing, Greg, how can people reach you?


Greg: How can people reach me, right? My LinkedIn is greg-stucky, or just hit our webpage, InsightsNow, and you'll be able to see my contact information right there on the About Us page.


John: OK, sounds great and we'll put that in the show notes.


That's it. I hope you enjoyed this conversation. If you did, please help us grow our audience by telling your friend about AigoraCast and leaving us a positive review on iTunes. Thanks.


 

That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!


Join our email list!

