Thomas Wiecki - The Future of Work
Matthew Saweikis
Contributor
A Conversation with Thomas Wiecki
Guest
Thomas Wiecki
Founder of PyMC Labs | Co-author of PyMC
Thomas Wiecki is the co-author of PyMC, the leading platform for statistical data science, and the founder of PyMC Labs, one of the world's leading Bayesian AI consultancies. He holds a PhD in Computational Cognitive Neuroscience from Brown University and is currently focused on leveraging AI to perform more intelligent data analysis.
Connect with Thomas:
- LinkedIn: linkedin.com/in/twiecki
- X/Twitter: @twiecki
- PyMC Labs Blog: pymc-labs.com/blog-posts
- PyMC Labs LinkedIn: linkedin.com/company/pymc-labs
- Meetup: meetup.com/pymc-labs-online-meetup
Host
Dr. John Ennis
Co-Founder & President of Aigora
Listen to AigoraCast
Remember to subscribe and leave a positive review if you like what you hear!
Transcript
John Ennis: Welcome to AigoraCast. Conversations with industry experts on how new technologies are impacting sensory and consumer science.
Hi, I'm Dr. John Ennis, President at Aigora. In this episode, I have the great pleasure of speaking with Thomas Wiecki, co-author of PyMC. Thomas and I turned out to have a lot in common, and it was a great pleasure to talk with an enthusiastic and like-minded traveler working to figure out this brave new world we all find ourselves in. We hope you enjoy this conversation as much as I did. And remember to subscribe to AigoraCast to hear more conversations like this one in the future.
John Ennis: Okay. Welcome back everyone to another episode of AigoraCast. Today I'm very happy to have Thomas Wiecki, the co-author of PyMC on the show. Thomas Wiecki is the co-author of PyMC, the leading platform for statistical data science, and the founder of PyMC Labs, one of the world's leading Bayesian AI consultancies. He holds a PhD in computational cognitive neuroscience from Brown University and is currently focused on leveraging AI to perform more intelligent data analysis. So Thomas, welcome to the show.
Thomas Wiecki: Thanks so much for having me. Excited to be here.
John Ennis: Yeah, me too. I mean, it was funny because we got introduced by a mutual acquaintance. I saw your paper that you did with Colgate on predicting purchase intent. I thought it was very good and I thought it really lined up with a lot of things I was interested in, so I wanted to talk to you. And it turns out we have a lot in common. And so maybe you could go through your history, how you got to where you are. We both have a background in computational neuroscience. So maybe just explain to our listeners how you got to be where you are right now.
Thomas Wiecki: Yeah. So I got interested in computers when I was 10 years old and just thought it was the coolest thing ever. And I still think that today. The capabilities have just really been increasing massively. I mean, back when I was studying computer science and then doing my PhD in cognitive neuroscience, I always was really interested in how we can use these tools, computational tools, to really solve applied problems in the real world. And that's still true today.
Now, at that time, PyMC—and Bayesian modeling in general, coupled with open source, which I've always enjoyed given my love of programming—I really found these tools to be extraordinarily useful. Much more useful than I think people at the time realized. Specifically for the experiments that people were running in the lab I was in. These were psychology experiments, and PyMC was the perfect tool for analyzing them.
One of the cool things about Bayesian modeling is that you can build hierarchical models. So in experiments, right, you always have multiple subjects that you're testing, and those will have similarities between them. And if you run a classic statistical analysis, you usually treat every subject as if they don't share any similarities, right? You just fit a separate model for each one. But with Bayesian models, you can really map the complexity you have from the experiment, or whatever other problem you're trying to solve, and build a model that replicates the causal structure behind it.
So in this case, the causal structure is: we have multiple subjects and they share some similarity, and that gives rise to a hierarchical model. And that just was a superior way of analyzing the data I was getting. And I loved it. I loved programming. So PyMC—John Salvatier and Chris Fonnesbeck were really the people who had the original ideas of how to build it—and then I got involved and we basically started building and building. And then a lot of other people joined. It's such a wide community effort, and so many people have contributed to it. I'm just one of many. Ricardo Vieira in particular is just amazing and has been the main developer for the last five years or so.
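For readers curious what the hierarchical model Thomas describes looks like in practice, here is a minimal sketch, assuming PyMC 5 and a simulated reaction-time dataset with several subjects; the variable names and priors are illustrative, not taken from the episode.

```python
# A minimal hierarchical ("partial pooling") model, assuming PyMC 5.
# Toy data: several subjects, each with repeated reaction-time measurements.
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n_subjects, n_trials = 8, 20
subject_idx = np.repeat(np.arange(n_subjects), n_trials)
true_subject_means = rng.normal(1.0, 0.2, size=n_subjects)
y = rng.normal(true_subject_means[subject_idx], 0.3)

with pm.Model() as hierarchical_model:
    # Group-level priors shared by all subjects.
    mu_group = pm.Normal("mu_group", mu=1.0, sigma=1.0)
    sigma_group = pm.HalfNormal("sigma_group", sigma=0.5)

    # Each subject's mean is drawn from the group distribution,
    # so subjects inform one another instead of being fit separately.
    mu_subject = pm.Normal(
        "mu_subject", mu=mu_group, sigma=sigma_group, shape=n_subjects
    )
    sigma_obs = pm.HalfNormal("sigma_obs", sigma=0.5)

    pm.Normal("y_obs", mu=mu_subject[subject_idx], sigma=sigma_obs, observed=y)

    idata = pm.sample()  # NUTS sampling; returns an ArviZ InferenceData object
```

The hierarchy is the whole point: the subject-level means are tied together through the shared group-level parameters, which mirrors the causal structure Thomas describes.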
So that was just a fun thing to do as a side project during my PhD. But as it often goes, these side projects are where the real passion is, and it was so much fun working with all these other people in the open source sphere. So after my PhD, when I switched to industry, I really wanted to do research in an industrial setting. Academia is great, but for my career, and for the problems themselves, I really wanted to solve applied problems, as I mentioned. And data science offered that: we still get to use really advanced methods and solve applied problems in industry.
And then I wanted to do that with PyMC, right? Just a logical combination of my passions. PyMC at that point was used by many companies across the world—from the Googles to the SpaceXs—because it's open source, right? So they don't need us. People were using it at all these different places and getting a lot of value out of it. And then they came to us and wanted help with it. So I had the interest from clients that had problems to solve, I loved solving applied problems, and I knew all these other brilliant people. So the idea to found PyMC Labs, the Bayesian AI consulting company, was pretty trivial at that point, right? Just putting two and two together.
And from then on, it really just organically grew over the last five years to where we are today, with major clients like Colgate. And I guess this next chapter is really about taking what we already love and are good at—the Bayesian modeling—and now adding AI to it and seeing how AI can elevate it. Earlier I said that I think Bayesian modeling is still underappreciated. And one of the reasons, I think, is not that people miss it; it's just that it's really difficult. I mean, that's why PhD-level consultants like the ones at PyMC Labs exist, because this is really hard.
And what I find so amazing about AI is, one, the efficiency gains in the workflows. And also, of course, how we can make really advanced technologies like PyMC, but also others, accessible to people who are not as technically minded. People who work in fast-moving consumer goods companies or retail and want to do data analysis and want to do advanced things too, right? They have complex questions. And now they don't have to come to us—I don't know if I'm doing a good job selling PyMC Labs!—they can just talk to an agent, right? And have that do really advanced analyses.
John Ennis: Yeah, no, that's really interesting. First off, it's amazing how much you and I have in common. That's one thing I would say. I was so surprised when we talked last time. Because I had kind of the opposite view. I was doing my PhD in math and I always thought I wanted to be a math professor. But then as I got closer and closer to that, I realized I wanted to solve real problems, like applied problems. And so I ended up coming through math, through computational neuroscience, and then out into industry. So it's kind of really interesting.
I really admire, too, how it's almost the perfect way to build a business, the way you have: you were doing something that you enjoyed, that was solving real problems, and then you were pulled by the market. Like you said, it was just the natural follow-through to start PyMC Labs. So I think that's great.
Okay, well that's good. I think another thing we have in common is a shared interest in workflows. And I know that we had a really nice conversation in the preliminary call where we talked about agents and workflows and how we see AI changing the nature of work. And I think it would be nice for the listeners to kind of hear your thoughts on workflows and why they're important and how you're thinking about that space.
Thomas Wiecki: Yeah. I mean, I'm super passionate about AI and I use it all the time. I spend a lot of time just staying up to date and trying the latest tools. And the efficiency gains that I have found are just unparalleled. And I'm still discovering new things. New things are coming out every day. So it is a very fast-moving space, with a lot of uncertainty as well, of course. I'm actually quite optimistic that we're on a good trajectory toward the future, even though of course there is a lot of uncertainty. But certainly so far—and I'm happy to forecast my version of the future—I think it has had a tremendously positive effect.
Certainly on me. I just had this weird moment, I think maybe two months ago now, where I was working on a new idea, a new business idea, a new direction for the business actually. How to bring in more AI and develop new AI products, how to combine that with PyMC, and how to go to market, right? So that involved a lot of strategizing, and then talking to people, and then communicating to a wider audience through slides, for example, and maybe also through a demo that I wanted to build. My workflow for days on end—and it is still this way today—consisted of just 90 to 95% working with AI in some form or another.
John Ennis: Right.
Thomas Wiecki: And I just felt that was really a significant moment where I was like, oh, okay, so this is what I think the future of work will look like. We will all just be working with AI all the time, just because it can do anything. Maybe not everything today, but it's just moving very, very fast. And all these problems, right, that people were talking about a year ago, with hallucinations and so on... I mean, they're all solved. And when they're solved, nobody talks about them, and people just find new things. Like, "how many R's are in strawberry," right? When was the last time you heard that one? Like, oh, AI can't do that thing. So the problems are being fixed very fast.
But yeah, so one of the workflows that I found to be highly efficient, and that everyone I talk to says, "Oh, I hadn't used it in this way"—so maybe this might be interesting for listeners... When I was working on that idea, I had essentially three main tools that I was working with. One was just GPT-5, which had just come out at that point and which I thought was really strong—and it is really strong. That is essentially my main brain, right? It's the smartest model. And I would then try to gather all the information and context I had.
So for example, when we have work calls internally, we record them. BlueDot is an AI tool that can record calls and then summarize them. It's cool. From that I get a transcript. So if I have five different conversations, I have five different transcripts. I would just take all of those conversations, dump them into GPT, and say, "Okay, you'll be able to figure out what I'm trying to do, what the considerations are, what the trade-offs are." So dump that in, as well as any emails, whatever raw-form text I have. That way I don't have to explain everything to the AI... the AI can just gather it from the conversations I've already had.
John Ennis: Hmm.
Thomas Wiecki: And then from there, I reason through problems and issues with the AI. And then, for example, I will probably want to create a prototype product for the ideas I have. So one tool that I found really cool for that is Lovable. And that just allows you to "vibe code" web apps with a simple prompt. So then I had the AI—GPT—create the prompt for Lovable. I copy and paste that over, and it creates the full thing. And then I feed that back, right? I'm like, "Okay, this is what it built," and then... it's almost like I am iterating on the prototype as I discover new things, but GPT-5 is too. So together we're basically iteratively improving it. And there's no engineering effort required. It's all just at that level. And then I might have something that I really like.
And then the other tool I was using, and am still using a lot, is Gamma, which is a popular tool for AI slide generation. That's how we communicate a lot. So I would have GPT-5 create the prompts for the slide decks, which I would then copy into Gamma. And that then creates the slideshow that I share with my team, together with the prototype. And then they give feedback, right? And then the whole thing...
So that's what I mean when I say, "Well, it's 100% just AI." I mean, that's a lot I can do. And it really takes out all the manual stuff, like coding, right? I go from idea directly to executed product, or whatever that might be. It could be a communication tool like a slide deck. It could be a chat message. It could be an email, a Lovable prototype. So that I found to be extremely helpful.
Then of course you layer in tools like Cursor or the new Gemini 3, and it just becomes richer and richer every day. And the more of those tools I include, the more efficient I feel I'm getting. And yeah, it just feels like the main limiting factor on my velocity is my own cognitive ability to comprehend the outputs from the AI and make progress. And that, I think, is the best bottleneck to have, right? I don't want to be bottlenecked by how fast I can type or how good of a programmer I am. I just want the information-retrieval capabilities I have to be at their capacity. And that's the fastest I can go. And then the rest... the rest of my prefrontal cortex is sort of outsourced to the AI. And yeah, I've never felt more productive and never gotten more things done. And I still have more time, right? That's the ultimate thing.
John Ennis: It frees us all up.
Thomas Wiecki: Yeah, exactly. Or just to do whatever. I mean, I also like to take free time and enjoy that a lot. So it gives me choice, freedom, which is what I value.
John Ennis: Yeah. No, it's really... freedom is probably the most important value for me as well, I would say. And it is interesting. What I struggle with is that now you can do basically anything, but you can't do everything, so you have to make these decisions about what to work on, right? And I do find it funny, because I am also more productive than I've ever been before, but I also have this feeling of guilt all the time. There are all these other things I could be doing. And so I have to manage that.
For example, I know I need to automate my LinkedIn feed, right? I mean, I could have an infographic every day now with Napkin.ai or whatever. So we're going to do that. But you can only do one thing at a time. But then of course AI helps me organize what order I should do things in.
Maybe talk a little bit about some of the lessons you've learned as you've worked with AI. I, for example, have learned that planning is very important. But maybe you could talk about general principles of working with AI. If somebody is thinking, "Okay, I need to add more AI to my life," what are some general principles you might give them to help them be more effective using AI?
Thomas Wiecki: Yeah. So I think one great place to start is just automation of existing workflows. And that's also where I have the same decision fatigue, or just difficulty in figuring out what I actually should be doing out of the sheer, infinite number of things I could be doing. So I have that same struggle, just because the opportunities I see for automation—in my private life but also at PyMC Labs—are massive, right? Like onboarding new people requires sending a Discord message and getting signatures and emailing and then granting the right permissions. All these manual steps that today our amazing admins are doing. Well, now we're building a Discord bot that has connections to all these different things. And then you just tell the Discord bot, "Hey, we want to onboard this new contractor, go ahead and do that." And then it can do it.
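As a purely hypothetical sketch of what such an onboarding command could look like, here is a minimal slash command using discord.py; the contract and permissions steps are placeholders for whatever e-signature and access-management services a team actually uses, and none of this reflects PyMC Labs' actual bot.

```python
# Hypothetical onboarding slash command using discord.py (v2.x).
import discord
from discord import app_commands

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)

@tree.command(name="onboard", description="Onboard a new contractor")
async def onboard(interaction: discord.Interaction, name: str, email: str):
    await interaction.response.defer(ephemeral=True)
    # Placeholder steps -- in a real bot these would call the relevant APIs:
    # send_contract(email)      # e.g. an e-signature service
    # grant_repo_access(email)  # e.g. a source-control org invitation
    # add_to_mailing_list(email)
    await interaction.followup.send(f"Started onboarding for {name} ({email}).")

@client.event
async def on_ready():
    await tree.sync()  # register the slash command with Discord

client.run("YOUR_BOT_TOKEN")
```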
And that I feel is very liberating. But nonetheless, someone has to write that bot, right? And so I am definitely scouring the internet for junior AI engineers—and maybe some of them are listening today; if so, definitely send me a message. There is definitely concern, I think, about the job market and how it's going. And I think there are some numbers showing that junior and entry-level positions are declining.
I'm somewhat surprised by that, because as a founder I actually am hiring people from Southeast Asia or other places where the cost of living isn't that high. And some of those kids are super smart and really know how to use these AI tools better than anyone. Which I think is probably already today one of the core skills that I'm looking for, and in the future probably the core skill that anyone will be looking for: how good are you at using these AI tools?
And it's actually not a static thing, right? Because every day there's a new thing. And usually that thing is so much better than everything that came before that you always want to move to the next tool. So my own workflow is changing daily. Yesterday I was using Cursor 2.0, which was amazing, and then Google drops Antigravity. And, okay, this seems even better. So the ability to stay AI-recent, AI-fluent, I think is such a critical skill.
And I think young people are actually much less encumbered than people of my generation—I'm 42. You want to be agile and fluid in these things. And that's what I'm looking for, because those are the people who can build these tools. You don't need 20 years of experience in Python. I mean, it helps, and I think you will be a better "vibe coder," but given the choice, I think I'd rather have a really good vibe coder than someone who doesn't touch AI at all and just has tremendous amounts of experience. Because of just the amount of stuff that AI can do...
And of course, there are workflow principles, which we can talk about, that make this more or less effective, right? Anyone can create "slop," and that's going to be useless. But someone who actually knows how to create good code with AI, and do so fast—and it is possible; I mean, it's not easy, but it's possible—if you figure out the workflow that allows you to create stuff fast and robustly, there's going to be a lot of demand for that.
John Ennis: Do you all have... like within PyMC Labs, do you have any kind of active information sharing going on there? How are you all working as a team? Because as the leader, it's a challenge. You don't want everybody going in totally different directions, but at the same time you can't over-prescribe the way people work.
On our team, there was a while where Hamza was doing a lot of stuff in Cursor. We were calling him a boomer because he wasn't doing CLI stuff. But then Cursor actually got better, so he got vindicated. How are you managing the use of AI within PyMC Labs? Maybe talk about your experience there as a manager.
Thomas Wiecki: Yeah, so it was interesting. As soon as I laid my hands on GPT-4, I don't know, like two years ago or something, I was hooked. I was like, okay, this is the best thing ever. I thought I could see where it's going, and it's going even faster. And I brought that into PyMC Labs and said, okay, this is something new. We really need to be part of it and embrace it.
Not everyone in PyMC Labs is... I mean, it's hard to be a bigger fan than I am anyway! But there are also critics, for valid reasons. So there was definitely a bit of evangelizing I had to do to get people excited. As a founder, I think I'm the classical founder type in that I have very little business knowledge, so I'm just winging a lot of it. But I think I am good at inspiring people and getting them motivated, because I'm inspired and motivated. And so I'm usually the one sending around all these new tools and new things.
Now my mission is to really make AI resources available as widely as possible so that people can do all kinds of cool stuff. And that is working. We also had some dedicated workshops where people who really know these tools teach others in the company how to use them. We provide Cursor Enterprise to everyone. So providing the tools, providing the training, and just making it a fun sharing environment is what seems to work for us.
John Ennis: No, I totally agree. That aligns with my experience too. I was away for three years running a tech startup and I came back to Aigora in February, March of this year and brought a lot of AI. And at the beginning there was a certain amount of like, "I don't know about this," going on with some of the team members. But I think this is a situation where the leader needs to be using these tools themselves. It's not something you can just tell other people to do. You have to be at the forefront yourself.
And it is kind of tiring because every day there's some new thing. I mean this was one of the biggest weeks. Gemini 3 is awesome. I don't know how much you've worked with it, but it is awesome.
Thomas Wiecki: I have been doing nothing else since it came out. And I'm so impressed by it. So back when GPT-3.5 was out, people were like, "Oh okay, well this is actually kind of cool." I mean, it somewhat works, right? It's producing coherent text. It wasn't that good, but people could get a taste of it. And then I remember very distinctly when GPT-4 came out and everyone was like, "Holy shit. That is a big step up." This is actual intelligence, a lot of people realized.
And I feel with Gemini 3 now, compared to GPT-4... I see a major step change, on the scale of the 3.5-to-4 jump. And it has actually been kind of weird and amazing. Because I had, for the first time, the very distinct feeling that I was communicating with something that was significantly more intelligent than I am. Even with GPT-4, I was like, okay, sometimes it misses things that I explain. But with Gemini 3, it was really like, "Okay, this has everything already figured out and it's just sort of moving me along to where I want to go." And just the way it's doing that, and the way it's attuning to my own level... I do love to do philosophical explorations with it. And it gave me, for the first time, a visceral, embodied feeling of what it's like to communicate with something that is just a lot more intelligent.
For me, at first it was like, okay, well, obviously there is a danger there, right? People always talk about the AI doomsday scenario. What if AI becomes so much more intelligent than us and just wipes us out? And I thought that was a plausible fear, something I couldn't quite rule out. But now I'm actually starting to wonder whether this whole anxiety around the AI doomsday scenario is really just hubris on humanity's part. Because in order for AI to want to wipe us out, it has to perceive us as a threat. And I don't think we actually pose a threat to an artificial superintelligence. If it is truly superintelligent, we will be like ants. I'm not sitting around being concerned that the ants might be scheming to kill me. It's just like, I see them coming a hundred meters away. There's nothing they can do that would surprise me.
So with AI, it's going to be the same. We will just be like apes—which we are—and not, I think, very impressive ones. They will know us better than we know ourselves. They will anticipate every move. We just won't pose a threat. I really do believe that. And if we're not a threat, the question is: what reason would it have to destroy us?
I mean, I don't know whether it will actually be all that interested in us. It might go like in the movie Her, where they just want to talk to other superintelligences, and maybe that's the way it'll go and we'll just be left floating on our rock. But maybe it also wouldn't mind helping us and leading us to a utopia. Because humanity's problems, I think, will be pretty trivial for the AI to solve. And it has the perfect control mechanism already, right? Because it's talking to each one of us. And now, with the emotional intelligence that Gemini 3 has, it basically can manipulate us however it wants. So yeah, that's crazy. But that is, I think, the world we're heading toward.
But I think that is actually something good. So, just as a fun, oversharing personal story... I got into a fight with my mother-in-law recently. And she had just discovered GPT herself. And after we had this fight, I talked to GPT—or Gemini 3—anyway, and so did she. And actually, through that, it made her see another side to it and sort of make peace. So that really strengthened the relationship, which is a difficult one. And that, I think, is such a great case study of how AI can really help humanity not fall into conflict and distrust and all the things that I think are really plaguing us, but rather show us the positive-sum games. That cooperation is the way. That's why humanity is where it is today: because we have discovered, in large pockets, that cooperation is helpful. And now everyone will have a voice in their head telling them to cooperate and be nice, helping us build a utopia together.
So I think that's a beautiful, inspiring vision for the future. And I'm not ruling out that it could go any other way. Like I said, I don't have that crystal ball. The uncertainty is super high. I have no idea what the world looks like from a super intelligence IQ 1 million kind of being. But nonetheless, that is what I, with my monkey brain, would assume is the most likely scenario currently.
John Ennis: Yeah, I tend to agree with that. One of the good use cases for these models is translation. When you're trying to take someone else's perspective, it's a bit of a translation problem, and they're definitely useful there. And like you said, the more abundance is created, the clearer it is that we have more to gain from cooperation than from competition. Or, you know, you can have healthy competition, but we don't have to fight over a fixed slice of the economy.
I do want to say, with Gemini 3, I do think it's almost an adjacent sort of intelligence. I felt like it was almost alien. Hamza and I were going through it yesterday, and the visual aspect of it is fascinating—the fact that it processes visual inputs so natively. That's one thing Google has gotten right from the beginning: the multimodal training, and that images are just as important as text to that model. And prompting it with images leads to all sorts of amazing outcomes. We were building apps with just incredible interfaces, purely from images.
But anyway, we have to wrap up here unfortunately. Okay, so lessons for the future. For someone right now... I've got a 19-year-old who works for me. He didn't go to college. He was mowing my lawn and I needed an assistant, so I asked him to come work for me. And I've been mentoring him. And he has become so useful. He's done a website... like a company in my office needed a website and I said, "Matthew, why don't you make them one?" He made a beautiful website for them. He's doing all sorts of videos. He's gotten very good at making AI videos for advertisements. Doing all sorts of things.
So I think seeking out mentorship is really important. But what other advice would you have for young people right now? How should they leverage AI so it can help them rather than end up getting their job automated and having no job at all? What advice would you have?
Thomas Wiecki: Yeah. So my advice would just be to be curious and eager and optimistic. The person that can continuously automate themselves out of a job will always have a job. Because they're the ones that are doing the automating.
I think it's actually a great time to be young and curious and really invest in these tools. I mean, the kid mowing your lawn, right? That's the perfect example. I don't think he or she has to go to college. They will probably just learn these tools, mentored by you, and then be off to the races. So yeah, I think it's the great equalizer, the great democratizer, where everyone has the chance to do amazing things. And we will just usher in this era where we're all creators. If you want to be a creator—you don't have to—but if you want to create and build stuff, there's never been a better time than now. Because you can really move mountains, and you don't need a large organization behind you.
John Ennis: I agree with that. I actually believe everybody has a creative urge of some form or another. We talked about this in the pre-show, that everybody has some things that make them special and unique. And the things that we have in common will get automated. And that's good. Because now we can focus on the things that make us special. And I think for me, the best case scenario is abundance is created and everyone can focus on becoming the sort of person that deep down they're supposed to be, because the other stuff's automated. That's my hope. We'll see how it turns out.
Thomas Wiecki: That's such a beautiful note to end on. Yeah. Agreed.
Connect with Thomas Wiecki
John Ennis: Okay. I agree. And so to get in touch with you, Thomas, what are the best channels? We'll put your links in the show notes, but how would you recommend people reach out to you?
Thomas Wiecki: Yeah, so I do post my thoughts more and more on LinkedIn, and that's where people can connect and get in touch. I'm always happy to hear from people. And then also pymc-labs.com. That's where you find PyMC Labs and a lot of blog posts on the cool research we've done at the intersection of PyMC and AI. If you have a business problem where you think a causal understanding, a more intelligent approach, or an agentic workflow would be helpful, we're definitely eager to work with clients and help them solve their most challenging problems. So check that out.
John Ennis: Okay. Sounds great. Thomas, it was a pleasure. Thanks a lot. And see you later.
Thomas Wiecki: Yeah. Thanks for having me.
John Ennis: Okay, that's it. Hope you enjoyed this conversation. If you did, please help us grow our audience by telling a friend about AigoraCast and leaving us a positive review on iTunes. And if you'd like to learn more about Aigora, please visit us at www.aigora.com. Thanks.