Episode Transcript
[00:00:00] Speaker A: We are advancing AI capabilities faster than anyone realized we would be able to, and frankly, faster than I think we are understanding their implications.
[00:00:10] Speaker B: Hi, I'm Stuart Papp, founder of DNA, and welcome to Stand Up to Stand Out, the podcast. Communication has changed my life and it will absolutely change yours, because breakthrough innovating deserves breakthrough communicating.
Every episode we bring you industry insiders and subject matter experts, and you can learn from my decades of experience to get the most practical and tactical advice that you can put to work. Now let's dive into today's show.
Anoushka, it's so good to see you, and thank you for joining me on Beyond the Lab, because we're really going to get into the science and we're going to look at healthcare: your experience with brain scans and everything all the way through up to today. But first, just welcome, welcome to the show.
[00:00:59] Speaker A: I appreciate it so much. Thank you for having me.
[00:01:03] Speaker B: We got the band back together and we're gonna have some fun with this. So one of the things, I mean, you're wildly impressive on so many levels, and you have a dual PhD out of Rice, but I really want to start with something that I think is directly relevant, not only around AI and brain scanning, but the intersection of the brain and early-days artificial intelligence. This is back in 2017. You were helping to use artificial intelligence, in a different form than we currently know it, to predict cardiovascular disease. Can you tell me about that research and how you were using AI to help predict cardiovascular disease?
[00:01:44] Speaker A: Yeah. Okay, so let me ask you a question.
[00:01:47] Speaker B: Ooh.
[00:01:47] Speaker A: You have two people who are faced with a stressful situation.
[00:01:52] Speaker B: Yeah.
[00:01:52] Speaker A: And one person does their best to appraise it in this objective, third party observer way.
And another person appraises it as scary, anxiety inducing, and extremely close to them in proximity and time. Who do you think is going to fare better?
[00:02:07] Speaker B: Oh, I thought you were going to say which one is me, because I'm the second: the terrified, hiding one. No. So if I look at the neutral third party kind of having some distance versus the person who's a little closer to it, I mean, just from what I know, from what I've learned in meditation and mindfulness and psychology, person A would probably do better, because they can at least sort of observe themselves. They have self-awareness. That's my answer.
[00:02:35] Speaker A: That's exactly right. And it seems kind of obvious, but how we appraise and describe or talk about these adverse situations is directly related to our well-being, biologically and mentally.
So I'm sure we'll get into this in more detail later on in our conversation. But when I was in grad school, I was really thinking about this connection between brain sciences, health, and emotion, and it hit me that language isn't just communication. It's like a window into our cognitive processes. And these cognitive processes really have biological consequences. It's just another cool way to think about what your brain is doing.
[00:03:16] Speaker B: So let me break that down. So what you're saying is language is obviously very complex, but you were saying that embedded in the communication are indicators that could predict what's happening inside of you biologically or, you know, health wise. Is that correct?
[00:03:35] Speaker A: That's right.
[00:03:35] Speaker B: Okay. And I feel like there's so many ways that could go. Like, you could probably launch a million or a billion different research projects just from that. And that's really fascinating. And I'm curious why that particular intersection was so interesting to you. I want to go back to who got you into science and what you studied in undergrad before that, but why that intersection?
[00:04:01] Speaker A: So I had a plethora of research experiences when I started college, when I joined Tufts in 2012.
Gosh, it's been a hot minute, but it feels like yesterday to me.
[00:04:17] Speaker B: Fair enough. Okay, good. Yes.
[00:04:19] Speaker A: I took a neuroscience class. This was really like a tribute to a physics teacher that I had in high school, because she was the one who kind of encouraged me to pursue neuroscience.
And I joined the Emotion, Brain, and Behavior Lab that was led by Dr. Heather Urry. And this lab was really the trigger, I would say, to my interest in the relationship between brain science and emotion, which this lab specifically studied with functional magnetic resonance imaging. That's basically a fancy way of saying you want to visualize the brain and what it's doing, and the connection between brain activity and emotion. And I was really interested in this language aspect. And as an added bonus, my now husband was pursuing cardiology, and I think I was kind of subconsciously influenced by him as well.
So I was really interested in thinking not just about the brain and emotion via language, but also the applications to cardiovascular health.
[00:05:13] Speaker B: So from there, I want to get to the Ruth L. Kirschstein Award.
So walk me through what that is and what your hypothesis was, and then where did that lead to?
[00:05:24] Speaker A: Yeah. So one of the things I did in grad school was apply for the Ruth L. Kirschstein National Research Service Award. It's basically this grant worth hundreds of thousands of dollars. And it was in pursuit of a pretty unconventional research question.
So the grant that I wrote was about the relationship between cognitive processes, indexed via natural language, and cardiovascular disease, whether those processes were associated with less disease. So let's just break it down into simple terms.
So there is a pretty well known, well studied process, I would say, called cognitive reappraisal. And that is just a way of saying you're changing the way you think about a stressful situation. And so my theory was that if you incorporated lexical shifts in your language that were reflective of cognitive reappraisal when discussing or faced with a stressful situation, you would have less inflammation. And we were measuring inflammation by doing blood draws and blood sampling of our participants.
So, for example, in very simple terms, if you used fewer first person pronouns because that means you're farther away from the stressful situation, something as simple as that, we would expect to see an association with lower inflammatory markers.
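The first-person-pronoun idea described above can be sketched in a few lines. This is a hypothetical toy, not the study's actual method; the real work used trained language models, and the word list and scoring here are invented purely for illustration:

```python
import re

# Toy "linguistic distancing" score: the fewer first-person singular
# pronouns a description uses, the more distanced (objective) it is.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def distancing_score(text: str) -> float:
    """Return a 0..1 score; higher means fewer first-person pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 1.0
    first_person = sum(1 for w in words if w in FIRST_PERSON)
    return 1.0 - first_person / len(words)

immersed = "I was so scared and I felt my heart racing."
distanced = "The situation was stressful, but it resolved over time."
# The distanced description scores higher than the immersed one.
assert distancing_score(distanced) > distancing_score(immersed)
```

A crude ratio like this is only a proxy; the appeal of the approach is that even such simple lexical signals can be tested against biomarkers.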
[00:06:39] Speaker B: So what that means is, the less you talked about, like, "I" or "I am dealing with," the more you were actually indicating that you were less prone to inflammation or other negative impacts.
[00:06:55] Speaker A: That's exactly right.
[00:06:56] Speaker B: What about the opposite then, with people who were using phrases like "it appears," getting some distance? Was that also a signal, or what did your research reveal?
[00:07:05] Speaker A: So in our specific prompts related to the grant, we were asking participants to talk about the loss of a spouse. So a very, very stressful situation.
And losing a spouse is like one of the number one stressors if you're looking at a stress chart.
And when describing this, we would look at their biomarkers after they described these situations. So we were able to look at the association between stress in these individuals and inflammatory markers, and also run their transcriptions of what they said when describing the loss of their spouse through our algorithms, to deduce a score that is reflective of cognitive reappraisal. We could then look at the association between cognitive reappraisal and our biomarkers.
[00:07:58] Speaker B: So I'm curious, this was back preceding ChatGPT and all of this. I think you used a technology called BERT.
What does BERT stand for again?
[00:08:08] Speaker A: So BERT stands for Bidirectional Encoder Representations from Transformers.
I can explain what that means in English. It's basically a powerful tool that can consume large amounts of data, and it can look bidirectionally. What that means is it processes language the way humans process English, by understanding the full context of a sentence. It can differentiate between a steel can versus the verb can.
And it came out before GPT-3 and ChatGPT, which is now widely known, of course, but I would say it's still relevant. I think it's really useful in the context of foreign languages. So the model that I built was off of BERT, and what we did was we processed our participants' data, their language, their transcriptions, in these BERT models, and we were able to train our models to identify language that was more objective and distant.
And then what we found, in relation to health outcomes, was that people who had higher scores of objectivity also had healthier outcomes.
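The "steel can versus the verb can" example hints at why looking at both sides of a word matters. Below is a toy, rule-based sketch of that intuition; BERT learns such distinctions from data via attention over the whole sentence, and these hand-written rules are purely illustrative:

```python
def guess_can_sense(tokens: list[str], i: int) -> str:
    """Classify tokens[i] == 'can' as 'noun' or 'verb' using both sides."""
    left = tokens[i - 1] if i > 0 else ""
    right = tokens[i + 1] if i + 1 < len(tokens) else ""
    # Looking LEFT: a determiner or material word before "can" suggests a noun.
    if left in {"a", "the", "steel", "tin"}:
        return "noun"
    # Looking RIGHT: a following content word suggests the modal verb "can".
    if right and right not in {"of", "."}:
        return "verb"
    return "noun"

assert guess_can_sense("a steel can of beans".split(), 2) == "noun"
assert guess_can_sense("you can open it".split(), 1) == "verb"
```

A strictly left-to-right model reading "a steel can..." could still get this one, but in general having both directions available, as BERT does, makes disambiguation far easier.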
[00:09:18] Speaker B: What I'm curious about, and I think it's really interesting, there's so many ways you could go with that. The technology's evolved quite significantly, and I'm assuming that you were looking at transcripts and not tone, delivery, cadence, inflection, any of that. Is that correct?
[00:09:38] Speaker A: That's right. And you bring up a good point. That is a gap in our research, and that is something that I hope people...
[00:09:46] Speaker B: Look, let's go back. But I find it so interesting because, you know, my business is communication, and I talk a lot about even inflection. If you take a very simple phrase, where you inflect the words or the phrase will give it different meanings. Right? Like, HE was late; he WAS late; he was LATE. You just have different meanings that come out. And I wonder how this would go if you were able to capture not only the word itself, but also the intonation, the delivery, the...
[00:10:19] Speaker A: The meaning behind the words, even things like confidence. I think all those things play a role. And I think one of the challenges we have with these algorithms and these types of models is that it's really hard to capture that in the form of written data. Right. And that is a major downfall of some of these tools: until we figure out a way to accurately capture confidence, intonation, and where emphasis was made when something was being delivered, we're going to miss out on those constructs.
[00:10:57] Speaker B: Well, it reminds me of a phrase that in some parts of the United States people will say: bless your heart. But what that really is, that's like, oh, you're insulting me. Like, you think that I'm really pathetic. But if you look at the transcript, it's like, wow, that person was so generous. They said, bless your heart. They must be wishing the best. And so there's so much nuance there. You know, it's funny, because my kids are starting to pick up on sarcasm and irony, but there's a learning curve, and language is just so nuanced. Anyway, really fascinating. I want to jump back, and don't quote me on the chronology here, but I want to talk about problem structuring. Because in between your undergrad and your graduate degrees (multiple degrees, if you're not paying attention, everyone, there was an S at the end of that), you spent a few years at Putnam. I want to talk about that experience, but I also want to get to one of the things that you talked about, which is problem framing, or problem structuring. So maybe you could talk about the two years in between your undergrad and grad school, what you were working on specifically around healthcare, and then just that sort of meta purpose of looking at how problems are structured.
[00:12:11] Speaker A: Yeah, yeah. So I'll actually go one step a little bit further just to give a little bit of context. So throughout college I had gained these very useful skills. So I was in Dr. Mitchell's social cognitive neuroscience lab at Harvard.
I had done a computational chemistry internship at a pharma company. And through these experiences I learned Python and C. So I was getting equipped with these like technical chops. And because of that I knew I wanted to do a PhD at some point. Like I knew that I wanted to build that skill set out further.
However, I also knew I wanted to get crisp on some fundamental skills like how to structure a large scale problem.
And one downside of academia is that it's really hard to get exposure to these real-world problems. You need to go out into the world and do something like consulting to get that experience.
So I really owe all of that skill development to Putnam, because that was the opportunity for me to learn how to determine what the right problem is to solve and then develop a structure for how to execute. This was important for me also because I knew I was going to go back into academia. But even when I was in academia, I knew I always wanted to be considering what the real-world implications of my work would be when I would inevitably go back to industry.
Large-scale problem solving can be rigid or malleable depending on the situation that you're in. But what I like to use is the scientific method. It is structured enough but flexible enough. It's really just breaking it down: having a very clear goal, making sure you're avoiding any scope creep, making sure that you're aligned with everyone that you're working with on what it is that you're trying to achieve, and solving the right problem. Then from there, having some kind of hypothesis and a research method, figuring out, if it's data that needs to be collected, what you need to be collecting to answer that question, and then taking those findings and being able to synthesize some meaningful recommendations as a result.
[00:14:07] Speaker B: I'm smiling inside, mostly, but here I am smiling on the outside, because that basically mirrors exactly the storyboard framework that I've shared with my clients. That is, I looked at thousands of presentations and communications inside companies that I work with, and I also looked at what I knew about presenting, even Monroe's Motivated Sequence, which is something out of Purdue in the 1930s; most political speeches are like this. And I was just looking at this sort of common thread, and it really kind of is: what's your goal?
You know, what's in scope? Like, what are you going to talk about?
What is your hypothesis? What do you need to figure out? What are your recommendations on best practices, and what are next steps? If you boil that down even more, it's sort of like: here's the problem, here's the solution or the proposed solution, what should we do? And I feel like that is an elegant way of always keeping something moving and having sort of what I call horizontal motion. Because most of my clients in biotech and life sciences have deep expertise, including you, and you have both: they have sort of horizontal expertise and vertical expertise. But a lot of people start their career, Anoushka, having what I call vertical expertise, like going deep on the models, looking at the science, looking at cells.
And they don't have the opportunity to then take that and translate it across functions, across teams, across disciplines and industries. That requires a different skill set, but it still adheres to that structure that you learned there and then brought to Rice for your PhD. So I'm always trying to find connections between the two, and that one seems like a very clear one.
[00:15:55] Speaker A: Yeah, 100%. I think it's applicable in so many different situations. And whether it's corporate landscapes or academic landscapes, it really doesn't matter.
[00:16:05] Speaker B: So I would love to zoom out to a concept that is flooding the world right now, which is artificial intelligence. Obviously you're someone who's been working with artificial intelligence for a long time. I'm curious, if we sort of look at this deluge, this tidal wave, this absolute paradigm shift that's happening: we've seen it go from chat and simple interfaces to smarter and then more agentic, and now we're looking at generalized intelligence and beyond that. And of course there's an intersection with robotics. But maybe you could just give us, from your lens, where we are in the story of AI in terms of humanity. Because it's been a short time, relatively, but it also feels like there's been so much change that I think people are starting to feel weightless with all of this whiplash that's happening, sometimes even daily. So maybe just orient us a little bit. And ultimately, where do you think this is going to keep going, good, bad, or indifferent?
[00:17:15] Speaker A: Yeah, it's a good question. So when I was talking about BERT, BERT is an LLM, a large language model, and that came out in 2018.
And we are advancing AI capabilities faster than anyone realized we would be able to, and frankly faster than I think we are understanding their implications.
So we went from BERT to GPT to reasoning models in just a few years, and our frameworks for evaluating their impacts, at least on human behavior, are still catching up. So if you're thinking about where we are in the context of OpenAI's five-stage model, and I can really quickly walk us through what that looks like, I would say we're still in these young adult years, because we're pretty settled, but we're still very excited and anxious about what the future holds, and we are innovating at an extremely rapid pace. In OpenAI's five-stage model, the first stage is LLMs. That is really simply where you're predicting the next word of any given sequence. The second stage is these problem solvers. So that's where the Claudes and Anthropics come in.
That's when you know that your AI has the ability to reason and solve problems. And then I would say where we are today is agents. So this is the stage that my company is at, which is where we're innovating. Agents tend to be more goal directed, they're more context aware, autonomous.
So an example of where my team is using an agent: we developed an agent that can look at your data and recommend a specific threshold of volume, or something similar, that a customer needs to meet before earning a rebate, for example. Then after this stage, we have a fourth stage called the innovator stage. That is where AI can improve itself and become smarter and more intelligent over time without any human intervention. And then we have artificial general intelligence. That's when AI has the intelligence of a human being. And of course, we're not quite there yet.
I would just say that there's no need to be fearful. Technology is going to take its natural course, and so we should be embracing this as much as we possibly can, within reason.
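The rebate-threshold agent mentioned above can be caricatured in a few lines. Everything here, the function name, the median-plus-stretch heuristic, and the 10% factor, is invented for illustration; the production agent is far more sophisticated:

```python
import statistics

def recommend_rebate_threshold(volumes: list[float], stretch: float = 1.10) -> float:
    """Suggest a rebate-earning volume threshold from historical order volumes.

    Toy heuristic: take the customer's median historical volume and add a
    stretch factor, so the target is achievable but above business-as-usual.
    """
    if not volumes:
        raise ValueError("need at least one historical volume")
    return round(statistics.median(volumes) * stretch, 2)

history = [100, 120, 95, 130, 110]  # units ordered per period
assert recommend_rebate_threshold(history) == 121.0
```

The point is the shape of the task, turning a customer's own data into a concrete, goal-directed recommendation, which is what distinguishes an agent from a plain chat interface.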
[00:19:26] Speaker B: Do you want your team to have a seat at the table?
Would you like your team to speak with clarity, confidence, and influence? Reach out to us at DNA.com for more information.
So it's funny, because a very close confidant of mine this morning had sort of the opposite point of view: that it's just getting so powerful that our irrelevance as humans is coming, that any job out there in the knowledge economy is going to be replaced. Because look at what's happened to computer science degrees; look at what's happened to coding. I think Zuckerberg said something like two months ago that within 18 months, 90% of code was going to be generated by AI. And so you just look at this and it's like icebergs cleaving off; you're seeing these industries fall almost in real time. So I would love for you to explore that, because it's not going to apply to maybe your plumber or your carpenter, but you could see how people in certain professions might feel the heat much more quickly, at a speed that I don't think anyone can really understand. So how do we distribute that future model toward all industries? I mean, are some going to be safe and others going to be irrelevant? How do you see that?
[00:20:46] Speaker A: So I think there are a couple ways of thinking about this.
So I think that, number one, we cannot just assume that AI is going to magically do our jobs or replace us, because that is certainly not the case. You know, if you have a clogged sink, you still have a clogged sink; you still need to call a plumber. I also think that we need to think about when and how and in what context we want to be using AI; there's a time and place for everything. So, for example, in my family, I am like the AI person in the house, but often I'm the one who pulls out my phone, tempted to use ChatGPT, and I'm like, you know what? No, I'm not going to pull out ChatGPT. Let's use our brains for a second. An example of this: my family and I were brainstorming games to play at our last family gathering, and people were so quick to pull out their phones and their ChatGPT apps to figure out what to play. And I said, hold on a second, what are the games that we used to love playing as kids? What are the games that we want to pass on to our next generation, that hold some kind of sentimental value? And of course, there's nothing wrong with using ChatGPT in this situation, and we did end up using it for some game suggestions. But I think there's beauty in figuring out the time and place for when you want to use those things.
So I think it's really understanding that there is a balance in life, and that we can't run to it for everything. For someone like myself in software, it's really easy to do that, but you still need to be able to build a house. And with the rate at which AI is innovating, maybe there's going to be an app that can build your house one day. But at least in the near term, we need to be very, very context-aware of what its capabilities are and are not.
[00:22:29] Speaker B: I agree, but I feel like in that equation, you have to be very intentional about what you want. And so for you to consciously think, okay, I'm not going to use this, I'm going to use my brain or I'm going to brainstorm. That's a conscious choice that you're investing into your family, your creativity, your brain. But I, I feel like, you know, most businesses incentivize by bottom line, they're going to look for cost cutting, inefficiencies.
And I look at what happened with Kodak, right? They didn't adapt. Now they're just cute little toys; at every third wedding you go to, someone has a tiny little camera that spits out prints. There's a lot of technology that's completely evaporated and is now kept alive only by the humans in the loop who want it. You can go buy a record player today, but you have to seek it out a little bit, and it's fun if you want that in your house. But you and I know that 99.999% of listening is going to be through the Internet, Spotify, whatever, because the economics are just too powerful. And so I value what you're saying, and I value what humans can do. I guess my philosophical question is, where are humans in the loop if we're going to have these massive leaps in innovation and AGI? Where do we count? Where do we matter still? And of course, I know there's a flood of it, but I'm just curious, from your perspective: you could look at what anyone is doing and ask how much of that can be outsourced to large language models or bots or agents.
[00:24:07] Speaker A: So I think, if I'm understanding your question, it's really thinking about how to bifurcate what AI should be used for versus not.
Right? So in my head, you want to let AI handle large data processing, pattern recognition, but then I want to make sure that I'm showing my kids that, like, don't just take out your phone for everything.
Leave creativity, empathy, and making meaning out of these experiences to us. I want to protect my kids and make sure that they are maintaining their curiosity and asking questions that may not always have algorithmic answers.
Right. So I think it's really just making sure that we are thinking very consciously about what kind of world we want to build, and how to bifurcate the different uses for where AI is relevant versus not.
[00:24:57] Speaker B: That's fair.
So talk to me about PROS, where you currently are. You've gone from brain scanning, to building and predicting health outcomes, to now building pricing tools, or using AI to optimize pricing. Could you walk me through, at a high level, what you're doing at PROS, and then how you've leveraged your experience and your training to do what you're doing today?
[00:25:26] Speaker A: Yeah. So I work for PROS. We're a B2B enterprise software company, and we have two main flagship products: our price management solution and our price optimization solution. Basically, we have large businesses that come to us for pricing software, and they manage all their pricing in our software; some also use our optimization tool, which helps them determine what the optimal price point is. And what kind of led me here was that so much of my PhD ended up being in this computational space. So when I applied for my position at PROS, they were specifically looking for someone who had experience with neural networks, who could speak the science, if you will.
So I started my journey at PROS as a researcher, then moved into product management.
And at PROS, I've had the opportunity to productize all different types of AI, so predictive, agentic, generative, for this use case, in the context of pricing and finance. An example I can give you in the context of price optimization: let's say you sell LED string lights in this B2B setting. You have color-changing bulbs, a solar-powered battery, a remote-control feature; it's a retail outlet in an urban geography in California. The price point might look a little bit different if all those attributes are different. And things like seasonality, geography, all these things affect the way the price should be optimally delivered to the customer.
So our science is looking across these massive data sets of historical data to determine what the optimal price point should be.
So it's been a really cool experience being able to take these technologies and productize it into something that is applicable for real world applications like pricing in these B2B settings.
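The LED string lights example can be sketched as a toy segment-based price lookup: group historical deals by attributes and use the average price of won deals in each segment as a starting point. The segmentation, data, and averaging rule here are all invented; real price optimization science is far richer:

```python
from collections import defaultdict

def segment_price_guidance(deals):
    """deals: list of (channel, geography, price, won) tuples.

    Returns average winning price per (channel, geography) segment,
    a crude stand-in for an optimized price point.
    """
    sums = defaultdict(lambda: [0.0, 0])  # segment -> [price total, win count]
    for channel, geo, price, won in deals:
        if won:  # only won deals tell us what the market accepted
            bucket = sums[(channel, geo)]
            bucket[0] += price
            bucket[1] += 1
    return {seg: total / n for seg, (total, n) in sums.items()}

deals = [
    ("retail", "CA-urban", 24.0, True),
    ("retail", "CA-urban", 26.0, True),
    ("retail", "TX-rural", 19.0, True),
    ("retail", "CA-urban", 30.0, False),  # lost deal, ignored here
]
guidance = segment_price_guidance(deals)
assert guidance[("retail", "CA-urban")] == 25.0
```

Attributes like seasonality would simply become additional keys in the segment, which is why richer historical data supports finer-grained price guidance.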
[00:27:17] Speaker B: So it would seem to me that you can leverage AI to test like a billion hypotheses and come back. And this is one of the things that I enjoy most: it expands your ability to run with an idea and then ask the tool to iterate it a hundred times and run those tests. So is that a component of it? Because I would imagine that pricing is very complex. I mean, it's simple in some ways, right? What do you charge for that? But what is the value? What are people going to value? And I use this with my clients, right? I talk about value as a concept, and I say, okay, everyone here, imagine that I have, where you are right now, a bottle of water to sell. How much would you give? Type it in the chat or whatever. And people say a dollar, nothing, right? It's almost always nothing. And the reason is because everyone's hydrated. And I say, okay, now imagine that we're on a trip and we were in a terrible accident, but everyone survived, and we're in a desert, and we've all gone three days without water.
And you're all minutes from expiring, and I show up with a bottle of water, or somebody does. You would value that exponentially more, because the context has changed. And so I use this in communication: you have to frame things with context. And a lot of pricing is so dependent on context, but the context is infinitely shifting. How do you control for that when you're optimizing pricing?
[00:28:46] Speaker A: Yeah, that's really challenging, because context is something that's a little challenging to quantify and enter as data, as I mentioned, unless there is some way to codify it.
You're really talking about value in that case, right? Because if you're absolutely desperate for water, you're going to be willing to spend a lot more than a couple bucks on that water bottle. So I think those intangible things, things like tradition, exclusivity, luxury experiences, are really difficult to capture in an algorithm to get some type of optimal price point.
So what we do with our customers is help them develop methods to be able to quantify those things, factorial designs, that kind of thing. But ultimately the intangibles are really challenging to work with.
[00:29:36] Speaker B: But I do think that they drive a lot of that, because of the context around it.
Like, I remember, and I've shared this before, in college I took a marketing course, and one of the big epiphanies was the lack of price elasticity with high-value products like Jaguar. The simple example is, Jaguar at the time was a high-end brand, and when they discounted it, they sold less. Which doesn't make sense with traditional economics, because you'd think, well, if you discount something of value, everyone's going to want to buy it. But that's not the case; people perceive that this is a prestigious item. And so I'm curious how that world will evolve over time, because I just know that part of what drives the pricing is going to be the context, how people feel about it, the sequence, and even a simple concept like, I'm sure you know, the anchoring effect: when you expose people to higher numbers, even ones that have nothing to do with the pricing, people tend to bid higher on those items than if they're exposed to low numbers, which again have nothing to do with the pricing. It could just be the temperature. Somebody says, oh, do you know that right now in the Middle East, in this country, it's 120 degrees? Anyway, what would you pay for this bagel? Versus, oh, did you see what it was in Antarctica? Yeah, it's negative 30. What would you pay for this bagel? In the first scenario, they'll bid higher because they've been exposed to a higher number. So anyway, just something that I'm connecting the dots with.
[00:31:10] Speaker A: Yeah, 100%. And one thing that's really cool about elasticity is that our models are able to take that type of information into account. But what is also really neat is that if we're able to consume that from our customers and see how different industries' elasticities impact their win-rate probability curves, I think there's still so much room for discovery in this space and so much more that science can do.
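The win-rate probability curves mentioned above can be illustrated with a minimal model: assume the probability of winning a deal falls along a logistic curve as price rises, then choose the price that maximizes expected margin. The curve shape, the reference price, and the elasticity parameter here are invented for illustration:

```python
import math

def win_probability(price: float, ref_price: float = 100.0,
                    elasticity: float = 0.1) -> float:
    """Logistic win-rate curve: ~50% at the reference price, lower above it."""
    return 1.0 / (1.0 + math.exp(elasticity * (price - ref_price)))

def optimal_price(cost: float, candidates: list[float]) -> float:
    """Choose the candidate price that maximizes expected margin:
    win probability times unit margin."""
    return max(candidates, key=lambda p: win_probability(p) * (p - cost))

# With cost 80, price 100 beats 90 (thin margin) and 110+ (low win rate).
best = optimal_price(cost=80.0, candidates=[90, 100, 110, 120, 130])
assert best == 100
```

A steeper elasticity flattens the profitable band, which is one way a more price-sensitive industry would reshape the curve, exactly the kind of cross-industry comparison described above.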
[00:31:38] Speaker B: Yeah, no, it's exciting. I'm excited for you. So I always like to get some practical tips, and I'll be selfish here, but I always think about creating better prompts, getting better outputs from the AI tools, whatever you use. And I know you have some thoughts on that. I'd be curious, because we've heard a lot about prompting and prompt engineering, and then "steal these prompts" and all this and that. How would you help somebody who's looking to get better outputs from their various tools? How would you coach them to think about that process to get a better ultimate output?
[00:32:15] Speaker A: There are two things I would recommend. The first is from my researcher hat, my researcher lens, which is just validating the output like a researcher. So approach it with healthy skepticism. Ask for reasoning and sources, and cross-check important information.
We've seen lots of different situations where professionals have asked for citations and the AI has just completely hallucinated and made up something and that can be really dangerous.
So I would just, you know, approach output with a good amount of healthy skepticism.
[00:32:48] Speaker B: Can I ask. I'm gonna interrupt you mid-phrase, because I think what you're saying is so important, and I've thought a bit about this, but I would love your expertise, Anoushka. Do you think it's useful to use a tool to fine-tune a sort of credibility checker that you would then copy and paste into every prompt, so when you finally get your output, it says: before you give me the final output, this is always draft form.
Go through this checklist, or some version of that. And if so, it sounds like we're on the cusp of it. What would that checklist be?
[00:33:27] Speaker A: Yeah, I think it depends on the context. If you're specifically asking it to check calculations, or check citations, or check for certain phrases, it really just depends on what you're looking at. What I would say is: check the checklist. Just this morning, I was trying to do some math in a spreadsheet, and it just pulled the completely wrong cells for the math that I wanted it to do.
[00:33:55] Speaker B: Yep.
[00:33:56] Speaker A: Then I was like, oh boy. Okay, I just need to go back to the drawing board and check it myself. And that's a good reminder for me that, hey, even if it's just a cell, and you're specifically saying, hey, go look at G17, it may not necessarily do that and listen to you.
[00:34:11] Speaker B: Right? Yeah.
I've seen this advice where you ask: how confident are you in every data point? Explain your rationale, cite your sources, provide validation. And you'll get this sort of, here's where I'm confident, here's where I'm not. It's like, okay, now take this and rerun it, and only give me the items with confidence over 90%, and ensure that all URLs are valid. And then you have to check it. So that's a really useful aspect of that. So I interrupted. Please go on. Other ways to get good output. Yeah.
[00:34:44] Speaker A: Yeah. Another one I would say is to think about your engagement with the AI the way you would think about speaking to another human. So don't just say, hey, give me my entire marketing strategy. I would break it up into smaller tasks, and that will help you get more thorough and cohesive answers. If these complex requests are broken down into chunks, then instead of saying, what's my marketing strategy? First ask about the audience, then ask about the messaging, then ask about the channels, and just help your AI structure the thinking.
And that is what's going to allow you to get better, more thorough responses. And it's also going to help you digest the information that it's giving you as well.
[00:35:29] Speaker B: Yeah. One of the things I've done is I say, you know, act as my co-product-manager, or co-founder, or co-whatever in this area.
And first, I want you to interview me in exhaustive detail on everything I know about, you know, A, B, and C, and do it in sequential questions, one at a time. And we'll go through that a few times. Then any gaps, flag those, and let's fill them in with validated research. So it's using it as a tool to extract what I know about something, and also expose what I don't know, and then find a way to fill in those gaps.
[00:36:10] Speaker A: Yeah.
[00:36:10] Speaker B: You know, I can't help but think about the big existential question for me, which is, to go back to the very human element: where is this all going? What's the future? And I feel like, more so in the last few years, there's been this feeling of, I call it weightlessness, but it just feels like a lot of narratives have shifted wildly, and everything feels a little unhinged when you look at creating a future for your kids that you feel would be exciting.
You know, I've heard people say that this century is going to be full of higher highs and lower lows than the previous century, and I think that's a good framework to think about it. But when you think about what worries you or what excites you about the future, especially with all this automation and AGI, what's your mindset generally? And then how do you think about your concerns? How do you test those? And where are you optimistic?
[00:37:07] Speaker A: So, yeah, there are, I guess, two things I would say here. One of the things I know we've talked about is really ensuring that we're building AI that serves human flourishing.
So with my neuroscience background, I sometimes worry that we are moving so fast in AI development. We need to take the time to understand how these systems interact with society and with fundamental psychological needs, to make sure we're optimizing for the right outcomes. I have two small kids. One is three, one's 10 weeks old. I'm constantly thinking about the condition of the world that I'm going to be leaving behind for them.
[00:37:45] Speaker B: Right.
[00:37:45] Speaker A: The other thing is that I am really intertwined with this idea that AI and technology, very broadly, can actually help people be the best versions of themselves. And it sounds really abstract and overly optimistic, which, you know, I am the eternal optimist. But from the beginning of my career, I was seeing how cognitive patterns literally change people's biology, right? Their stress responses, their health outcomes. Language wasn't just communication. It was this window into how people were processing the world. We were seeing this with brain activity as well, and how these networks of brain activity were also related to healthier outcomes. There are so many different applications for this, and this fascination has followed me everywhere. In academia, I was obsessed with this question of emotion and well-being. And now in industry, I'm chasing a very similar question, just at scale, which is: how do we build AI systems that aren't just solving business problems, but actually help humans think better, decide better, live better? Right. I strongly believe that this technology is out there to benefit all of us. It would be very, very easy for me to just focus on things like efficiency metrics and profit optimization, but I need to also think about human flourishing, not just human productivity.
And so if you constantly have this in the back of your head, of how every AI system needs to be shaping how people think and interact with information, I think that's a pretty healthy way of thinking about the future. So I'm cautiously optimistic, if you will.
I'm very grounded in the capabilities that we have as human beings that technology will never be able to replace, but also recognizing that technology is a tool for us to be the best versions of ourselves.
[00:39:36] Speaker B: The way we talk and think about things sort of indicates how our future unfolds. And you saw that, literally. You saw that in the lab.
And, you know, on a very personal note, and it's very sad to me, you know, my dad is 85, but he's in the late stages of Lewy body dementia.
And, you know, I know that part of it is driven by the way he's used language to create his world. There's a beautiful quote by the 13th-century poet Rumi: speak a new language so that the world may be a new world. I love this quote because it's about using communication and language to create your future. And you saw that empirically, you know. And what I'm sharing with you here, I'm going to read off this page, so forgive the eye shift. I posted this a while back, but it says: expect, blank; believe in belief to drive performance. So the research indicates that expectation can alter biochemistry.
And placebo studies show that belief alone can trigger analgesia, immune shifts, and measurable behavioral change. Traditionally, this was linked to dopamine-driven reward circuits, but a 2024 PLoS Biology replication shook that assumption, showing placebo pain relief even when dopamine was pharmacologically blocked. Bottom line: expectancy effects are real.
So I feel like this is correlated with what you're saying, and it gives me hope. But you tell me if I got it wrong or misattributed the connection.
[00:41:17] Speaker A: No, no, you got it, you got it. It's really just how we think about the world, and we're manifesting that for ourselves, right? It's all a self-fulfilling prophecy. So if we are doing good, and doing good with the AI that we're building, then there's nothing like it.
[00:41:31] Speaker B: Well, it's a multiplier, right? I mean, it's a lever, it's a multiplier effect. But to that end, I'm finding this optimism in our conversation. I knew it was going to be interesting and fun to speak with you. What I didn't expect was to feel inspired about the future, around how we think and speak about it. And this is top of mind for me, Anoushka, because I really think about what my responsibility is every day: to be the best version of me, to support my family, to help others, to be of service. I mean, this podcast is a mission to serve others, so people can hear this when they need to, to give people that portable message. And I just know that within ourselves and our language and our thinking, we can create the future that we want and believe is true. And you've seen it, you know, in the lab. So it's really exciting.
[00:42:26] Speaker A: And I think, you know, we're both parents of two young kids, and that's been really grounding for me, to constantly be checking myself and to think about, okay, what am I doing with my life? Is this good? Are they going to be proud of me? And that is such a good sanity check, just to make sure that I'm staying true to the values that I was brought up with, and to make sure that I'm constantly being a good role model for them. As long as we are positive and, like I said before, doing good by what we're building, I think that can really make the difference.
[00:43:03] Speaker B: That's a wrap. I want to thank you for being on the show. I just loved speaking with you, and really understanding, you know, the applications from the lab to real life, from brain scans to business intelligence.
Dr. Shahane, thank you for being on the show.
[00:43:21] Speaker A: Thank you for having me. Stuart, always a pleasure.
[00:43:23] Speaker B: Always. Hey, it's Stuart again. Before you leave, if you love this podcast, subscribe. And if you go to dn8.com, you'll find a sign-up for our newsletter, where we give you actionable and practical advice. And be sure to find us on social media. And don't be shy. You can give us a six-star review, but we will settle for five. See you in the next one.