Can AI spot diseases your doctor might miss?
In this episode of Your AI Injection, host Deep Dhillon sits down with RJ Kedziora, co-founder of Estenda Solutions, to explore how AI is transforming medical diagnosis and why human doctors might soon be irresponsible not to use it. RJ shares how his company has spent 20+ years developing AI systems that detect diabetic retinopathy by scanning the back of your eye, often catching what human readers miss. They discuss the Joslin Vision Network's deployment of 100+ cameras across Native American communities, ambient listening tools that free doctors from endless paperwork, and the challenge of creating "smell test repositories" to detect Parkinson's and Alzheimer's years earlier.
And as RJ points out, humans get tired, forget research, and take 6-7 years to diagnose rare diseases like lupus, while AI never sleeps. The two explore whether we're headed toward a future where refusing AI assistance becomes a medical injustice, and what happens when machines prove just as reliable as even the most experienced specialists.
Check out RJ's profile here: https://www.linkedin.com/in/rjkedziora/
and Estenda Solutions here: https://www.estenda.com/
Xyonix Partners
At Xyonix, we empower consultancies to deliver powerful AI solutions without the heavy lifting of building an in-house team, infusing your proposals with high-impact, transformative ideas. Learn more about our Partner Program, the ultimate way to ignite new client excitement and drive lasting growth.
[Automated Transcript]
RJ: At some point, it's going to be irresponsible to not use the AI systems. We as humans are very fallible. There's a reason for second opinions. My mother-in-law, decades ago now, passed away from lupus. It's a rare disease, and on the rare side it takes six to seven years to diagnose because it presents differently.
And so how do you use AI systems that don't forget, that don't get tired, that are more aware of all of the research that's taking place? The biggest use cases in the industry right now are really the lower-risk ones.
Ambient listening is taking off. The simple idea is that it's taking notes, just as you and I would use GoToMeeting or any of these other virtual platforms that can take meeting notes in the background. That's essentially what these tools are doing, and then putting that information in the EMR.
Because the doctors, the healthcare practitioners, did not get into healthcare to click a box, check another box, and find some new medication in a dropdown somewhere. They want to take care of people. Let the AI take the notes and record the information.
Deep: Hello, I'm Deep Dhillon, your host, and today on Your AI Injection we'll explore AI-powered wearables, remote monitoring, and the ever-evolving landscape of digital health with RJ Kedziora, co-founder and COO of Estenda Solutions. RJ holds an MBA from West Chester University and a BS in computer science from Duquesne University. At Estenda, he leads the creation of tools that advance patient outcomes, streamline healthcare workflows, and enable evidence-based digital therapeutics. RJ, thank you so much for coming on the show.
RJ: Absolutely. Looking forward to the conversation.
Deep: Awesome. Cool. So let's dig in. Maybe start by telling us: when a typical customer calls you in, why do they reach out? Walk us through an engagement. What do they usually look like?
RJ: Yeah, absolutely. We are very much, as I think of it, the front end of healthcare.
We are working with startups and the R&D departments of larger organizations. Someone that has an idea but doesn't quite know or understand how to actually make sense of that idea and execute on it. From a startup perspective, it's usually a person who's exited a venture or two, has some money, and wants to do something in healthcare, but doesn't quite understand healthcare, or doesn't quite understand software, or is very scared of what it means to develop software that has to go to the FDA.
That's where we fit in, and very much on the patient care side of things. So as data is coming off of a medical device, how do we use that data? How do we visualize it? Clinical decision support algorithms, or even, as that device is being used in the field, gathering real-world evidence to make sure that it is operating as we intended, and what can we learn from that information?
So very much on the patient health and wellness side of things. I like to think the applications we help develop for others are improving our health and wellness out there. And not just in terms of sickness, but that healthspan. My personal goal is to live to a hundred plus with a good quality of life, not just make it to a hundred, and to continue to be able to do the triathlons and racing that I do and love.
Deep: Are you mostly on the patient side? Are you mostly helping physicians and making them more efficient? Are you on the hospital, operational efficiency side? And where do you see the most action right now?
RJ: Yeah, so projects typically involve an MD and a PhD.
As we're developing solutions, it's one thing for myself or a physician or a PhD to sit there and say, oh yeah, we think this idea works. It's another to make sure it actually does work. So as we develop a new clinical algorithm, say a new blood glucose treatment algorithm for someone with diabetes:
does it actually work? So very much on the health and wellness of individuals. We've never done anything on the billing side of things. You mentioned efficiency. Efficiency ties into that from a patient population perspective: where do I have to spend time? Which segments of the population does it make sense to really engage with? That's a key factor in some of the projects that we're working on.
Deep: And so maybe set the context a little bit. It sounds like maybe the MD or the PhD, or both of them, are entrepreneurs, they're founding a startup, it's pre-seed, and you're helping them maybe build out a team. Maybe they're new to the whole software product development side. Or are they savvy operators who are already aware of all of the nuances, from FDA compliance to HIPAA and all of the usual regulatory environments that you operate in within healthcare?
Why are they reaching out to you in particular, and your team, and how are you engaging, maybe in the early days and then a little bit later on?
RJ: Right. It's a good question, and it makes my days very interesting, because I play that high-level solution architect role, helping envision what the product can be. If it's a startup organization, what we're bringing to the table is that we are ISO 13485 certified, which allows us to do medical software development.
For those that are not familiar with ISO 13485: it's basically a good, well-structured software development process. It's what the FDA here in the US, or regulatory bodies around the world, are looking for: that you have a good, structured process in terms of development that's repeatable.
So we get audited regularly by our customers, which is always interesting. But our typical engagement starts with exploring that idea of what we're going to create. If you want to create another accounting system, it's not us. Accounting is fairly well established.
It's looking at your blood glucose information, your blood pressure information, your physical activity data, and trying to develop new insights from that information. You might have a different approach to helping an individual with diabetes or with congestive heart failure. That's the typical engagement.
We've been lucky that we've been so much a part of that solution creation. Customers over the years have put our names on patents. So we very much assist at the early stages in the ideation: what does this have to do, and how is it going to do it, from that care management perspective, from the health perspective?
And then we have to translate that into the technology. Okay, so here's what we want to create, what we want to do, what are the business rules, the clinical rules, and then how are we actually going to architect and develop this system, whether it's mobile technology or a cloud-based system?
And these days, how does AI feed into it? That's probably the first question, actually: oh, we want to do something with AI. And I say, hold on, it's really about the data first and foremost. And we start exploring the data that they have access to, or that they want to make use of.
Deep: It sounds like you're working a lot on the clinical side, and you're also starting really early in the cycle, so it's kind of a long journey to get from that early stage to, honestly, a lot of times even a fundable startup, but certainly to product deployment.
So maybe walk us through your typical process. You engage really early, get pulled in as a consultant on one of the NIH grants or something these folks are working on, maybe in academia or in a research hospital. And then that kind of buys time and a little bit of budget
to de-risk things a little bit, maybe get further along in the FDA process so that you can raise private capital. Is that a standard journey, or is it something different?
RJ: Yes, our engagements are not three-month engagements. They're typically very long-term engagements.
A number of our customers we've been working with for 15, 20 years now, on continual projects and improvements, new ideas, and things like that. We're fortunate. We actually just received an NIH SBIR Phase II grant, working with the Monell Chemical Senses Center here in Philadelphia, around the idea of creating a smell test repository.
So the loss of your sense of smell is a powerful indicator of potential issues like Parkinson's or Alzheimer's. But it didn't get a lot of attention until COVID happened and we all started losing our sense of smell. Now everybody's like, oh, wait a minute, what does this mean? How do we provide value from this?
What can it be a potential indicator of? And there's just not that standardized repository out there. So what was fascinating about being in this early stage of it: we helped write the grant that was submitted to the NIH, and their initial reaction was, well, this is interesting enough.
It's like, okay, yes, there's a need for this repository dedicated to olfactory smell test research, but how do we make it more interesting? And what we came up with as a group was this idea of conversational analytics. It's where you're using a gen AI approach to talking to the data,
to be able to understand and explore it. So one goal is getting new people into the field to understand what's going on. It's like, okay, you don't have to understand SQL or R or databases. We have a repository here; you can just start asking your questions and start exploring that data. And then as researchers get more experienced and want to look at this data across studies, they can start pulling out and extracting pieces of information.
To talk about the journey: it was an 18-month journey from introduction to, okay, now the project has been funded. And we just passed the three-month mark. We're three months into the opportunity, and we've been developing the platform to collect this data and start applying the conversational analytics to it.
So yeah, it's very long term engagements.
Deep: I don't know if you want to double-click on this project or another project, but I feel like we should double-click on a project and dig in a little bit. You kind of opened this one up, so I'll start there. What exactly is in this repo? Are these, like, experimental results?
What exactly is this data that you're conversationally interacting with?
RJ: It's very research oriented. There are standard smell tests on the market. There's one called the Sentinel test, there's the NIH Toolbox, there's Sniffin' Sticks; there are a number of standardized smell tests out there. And then you're going to do research around the use of these and try to find those correlations, and eventually causation: what does the loss of smell mean?
How do you create a better smell test? And so when you look at the data that's going to be uploaded into the repository, it's going to be that study research data. So, okay, here's a research study, a paper that we're considering. We have a theory that we're going to test: that the loss of smell is an indicator of future incidence of Parkinson's, or potentially concussion.
You're going to execute the smell test, actually administer it to the individual, which is then going to generate data. Can you smell the scent of orange, of motor oil, of grass? The industry has standardized across a number of these different smells.
So we're going to gather that smell information into the database. Can you detect these? Is that...
Deep: ...combined with survey data? So you've got a population, you're going to commit to giving them a smell test at regular intervals, and then you're going to ask them if they're sick, if they have the flu, all kinds of maybe health-related questions.
RJ: Yes, exactly. How old are you?
Deep: And you'll have a full longitudinal view of that population over time, but linked back to smell. Something like that?
RJ: Yes. And then we're able to take this data, and the NIH has created what are called common data elements, CDEs for short. It's a method of standardizing these questions, particularly those survey questions, across various studies.
So as we're gathering all this information, this group is asking about age, this group is asking about mental health conditions, this group is asking about concussion history, and there are standardized methods of asking those questions and gathering that data such that you can then start looking at it across studies,
to draw bigger correlations, start asking new questions, and then generate new research grants.
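To make the common-data-element idea concrete, here is a minimal sketch of what harmonizing study-specific survey fields onto shared CDE names might look like. The field names, CDE labels, and records below are hypothetical illustrations, not entries from the actual NIH CDE catalog or the repository RJ describes.

```python
# Minimal sketch: harmonize study-specific survey fields onto shared
# common data elements (CDEs) so responses can be pooled across studies.
# All field names, CDE names, and records here are hypothetical.

from typing import Dict, List

# Each study maps its own question/field names to a shared CDE name.
CDE_MAPPINGS: Dict[str, Dict[str, str]] = {
    "study_a": {"age_years": "cde_age", "hx_concussion": "cde_concussion_history"},
    "study_b": {"participant_age": "cde_age", "prior_head_injury": "cde_concussion_history"},
}

def harmonize(study_id: str, record: Dict[str, object]) -> Dict[str, object]:
    """Rename one study record's fields to their shared CDE names."""
    mapping = CDE_MAPPINGS[study_id]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def pool(records_by_study: Dict[str, List[Dict[str, object]]]) -> List[Dict[str, object]]:
    """Pool harmonized records from multiple studies into one table."""
    pooled = []
    for study_id, records in records_by_study.items():
        for record in records:
            row = harmonize(study_id, record)
            row["source_study"] = study_id
            pooled.append(row)
    return pooled

if __name__ == "__main__":
    data = {
        "study_a": [{"age_years": 61, "hx_concussion": "no"}],
        "study_b": [{"participant_age": 58, "prior_head_injury": "yes"}],
    }
    for row in pool(data):
        print(row)  # same CDE column names regardless of the source study
```

Once every study's answers land under the same CDE columns, cross-study questions become ordinary filters and joins over one pooled table.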
Deep: So then the AI-ish angle here is: you take the results of those longitudinal patient questionnaires plus the olfactory test results, and you pull that all into some kind of maybe semi-structured data repo, with a combination of generative stuff on the texty stuff and maybe generative stuff into SQL, to be able to go in and out of maybe a business analytics system, something like that, to get you plots and figures and have a nice, clean way of interacting with it.
RJ: Absolutely. And I'm not the first person to use the term, but we're calling it conversational analytics. And the key for us is that you don't have to know SQL or R or SAS or some programming language; you can just start interacting with this data.
And then, as you said, you generate a nice graph, a chart, share that with colleagues: what do you think of where I'm going with this? And then you're using that generative AI capability. I love asking gen AI, as I'm using it, what did I forget? What's the next step?
I probably almost never write a prompt just natively myself. I'm always asking: help me write the prompt to do X, Y, and Z. So how do we incorporate that same idea here? As someone's asking a question, we can then, behind the scenes, say, okay, what's the next question that you should ask,
and prompt the user, the human, to think about different directions.
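As a rough sketch of how a conversational-analytics layer like the one RJ describes could be wired together: a language model turns a researcher's plain-English question into SQL against the repository schema, the query runs, and a second prompt suggests a follow-up question. The schema, prompts, and the injected `complete` callable below are assumptions for illustration, not the actual platform's code or any specific vendor's API.

```python
# Sketch of a "conversational analytics" loop: natural-language question ->
# SQL over the repository -> results -> suggested follow-up question.
# The schema, prompts, and the LLM call are illustrative assumptions.

import sqlite3
from typing import Callable, List, Tuple

SCHEMA = """
-- hypothetical repository tables
CREATE TABLE participants (participant_id TEXT PRIMARY KEY, cde_age INTEGER);
CREATE TABLE smell_tests (participant_id TEXT, test_name TEXT, score REAL, test_date TEXT);
"""

def ask(question: str,
        conn: sqlite3.Connection,
        complete: Callable[[str], str]) -> Tuple[List[tuple], str]:
    """Translate a question to SQL with an LLM, run it, and suggest a follow-up.

    `complete` is any text-completion callable (a wrapper around whichever
    LLM provider you use); it is injected here rather than assumed.
    """
    sql = complete(
        "Given this SQLite schema:\n" + SCHEMA +
        "\nWrite one read-only SQL query answering: " + question +
        "\nReturn only SQL."
    )
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("refusing to run non-SELECT SQL: " + sql)
    rows = conn.execute(sql).fetchall()

    follow_up = complete(
        "A researcher asked: " + question +
        "\nThe query returned " + str(len(rows)) + " rows." +
        "\nSuggest one concise follow-up question they should consider next."
    )
    return rows, follow_up

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.execute("INSERT INTO participants VALUES ('p1', 64)")
    conn.execute("INSERT INTO smell_tests VALUES ('p1', 'sentinel', 0.42, '2024-05-01')")

    # Stub LLM so the sketch runs end to end without external services.
    def fake_llm(prompt: str) -> str:
        if "Return only SQL" in prompt:
            return "SELECT test_name, AVG(score) FROM smell_tests GROUP BY test_name"
        return "How does the average score differ by age group?"

    rows, follow_up = ask("What is the average score per smell test?", conn, fake_llm)
    print(rows, "| suggested next question:", follow_up)
```

In practice you would add guardrails well beyond the single SELECT check shown here (read-only credentials, row limits, schema allow-lists), but the core loop of question to SQL to results to suggested next question really is this small.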
Deep: Because you mentioned SBIR, there's a commercialization component. So is the motivation for the client to then charge researchers for usage of the platform or something like that?
Or is it that they get some kind of special access to the data, or early access to the data, or something?
RJ: Yes. Part of Phase II is that it is all about commercialization and how you create a viable business model with this. So there is a commercial component to it. There's free access, to be able to get familiar with the data and understand it.
And then there's a paid model, where you as an individual researcher gain a deeper level of access. And then we have an enterprise subscription. So if you're a large organization, you could import your own data and start experimenting with it instead of building your own repository.
Let's start experimenting with this and seeing those early results. We're also going to build a community around all of this, and an educational component. That's part of Monell's mission, the Chemical Senses Center here in Philadelphia: education around smell testing.
How do you do this properly, and what are the options? So yeah, there'll be an educational and community aspect to it as well.
Deep: What are you finding with respect to smell, the loss of it, and changes in it? What's the high-level, state-of-the-art knowledge that we have today about this?
What is it linked to? Why did we lose it during COVID? What does it mean? Why does anyone care about this? What is the clinical assessment of these tests, and what are they indicative of? If you can't smell motor oil, what does that mean?
RJ: Yeah.
I am not the PhD in smell tests and olfactory research. That's some of the colleagues that we're working with; they are the PhDs who can do a deep dive on this. At a high level, these are potential indicators, because it's very early on in the research; it just hasn't gotten a lot of attention.
The idea of smell testing has been around for decades, and the smell test isn't new, but it's just not widely accepted yet. It's a potential indicator of things related to the brain: Parkinson's, Alzheimer's, concussions, and insight into what might be going on there. Why are you losing that sense of smell?
And then if you have lost it, there are ways of retraining and regaining that sense of smell. What I found fascinating, and this happens as we get into our engagements with various customers: I'm not an expert, and the people on the teams are not experts in that specific field at the start, because we are very much on the experimental side of things.
So we do provide some basic training to get people up to speed, but then it's working with that MD and the PhD to really figure out what this solution needs to do and how we're going to make it do it. So I've personally been learning a lot about smell and this research. And you think, okay, I lost my sense of smell.
Honestly, at first I was like, oh, no big deal. But as you start digging into it and thinking about it, it's impacting your life. So if you can't smell and you go out to a restaurant with your friends and they're all talking about how great this restaurant smells, how great the food we're about to eat smells,
you're not experiencing that. So you're less a part of that social scene. But more importantly and urgently, if you're at home and there's a fire and your smoke alarm's not working, you're not going to smell that smoke. Or you have a little baby who soils themselves; you're not going to know that as soon as you might otherwise.
And so now they might get diaper rash, and now they're going to be crying a lot more. So there are a lot of indicators there that, at first glance, I never even thought about. If you lose your sense of smell, it does have an impact on your life.
Deep: I mean, all the information ultimately that goes into our mind is coming in through our senses, sight, hearing, smell, taste, touch.
But, you know, we know an awful lot about sight and hearing, and maybe quite a bit about touch, but we know next to nothing about taste and smell. Taste, maybe, from the wine community or the food companies that are always making stuff, but smell is definitely a laggard.
And you can imagine how much we care about loss of vision, not only because of the impact on quality of life, but because of what it says about something going on in your neurological state. So it makes sense that it's a gap. It was sort of curious during COVID how that was the first time that particular thing became such a huge indicator.
So, I don't know, physiologically, do you know the mechanism of smell? Is it related to the nervous system, any of that?
RJ: That's where you have to go talk to the PhDs.
Deep: I'm curious, because it makes sense that the NIH would be interested in us getting a lot more knowledge there,
because it's a potential entire new line of inquiry that could lead to quite a bit. So I think that gives me a sense. Are these academic partners, or is this a private company that you're working with?
RJ: In this case, there's an academic partner: Monell is an academic institution. And then there is a company called HRSA Health, which is the commercial partner.
They actually licensed a smell test called the Sentinel from Monell, and that's what's driving their company. So it's the three of us, as an entity, that are driving the overall project.
Deep: And do you guys usually take an equity stake in the startups, or are you more contract-for-hire?
RJ: Yeah. We've always done contract for hire with a client, and they own everything that is developed.
Deep: Yeah. Okay. Maybe what's another example of a project that you can talk about? You don't have to name names of companies, I understand that.
RJ: We've done a lot of work in diabetes. One of the projects that we've been doing for 20 years is called the Joslin Vision Network, and it's a diabetic retinopathy telesurveillance program. So if you have diabetes, you can get diabetic retinopathy, which is a leading cause of preventable blindness.
If you have diabetes, you don't feel it every day, but it's affecting the microvascular small blood vessels in your fingers, your hands, your eyes. And so if we find the incidence of disease, then we can make a difference there. We've been running this project for 20-plus years now within the Indian Health Service here in the US, which is responsible for the Native American population, with probably about a hundred cameras deployed across the US.
And you're taking a digital photo of the back of your eye, the retina of your eye, and as you look at that image, you can find incidences of disease. When it started out, it was all human-read, but now we're training AI systems through machine learning to be able to do that and accelerate the reading of these images.
What's interesting and fascinating about the machine learning aspect of it: some of the earliest FDA-approved AI algorithms for doing this were approved back around 2017, in the area of diabetic retinopathy. But the challenge of using those AI systems is that as the technology changes and improves, the old AI systems don't work anymore with the newer images, with their greater pixel density and wider field of view.
That's one of the challenges we're facing now. The older technology captured a very narrow area of the back of your eye. Now you have ultra-widefield imaging, which is just so much more information. The AI algorithms that were originally trained don't work with these newer images.
Deep: Pre-deep-learning algorithms?
RJ: Yeah.
So we're, we're getting into all that and we gotta make sure that it works. And, and put it out there.
Deep: Tell me more about that. That sounds like an interesting project. So like, what's the actual imagery that gets captured? Is this when you're at the optometrist you know, where's the light shine and what, where's the photo coming from?
RJ: Yeah. So as you are and particularly for this project, um, it's with the Native American population. So as you're going to a community health clinic, um, or hospitals that, that might be available kind of thing, you're gonna have a, a camera from, you know, manufacturers such as Optos, they're, they're one of the, the vendors that, that we work with.
you don't have to have your eye dilated, which is really nice. 'cause as people just find that, uh, annoying, I find it annoying. and it's taking, you know, you look into it and it uses a laser, scans the back of your eye. Takes the back.
Deep: You mean the back of your eyeball?
RJ: Yeah, the back of your eyeball. The scan comes in from the front.
Deep: So this is just the standard setup? Because now some optometrists have pretty high-res images of the eye. Are we talking about the same thing?
RJ: Yeah, that's exactly it. If you go to your optometrist, you might even see an Optos camera there. I have. When I've gone for my annual exam, it's like, oh, look at that, I know what that camera is. They use it to take eye imagery where they're more worried about vision in general; we're looking specifically at diabetic retinopathy.
Deep: So you get the image, you've got the patient. Do you have it connected to their longitudinal history too?
RJ: Yes. So we take that image, where it's at a community health center in rural America, and it goes back to our reading center in Phoenix, where right now people are evaluating those images, and it's connected to the patient's medical record system, their EMR. So you're creating a comprehensive picture of what's going on with that patient.
And you first do what we think of as a go/no-go assessment: is there any incidence of disease here? Because if there isn't, then it's okay, move on, we're done. And we return a report back to the physician, to the community health worker, to the patient: okay, you're good.
Here's the follow-up recommendation: come back in a year, or two years, based on your history. Which is really nice. And then all of that is stored in the electronic medical record system. So when you come back years down the road, we now have this history of even what your retina looked like.
If we do find incidence of disease, then the recommendation goes back: okay, you need to see a specialist, we need to take care of this. And how do we then match you up with the resources to take care of that?
Deep: Walk me through a little bit of your ground truth definition process,
because I think this is interesting. You have this imagery, you've got longitudinal health history, you have a large repo of this, and you could potentially go in and start to build out a training data set around particular diseases. So do you have a clinical team of maybe ophthalmologists and optometrists who are fed something that you think is disease X, Y, or Z, and then they're validating or not validating based on the imagery and maybe some other aspects?
And then you're building up that data set, and then you're able to decouple model construction, refinement, and assessment against that data set. Something like that?
RJ: Yes. And we've been doing this for 20 years, so we have a huge repository of imagery. So you do have to do that, and we've developed the systems to take a particular image to generate our training set.
We would show it to two different trained specialists: do you see any incidence of disease? What level of incidence do you see? What are you looking at to be able to find that in this image? And if those two opinions correlate, if they're similar, then okay, there's our gold standard.
If not, then it goes to a third person, with a higher level of expertise, to adjudicate it and say, okay, this is what it is. So over the years we've built up this gold standard of imagery that, one, we use for training purposes for humans: here's what you need to see and look at.
But now we can use this to train an AI system to be able to see this and evaluate it. And it's been interesting, because there are multiple models out there, and if you go and use Google's in the cloud, it costs X thousands of dollars, and for the Indian Health Service, the cost is just exorbitant,
and for the project budget, it's exorbitant. How do we...
Deep: That imagery is highly specialized, so generalized models are not going to do it.
RJ: No, they're not. You're not taking these images and dropping them into ChatGPT or anything like that.
Deep: Definitely. And getting...
RJ: ...meaningful results.
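The two-reader workflow RJ describes for building the gold standard, where agreement between trained graders becomes the label and disagreements go to a more senior adjudicator, boils down to a small piece of logic. The sketch below assumes a hypothetical 0-4 severity grade and agreement tolerance; it is only an illustration of that agreement rule, not Estenda's actual grading system.

```python
# Sketch of the gold-standard labeling rule described above: two trained
# readers grade an image; if they agree (within a tolerance), their grade
# becomes the gold standard, otherwise a senior adjudicator decides.
# Grade scale, tolerance, and the adjudicator are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Grade:
    reader_id: str
    dr_level: int          # e.g., 0-4 severity scale (assumption)
    gradable: bool = True  # image quality sufficient to grade

def gold_label(a: Grade,
               b: Grade,
               adjudicate: Callable[[Grade, Grade], int],
               tolerance: int = 0) -> Optional[int]:
    """Return the gold-standard level for one image, or None if ungradable."""
    if not (a.gradable and b.gradable):
        return None  # route to image-quality review instead
    if abs(a.dr_level - b.dr_level) <= tolerance:
        # Readers agree: take the (identical or averaged) grade as gold standard.
        return round((a.dr_level + b.dr_level) / 2)
    # Disagreement: a third, more senior reader adjudicates.
    return adjudicate(a, b)

if __name__ == "__main__":
    senior = lambda a, b: max(a.dr_level, b.dr_level)  # stand-in adjudicator
    print(gold_label(Grade("r1", 2), Grade("r2", 2), senior))  # 2 (agreement)
    print(gold_label(Grade("r1", 1), Grade("r2", 3), senior))  # 3 (adjudicated)
```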
Deep: I'm curious, give me a sense of the ground truth set size at this point.
How many diseases do you cover? How many examples of each?
RJ: So we have a very narrow focus on diabetic retinopathy. That's made our life easier, because when you talk about that ground truth, it's like, okay, diabetic retinopathy is what we are focused on. I know there are other vendors out there that are looking at the same type of images for correlation to cardiac issues, to general health.
Deep: Oh, there's all kinds, schizophrenia, all...
RJ: ...sorts of things that you can assess from that image, which is fascinating. One of the things I saw a while ago was a research report where, just looking at the retina, the AI was able to detect gender, but nobody was quite sure how, or what it was keying off of to be able to do that.
Yeah. But it just goes to show, the AI sees stuff you and I don't.
Deep: So in this case, does the IHS make the data set public, or is it anonymized and public for other researchers?
RJ: So we have anonymized that data for use in our controlled research studies, for use with the NIH.
We have partnered with third-party companies for them to use this information, but those are very controlled situations, done with permission; the Indian Health Service is very well aware of when those data sets are shared. But the information is anonymized to be able to do that.
Deep: Yeah, I don't know of any super great eye data sets that are on this kind of scale.
So your disease count would be just diabetic retinopathy, that's what you've focused on for this, and thousands, hundreds of thousands of examples, I guess, of various stages of retinopathy?
RJ: It's in the tens of thousands. Yeah.
Deep: But enough to train up some deep-learning-based models, you know, with some transfer learning.
RJ: Yeah, to train it and then evaluate it. So we have our gold standard training set and an evaluation set: how's it doing?
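For readers wondering what the setup implied here might look like, below is a minimal transfer-learning sketch: start from an ImageNet-pretrained backbone, fine-tune it on a gold-standard training set of labeled retinal images, and score it on a held-out gold-standard evaluation set. The directory layout, class count, and hyperparameters are assumptions, and accuracy alone would not be an adequate clinical metric for a screening system like this.

```python
# Sketch of the transfer-learning setup implied above: a pretrained image
# backbone fine-tuned on a gold-standard training set of retinal images,
# then scored on a separate gold-standard evaluation set.
# Directory layout, class count, and hyperparameters are assumptions.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from torchvision.models import ResNet50_Weights

NUM_CLASSES = 5  # e.g., DR severity levels 0-4 (assumption)

def make_loader(root: str, train: bool) -> DataLoader:
    # Expects root/<class_name>/*.png folders (hypothetical layout).
    tf = transforms.Compose([
        transforms.Resize((512, 512)),
        transforms.ToTensor(),
    ])
    ds = datasets.ImageFolder(root, transform=tf)
    return DataLoader(ds, batch_size=16, shuffle=train, num_workers=4)

def build_model() -> nn.Module:
    model = models.resnet50(weights=ResNet50_Weights.DEFAULT)  # ImageNet start
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)    # new DR head
    return model

def train_and_eval(train_dir: str, eval_dir: str, epochs: int = 5) -> float:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model().to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for images, labels in make_loader(train_dir, train=True):
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

    # Held-out gold-standard evaluation set: "how's it doing?"
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in make_loader(eval_dir, train=False):
            images, labels = images.to(device), labels.to(device)
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)

if __name__ == "__main__":
    # Hypothetical paths; in practice sensitivity/specificity per severity
    # level matter far more than raw accuracy for a screening program.
    print("eval accuracy:", train_and_eval("data/dr/train", "data/dr/eval"))
```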
Deep: And so have you guys operationalized these models? I assume the idea is that a patient comes into the clinic, gets assessed on the fly, and then gets sent for further screening or expert follow-up or something.
RJ: Yeah. So the system overall has been operationalized from a human perspective. AI is the next step. So we're still working through that to create a good model and then roll that out later this year.
Deep: And do you guys do the model construction yourselves, or do you hire it out, or does your client do it?
RJ: Yeah, we work under multiple models. In this case, we are doing it. In other cases, we have partnered with third-party companies to do that. We're very comfortable working under multiple models.
Deep: Cool. So, okay, we've talked about a few different things.
We've talked a lot about eyeballs and disease assessment there. So what do you think are the main thrusts right now? It seems like there's a lot of activity around diagnosis assistance. I tend to generalize that up a level, like physician's assistants:
whatever a physician does, can we help them? This would be heartbeat anomaly detectors, eye scan assistance, MRI assessments to help radiologists. That whole arena is one area. What are the big areas that you think are seeing the most promising, beneficial aspects of AI?
And I kind of want to anchor this conversation a little bit, because, as an AI consultant, I get asked this all the time: so, do you think all this AI is terrible? That's sort of the default, de facto positioning. And it's like, no, I definitely do not think it's all terrible. They're like, well, what possible good could it have?
We're all just going to get addicted to talking to this bot, and then our kids are going to grow up not knowing how to interact with real humans, who don't say they're perfect all the time. And I'm like, that will be a problem, just like it was with phones, but you'll also save a lot of lives at the same time.
So
RJ: That's absolutely it. At some point, it's going to be irresponsible to not use the AI systems. We as humans are very fallible. There's a reason for second opinions. My mother-in-law, decades ago now, passed away from lupus. It's a rare disease, and on the rare side it takes six to seven years to diagnose, because it presents differently.
And so how do you use AI systems that don't forget, that don't get tired, that are more aware of all of the research that's taking place? I can't keep up with one journal, let alone all the journals out there and all the medical research. So the biggest use cases in the industry right now are really the lower-risk ones.
Ambient listening is taking off. The simple idea is that it's taking notes, just as you and I would use GoToMeeting or any of these other virtual platforms that can take meeting notes in the background. That's essentially what these tools are doing, and then putting that information in the EMR.
Because the doctors, the healthcare practitioners, did not get into healthcare to click a box, check another box, and find some new medication in a dropdown somewhere. They want to take care of people. And this is one of those mechanisms that is letting them get back to caring for the people, listening to the conversation.
Yeah, let the AI take the notes and record the information. And what's slowly starting to happen, and a couple of vendors have this out now, is that you're marrying together all of that information with the research data. So, one: is the patient diagnosed with a specific version of cancer, and do they have a specific gene?
Oh, we're aware of a clinical trial that is looking for people like this; let's bring these people together. Or there's an off-label use of a medication that someone's using for treatment of a rare disease; let's try and use that. So a lot of those instances where you're putting together information that you and I just can't remember all of, or even be aware of, is a powerful opportunity.
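An ambient-listening pipeline of the kind RJ describes reduces, at its core, to transcribe, draft, review, and file. The sketch below keeps the speech-to-text, language-model, and EMR calls as injected placeholders rather than naming any vendor's API, and makes clinician sign-off a hard gate before anything is written back.

```python
# Sketch of an ambient-listening pipeline in the spirit described above:
# transcribe the visit audio, have a language model draft a structured note,
# and require clinician sign-off before anything is written to the EMR.
# The transcribe/complete/post_to_emr callables and note fields are
# placeholders, not any vendor's actual API.

from typing import Callable, Dict

NOTE_SECTIONS = ["subjective", "objective", "assessment", "plan"]

def draft_visit_note(audio_path: str,
                     transcribe: Callable[[str], str],
                     complete: Callable[[str], str]) -> Dict[str, str]:
    """Turn raw visit audio into a draft note, one string per section."""
    transcript = transcribe(audio_path)
    note = {}
    for section in NOTE_SECTIONS:
        note[section] = complete(
            f"From this clinic visit transcript, draft the {section} section "
            f"of the note. Stick to what was said; do not add new findings.\n\n"
            f"{transcript}"
        )
    return note

def finalize(note: Dict[str, str],
             clinician_approves: Callable[[Dict[str, str]], bool],
             post_to_emr: Callable[[Dict[str, str]], None]) -> bool:
    """Only a clinician-approved draft is written back to the EMR."""
    if clinician_approves(note):
        post_to_emr(note)
        return True
    return False

if __name__ == "__main__":
    # Stub transcription and drafting so the sketch runs without services.
    note = draft_visit_note(
        "visit_001.wav",
        transcribe=lambda path: "Patient reports two weeks of knee pain...",
        complete=lambda prompt: "[draft] " + prompt[:48],
    )
    print(finalize(note, clinician_approves=lambda n: True, post_to_emr=print))
```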
Deep: I'm going to push back a little bit so we can tease this out. I'm curious what happens if we rewind five years, because you work with healthcare startups, I work with healthcare startups. I think you're likely extremely well aware of how careful and cautious one is, and how the FDA is in general, right?
Anything that can at all be construed as offering medical advice, or medical feedback, or anything to do with the word diagnosis, gets flagged. I want to reconcile that with the state of today, where, you know, I was just chatting with my sister yesterday.
She's a physician, and she doesn't actually have a patient that doesn't come in with a "diagnosis," in quotation marks, from OpenAI telling them what they've got. And she's like, all roads lead to cancer according to OpenAI. But that's not actually what's going on here.
What happened there? How did we go from a world where everyone was scared to death of the FDA, and anyone offering something that a physician actually uses went through a credible review process, to a world where everyone under the sun is just talking to a bot that maybe in the early days tried not to offer medical advice, but now has no problem whatsoever offering all kinds of detailed medical advice?
It seems to me like the FDA either totally dropped the ball, or it's just too big now. It's too intertwined with a multi-trillion-dollar economy.
RJ: There are a lot of things going on. So in the small startup world, yes, there is still absolutely, I wouldn't say fear of the FDA, but a recognition that we need to listen to the FDA and follow their guidelines.
And you see this a lot: not intended to diagnose, treat, or cure an illness; this is general feedback for informational purposes only. That's why they put it out there. And even in the early days of, and I'll just generically talk about ChatGPT, but it's any of them, they did have warnings: this is not intended to be medical treatment,
we're not a doctor, we're not in healthcare, take this with a grain of salt. They are getting away from that. I think there are two aspects. One, as you said, they are just huge now and they have so much money; I don't know how much they're really concerned about it now.
Deep: And they know now that no one can shut them off without shutting off the economy at this point.
RJ: Globally. It's not even just the US; globally, this would have an impact if you said, we're shutting you down, OpenAI.
Deep: Oh yeah, if you shut down all three of them tomorrow, it would be like shutting down power.
I mean, it would be a big problem.
RJ: It absolutely would be. So it's not happening, realistically, overnight. The other thing, if you're in the US particularly, is that the current administration is taking a very hands-off approach, really stepping back and reducing the number of regulations and saying, go run with this, which is a mixed bag.
Because as you take your hands off the tiller and you're not directing it, scary things can happen. And that's what we're seeing. OpenAI specifically, when they released GPT-5, as part of their push into the world, they had specific examples of people using it for medical care.
I think one of the examples was a woman with cancer.
Deep: Let's be clear: have you ever not used it for medical care? I mean, I use it all the time. We all do this. We might say we don't, but everybody does this, if for nothing else than to figure out what the heck all the gibberish is that you were told by your specialist.
RJ: And that's, you know, we did this with Google years ago, and now we're doing it with this. I think the challenge here is that it very often appears authoritative, which is challenging. So as a person in the field, working with these technologies, I wholeheartedly advocate using it, but check with a medical professional.
Just don't take its advice blindly.
Deep: Have you heard of the paper that came out three or four weeks ago? There's a particular class of mental health disorders where the worst thing you can possibly do is engage in the delusion with the patient, which is exactly what all three of them do.
They're like, oh, you think you're Jesus? Well, you must be. The models seem to treat it like a joke, so they play along. But meanwhile, you can just look at these logs. And just today, I think it was Hasbro or Mattel, one of the major toy manufacturers,
big surprise, announced that they've got this partnership with OpenAI and they're going to embed AI in all the stuffed animals. I guess I'm curious, from where you sit, what do you think are the second-order effects that are going to be the biggest problems for the next generation?
When I think of the second-order effects of social media, for example: oh, social media in the early days, look, it's so great, everybody can talk to everyone across borders, and all of humanity is going to be all lovely. And then meanwhile, Zuckerberg, in all of his zeal, deploys it in Burma with no translators whatsoever, and next thing you know there are almost a million people being hunted down and killed through rumors being spread on the platform. You've got an epidemic of 14-year-old girls in severe depression, and upticks in suicide rates.
It seems to me the second-order effects of this are not understood, but are going to be profound. So much of our social dysfunction among our youth stems, according to the research, less from any particular app
and more from the lack of in-person interaction that results from it. And if anything is going to take away in-person interaction, it's going to be AI. I mean, it already does. I already have my more interesting conversations with one of the better models at this point.
RJ: I brainstorm with it ten times a day, where I used to maybe have called someone.
When you talk to it more, it's just so easy to be like, hey, what do you think of this? Generate me ten ideas around this, and run with it. I think that second-order issue is interesting, and I think it relates to critical thinking, which is our humanity, our ability, what makes us human:
the ability to think critically. That's where it's going to have this impact. Because yes, it's a perpetual intern, it's an 80% solution. Where it is today, its capabilities today, it's very capable. It's very intelligent, quote unquote intelligent, but it's not human intelligence. I think of it as, and I was thinking about this over the weekend,
an alien intelligence, AI. If an alien came to the planet and didn't quite know what was going on, but had studied a lot of information it had heard over the airwaves, it could communicate with us, but it's not going to get everything just quite right all the time. We do anthropomorphize it and make it seem human, but it's not.
And that's okay. It's that demise of critical thinking, where it comes off as authoritative. And if you ask it some obscure question, it's likely going to come back with an answer. And then, is it right? Lawyers have gotten in trouble a lot with this, where it's making up cases and they're just not checking them and submitting them to the court.
And the court does the work. The administration here in the US released a huge report on the health of Americans, and immediately people were saying, well, this reference doesn't exist, that reference doesn't exist.
Deep: Oh yeah. The critical thinking thing is interesting, because I feel like there's a pro side and a con side. I'm not one of those people who thinks humans are that intelligent and capable in general. I think we have humans who are very capable of deep critical thinking, but I don't think that's the norm.
The level of reasoning and thinking you get from these higher-end models, like o3 and GPT-5, which is probably leaning on o3 underneath, is better than the majority, or the vast majority, of humans in the vast majority of cases.
Can it outperform a deep specialist in their area of expertise? A lot of the time, yes, depending on how much you're willing to pay and how much GPU you're willing to give it. But not all the time. So I think we have maybe another year or two where we get to pretend that we're better.
But I don't think two years ago I thought we'd be here, and I certainly didn't think so four years ago, and I absolutely didn't think so five years ago. So I think there could be a benefit, right? Right now, those of us who are cognitively challenged get our brilliant ideas from the social media toilet, all kinds of stuff feeding our preconceived notions of the world.
And so we run around and spout nonsense conspiracy theories, and sometimes we wind up in the Oval Office, and the world is a lot more dysfunctional as a result. But AI systems are a lot better than the social media drivel. In general, if you ask these models a question, it's anchored in fact. I mean, kudos to the companies, at least at this stage of evolution, for not playing along with the conspiracy mongering that the social media companies blatantly embrace. And so maybe people get better, right? Maybe there's an optimistic view here where a 90-IQ person suddenly gets to lean on a 140-IQ bot and has less dumb ideas than before.
And maybe, if they learn nothing else, it's to ask the bot to assess the thing that the bot just said, and maybe they start to figure it out. The analogy I use is, if you take a two-year-old and you raise them in a family of conspiracy-laden, non-rational, non-educated thinkers, in a society, or let's say a town like that, and then you take that same kid and you raise them with a couple of professors, intellectuals, deep-thinker types, they're going to have two very different outcomes in life.
And we suddenly took a lot of that second category and made it at least accessible to that kid in the small town. I don't know, it feels to me like there is potential for a lot of good. I do think there are ten different ways till Tuesday that this could go south, and I think it probably comes back to business models and how these companies really make money.
Right now, I'm actually encouraged. Google's a good example: I'm actually encouraged that Google is figuring out how to monetize their AI in a way that does not involve ads, because I feel like that's probably a net positive for humanity.
RJ: I would say that is a net positive.
I love the historical perspective of this too, because we as humanity have gone through this with the printing press. When we all started having more access to books and reading material, it was demonized: oh, reading is bad, we're not going to be able to remember anything anymore.
Deep: Yeah, that was Plato, way, way back. They were really pissed off about the loss of the oral tradition due to writing.
RJ: So it's not a new concept that this technology is going to be the downfall of humanity. Look at electricity, same thing.
The telephone. I've had this conversation numerous times, telephone versus text message. When we all started text messaging years ago: oh, how is my kid ever going to survive in the world? Because he just text messages; he doesn't pick up the phone and call somebody. But as we all moved forward through time, text messaging became the universally accepted method of communicating.
Now we've adapted to it. The challenge, I think, with AI and where we are today is that it's happening so fast. We don't have the time to adapt to it, and that's what's making it much more challenging. And that critical thinking skill is interesting. My daughter just today started her professional career as a seventh grade English teacher.
And it's like, how are you going to deal with AI and writing and these critical skills? That's a whole other topic...
Deep: That you can, you know...
RJ: But you know, we still teach math, and we have calculators. It's like, we didn't stop learning math. If I'm doing math today, I'm using a calculator, or writing a program to do it, or using Excel to do the math for me.
But we still teach math, and we're still going to teach reading and writing.
Deep: I think your critical thinking phrase is still resonating with me. The question I ask is, what is the most important skill for people to learn?
I think critical thinking is probably it, right? The ability to really play the role of the editor, the critic, and be really suspicious about what you're told. Because I find that's what I do a lot now with these models. Whereas before, I was doing the bulk of the writing, the bulk of the reading, the bulk of the consumption, the bulk of the thinking.
I almost don't need to do that anymore. I can, but maybe we're the last generation that even can. What I still have to do is be really critical with the models. I find myself doing code review on bot-generated stuff more than ever now, and having skeptical discussions when it comes up with ideas.
But I feel like that ability to assess what's being told to you, and whether or not it's really true, is going to be essential.
And I don't know, I feel like this is going to be increasingly difficult. Right now, the bulk of our interactions with AI come from us knowing we're interacting with AI. But there are millions of developers right now plugging this stuff into everything, from stuffed animals to a tricorder that our physicians are going to hold.
It's not always going to be the case that you even think of it as, I'm interacting with AI.
RJ: It's going to fade into the background. You're going to be using it and not even generally be aware that you're using AI. You sort of broached another topic, which is an issue that is happening: we're going to develop this gap in expertise.
You know, we have these senior developers, architects, people with years of experience who know how to program, and interestingly, when they see code, they're like, eh, something doesn't smell right. They can really look at it and start asking questions and digging into it.
Now, as people are coming through various training programs and their educational degrees and getting into the industry, the technology can do the job of a junior developer, and it's becoming, okay, how do I talk to it, conversational programming. But they won't have that critical skill or knowledge or experience to say, oh, from a cybersecurity perspective,
that's just horrible, what it actually put out there. And so five years from now we have this gap of expertise and knowledge to question it. The hope is that the AI technologies get better and better and start incorporating the necessary cybersecurity and everything else.
But yeah, it gets interesting, this gap in expertise that's going to start developing.
Deep: This has been a really fun conversation. I've really enjoyed it. I'm going to wrap us up with one final question that I always ask everyone. Given where you're sitting in the healthcare arena, bringing a lot of this advanced technology to the fore, paint the picture for us, five to ten years out, of how our interactions with healthcare have changed. Give me the utopian version, but I'm also always interested in the dystopian version.
Like what are the things that freak you out and what are the great things that are gonna happen as well?
RJ: I mentioned that I have this life goal of living to a hundred plus with a good quality of life. I truly think AI is going to be one of those things that enables it. So much of our health
happens outside of healthcare. It's what I do day to day: when I wake up, what I eat, and going for a walk after a meal makes a difference. The AI is going to be able to engage with that wearable data, understand what's going on, and provide advice and recommendations that are useful to me as an individual, targeted to me as an individual.
If I don't understand something, hey, explain it to me like I'm five years old, and I'm going to get meaningful results. So I'm really looking forward to that. And we're starting to see a lot of those capabilities today, that very personalized information and interaction, and that's going to continue to get better as we move forward.
So yeah, I think AI is going to help enable me to reach that goal of a hundred plus with a good quality of life. That's the utopian vision: that we're all able to reach that and expand our lifespans and live well. The dystopian side, I don't even want to go there.
Deep: I think we have an obligation as technologists to think about the potential drawbacks. I mean, the obvious one is, maybe we get it wrong. Or maybe we don't live to be a hundred plus, maybe we live to be two hundred plus, and now we have an entire planet of super duper old people,
in a society that's biased towards that. That alone seems problematic. I mean, it's a good problem.
RJ: There are a ton of problems with it, and as you can imagine, when we do take on projects, risk management is a huge aspect of it. So you do have to think about these things, and you do have to think about the privacy of the data and everything that's related to it, to make sure that you are mitigating anything that might occur.
I hope, and it's going to take humanity some time, but we're going to figure it out, work it out. The technology is going to get better. This is the worst it's going to be; it's only going to get better. And I love a quote by Bill Gates that says we tend to overestimate what's possible in the next two years
and underestimate what's possible in ten. I don't think we can even envision where this is going to be in ten years.
Deep: Awesome. Well, thanks so much for coming on the show. This has been a really great conversation.
RJ: I enjoyed it. Thank you.