Embracing AI in Business: Navigating Misconceptions and Implementation Hurdles with Elise Oras

In this episode of Your AI Injection, host Deep Dhillon and Elise Oras, co-founder of Wheels Up Collective, delve into the practicalities and challenges of AI in business. They discuss the hesitations companies face regarding AI adoption, particularly around client-facing content and legal constraints, while emphasizing AI's role in enhancing marketing strategies, saving costs, and creating a competitive edge for businesses. The two discuss the importance of aligning AI choices with company values, address common fears about privacy and data breaches, and advocate for a more open approach to AI's potential.

Xyonix Solutions

Learn more about Xyonix's Virtual Concierge Solution, the best way to enhance customer satisfaction for industries like hospitality, retail, manufacturing, and more.


[Automated Transcript]

Deep: Hey there, I'm Deep Dhillon, your host. And today on our show, we have Elise Oras, co-founder of Wheels Up Collective and a marketing specialist with over 15 years of experience. We're going to delve into the challenges of AI integration, strategize on positioning businesses at the forefront of the AI revolution, and discuss the transformative value AI brings to enterprises.

Elise, thanks so much for joining us. Maybe we can start with a kickoff question: as a sales and marketing professional working with all sorts of clients, what are some of the common misconceptions or hesitations that you've encountered from businesses regarding AI adoption?


Elise: Yeah, so, um, what's interesting about Wheels Up is we are a boutique marketing agency that specializes in working with startups.

So startups often have maybe constrained budgets, or, you know, they're constantly scaling quickly, or they're flip-flopping on their messaging, who knows, it's changing quickly, right? And so one of the things we want to do is figure out how to make sure that we're giving the most bang for our buck without bloat, without wasting hours, and just figuring out efficient ways to do things to make everyone happy and kind of move quickly.

So one of the things we like to do is bring things like ChatGPT into, um, into strategies, into plans, into campaigns, into whatever, wherever we can, but we're getting pushback often. And the pushback is not coming from the marketing teams directly, or not even really from the marketing org.

They tend to be coming from, like... probably a lawyer, right? Somebody who has written in, um, you can't use ChatGPT to do X, Y, Z, or very specific things: you cannot have any client-facing content be written by AI. Um, so that's one of the common misconceptions, I think. You know, it's one thing to have an ebook be written by AI.

Sure. We're not going to do that. But I think that, um, having these hard-and-fast, black-and-white rules about when you can use AI, when you can't use AI, what type of AI you can use, puts companies at a disadvantage. And it's a challenge for us to help companies realize, no, this is your friend, this is a tool that you're going to use to, one, save a ton of money, but also just be more efficient, be innovative, and stay ahead of the game.

And we need to convince people that this is, this is the future. And this is not even the future. This is now. And if you're not doing it, you're going to get left behind.


Deep: In your experience, what do you think are some of the key factors that businesses should consider when they're looking for AI solutions or partners to ensure that they align with their goals and values and particularly in the context of privacy concerns?

Elise: Yeah, so I think that, um, you know, we've talked about this, but what are they trying to accomplish? What, at the end of the day, are they actually trying to accomplish? And is privacy really the concern that they have? Um, what type of data or what type of information are they using when they're using an AI model? If they're an e-commerce store and all of their website is public and they're going to be using, like, a virtual chatbot to help customers find certain items, do we really need to worry about privacy there?

Um, so I think working with companies to essentially connect them to, like, an AI consultant, where they can have something built for them, either custom or, um, probably not custom, probably not private, but have it built so that it works for them based on what their security concerns are. I think that's what we're trying to do right now, is bridge that gap to help them.

Essentially, at the end of the day, it's really about ROI and customer service and creating that great customer experience for their customers and prospects, but really making sure that they understand that, um, some of their privacy concerns are probably not as concerning as they think they are.

In our case, with a lot of our clients, the information that they're kind of wanting to sort through, or in some cases, I guess, uploading, is customer information, but not necessarily private information. How do we help them realize that, like, ChatGPT is okay? Um, what you're doing is not necessarily going to cause some big data breach. The really scary thing for companies is that they're going to have a reputation-management issue. So how do we help them realize that that's not really a concern anymore? It might have been a concern a while ago, but now it's just not worth what they're going to lose if they don't move forward with some type of AI.

One example that I want to talk through: BlackBerry just released this huge study, like two weeks ago, maybe a month ago, that showed that 75 percent of businesses want to ban, uh, generative AI. And that's a big statement, right? That's a huge statement

Deep: to their employees

Elise: internally. Yeah, from using it.

And when they actually dug into it, that's not what they want to do. When they started asking further questions, they don't want to ban it. They want their employees to use it. They see that it's productive, they see that it can be efficient, but they're afraid of, one, privacy concerns or data being breached.

And then the other thing is a bad reputation, because the output of something someone did wasn't true or, you know, had discrepancies in it, and so then the company has a reputation-management problem. So for somebody like that, what would you build for them, then, to say, hey, your employees can use it, and you can even use it with your private customer data, but this is what you need?

Deep: So I think let's jump up a level to what I think is really going on here. What's really going on is: people love what they see. Executives at large Fortune 500 companies love what they see on OpenAI with GPT-4. Everybody's used it at this point, and they're like, this is spectacular, this is amazing, whatever.

So they go to their whoever, their bureaucracy internally in these giant companies, and they're like, I want that, but for my customer service stuff. I want that, but in there. And then their internal people... it's a big, ginormous company. Now, if it's got, like, good tech-company DNA, like Amazon or Google or something, they'll figure it out pretty fast, but most of these companies don't have good tech DNA.

And so what happens is their IT administrators start running interference and start freaking out about all these little line-item things, little things, you know. They might be subject to various regulatory environments or government regulations, and if they're selling into healthcare, they might have HIPAA constraints.

There can be a million things. So ultimately the lawyers and their perspective is bubbling up to these execs, and it's a bit of a conflict, because everybody knows that we want to get this stuff that's, quote, public all the way down into their world. Now, in the meantime, there's, like, another fight that's been happening for 15 years where the lawyers' issues have mostly been worked through.

So if we think about cloud computing, there exists a universe of companies that still do everything in-house and have all of their, like, you know, computational researchers in-house, but there exists a larger world where everybody's comfortable having certain credible companies manage their stuff, like a Microsoft or Amazon AWS.

And those companies have sort of proven themselves to be trustworthy or whatever. And then there's a size issue. There's, like, huge companies that OpenAI themselves, or Microsoft, will set up a custom GPT for, on their behalf. And we've, you know, talked to and interacted with a number of companies like that.

But then there's, like, companies that are really small and they're not there yet. They're trying to get there, but they're not there yet. So what do we say? I never start with those conversations. I start with: what do you want to do, right? What are you trying to achieve? So as an example, a major mobile provider, like one of the big five mobile providers in the U.S., says, hey, we want to take our customer service prompting, you know, like the little bots, the goofy bots that are on your web page that answer questions about your product catalog and all that stuff, and generally do a horrific job at it. We want that to look like GPT-4. And I said, okay, great.

So why not: you look up your stuff, you send it over to GPT-4. So basically, the general architecture of the way this stuff works is, somebody asks a question, you have the back-and-forth message history between the person and the customer service rep, you take that history, and, um, now you have to, like, go figure out, well, what documents do I have that are in my private universe that answer that?

So maybe that's in your database or whatever. But you can imagine there's a spectrum of the kinds of things you're answering. You may be answering things that are super sensitive, like, hey, um, you know, my account billed me X dollars on this credit card last month, but it should have been Y dollars.

That's one kind of customer service request. And then another kind is, like, oh, I just want to know if whatever phone is coming out and going to be on the network, something that's public. But typically they're handling all of the above, you know, behind a fence. And so there's a couple of strategies that you can employ. Like, one strategy is: let's pick off the stuff that's public, like, safer.

And let's answer those and make that a much more efficient process. And we'll go ahead, and we're not going to the public one, right, we're going to, you know, a Microsoft Azure-hosted version of GPT-4 that's got all of the Microsoft guarantees that we don't sell your data. We're not sticking it on eBay.

We're not, you know, selling people's retinas or whatever, you know, scans or anything. And people generally trust Microsoft, everybody uses Teams, or, you know, Google, for that kind of stuff. And so it's like, okay, are you uncomfortable with that? And some set of execs say, yeah, for the public stuff we're comfortable with it, but we're not for the private stuff.

A lot of companies say, yeah, we're comfortable with all of it, because we know that the trade-off is we're not going to be able to be state of the art. And then some companies say, well, no, actually, I want you to address that super-private stuff, and I want to get good at figuring out how to do all of this stuff in-house.

So I want you to take the latest, greatest open source stuff, the latest, greatest open source models coming out of like places like Facebook and others that, you know, believe in giving the models to the public. And I want to like start figuring out what we can do with those lower level models. The lower level models are not stupid, right?

They're still better than anything we've seen for the, you know, millennia before five months ago, but they're not as good as what we've seen in the last five months, which it turns out is, like, a massive jump. So then you can say, okay, well, why don't we, you know, set up an innovation program where we push the envelope with GPT-4, because that's the best thing out there by far, Bard's not even close, like nothing else is even in the ballpark.

And we'll push the envelope there, and that's where we figure out what's possible, and that's what I talk about a lot. And then what we're going to do is figure out how to break up these higher-level reasoning things that require, like, a much higher IQ to answer, and we're going to try to break those up into smaller pieces and train a lot of pieces down in your little private Llama 2 instance, for example.

So you lead with GPT-4, and then you follow with your private model, and you can kind of get somewhere that way. You're listening to Your AI Injection, brought to you by xyonix.com. That's x-y-o-n-i-x dot com. Check out our website for more content, or if you need help injecting AI into your organization.
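The public-versus-private routing Deep describes could be sketched roughly as follows. This is only an illustration: the endpoint functions are placeholders, and the keyword check stands in for a real sensitivity classifier, which any actual deployment would need.

```python
# Hypothetical sketch of the routing described above: public-safe questions go to
# a hosted frontier model (e.g., an Azure-hosted GPT-4 endpoint), while requests
# touching private account data stay on an in-house model (e.g., a Llama 2
# instance). The keyword check is a toy stand-in for a real classifier.

SENSITIVE_TERMS = {"account", "billed", "credit card", "ssn", "address"}

def is_sensitive(message: str) -> bool:
    """Crude stand-in for a classifier that flags private-data requests."""
    text = message.lower()
    return any(term in text for term in SENSITIVE_TERMS)

def answer_with_hosted_model(history: list[str], docs: list[str]) -> str:
    # Placeholder: would call the hosted model with the chat history plus
    # retrieved public documents (product catalog, FAQs, etc.).
    return "[hosted-model answer]"

def answer_with_private_model(history: list[str], docs: list[str]) -> str:
    # Placeholder: would call the self-hosted model with private records.
    return "[private-model answer]"

def route(history: list[str]) -> str:
    latest = history[-1]
    docs = []  # retrieval step omitted; would fetch relevant documents here
    if is_sensitive(latest):
        return answer_with_private_model(history, docs)
    return answer_with_hosted_model(history, docs)
```

Which requests land in which branch, and which model backs each branch, is exactly the risk-reward judgment the conversation keeps circling back to.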

Elise: If you're doing it that way... so one of the issues that we have with some of our clients is, from a marketing standpoint, they want to take customer survey data. This is a very common use. And before, people were going through it manually, or using very expensive annotators, probably, to do it, right?

And now they're like, oh, we can take this customer survey data, put it into, uh, you know, some type of a program, and then get this output, and we can do it in a couple of days versus weeks. And so, what you're talking about: they're using customer data, and they want to make sure that that is not going to be breached at all.

But what you're saying is, they could do this very easily with these almost inexpensive models, building on kind of the lower-level models, and be able to do this and not worry about having breaches.

Deep: Well, they could. But by customer survey, I think you mean, like, questionnaire kind of data?

Elise: Yeah, like a feedback survey. So they'll ask... I mean, they could do that.

Deep: I mean, honestly, look, the first question is, do you feel comfortable having a shared drive hosted on Google or Microsoft or one of those? Which they all do, right? So their survey data is sitting over there already. If Microsoft wanted to stab them between the eyes and take that stuff out to the public, they could do it, but it would kill their multi-billion-dollar business.

Google would kill their business. They don't want to do that. They're trying to maintain that trust. So that's the first question. Some very, very small and ever-shrinking number of companies will say no to that, in which case, okay, you know, maybe they have contracts with, whatever, the CIA or the NSA or something.

And, like, there's real security risks there, and they're super paranoid, on that spectrum. But that problem was solved 15 years ago; people have moved past that. So then the question is, okay, well, now let's say you're in your GPT-4-hosted Microsoft instance that's shared.

Do you not believe it when OpenAI and Microsoft say they're not going to take your contributions to train the model if you say they can't? And they'll say, no, I guess we believe it, or, no, I don't trust them. And, you know, because generally, like, companies, if they say something, they do it. I mean, there's exceptions, but usually it's not malicious.

Typically it's just bureaucratic mistakes or something. So if they say, no, that's fine, like, if we can exclude our stuff from contributions to the new AI models, then I feel okay, but usually there's just some lawyer who needs to check something off behind the scenes.

And if they say, okay, no, that's one of my concerns: okay, great, so far, so good. And then it's like, okay, well, then, are you worried about the logged transactions that say somebody at this company analyzed this message, and there's some employee running around internal to Microsoft that's looking at it?

If they say, yeah, I'm worried about that, or no... but usually it's not driven by these kinds of concerns. It's usually driven by compliance with a particular regulatory situation. It's usually, like, I'm under HIPAA, so nothing can leave, in which case, no, it cannot go to a non-HIPAA-compliant hosted environment. And then it's like, okay, well, Microsoft one day will get to a HIPAA-compliant version of GPT-X, but that's not going to be for a few years.

I'm guessing, you know, it's not high on their list right now. So that could be a case where you've got to go to a totally private hosted model.

Elise: Okay. I think what we're hearing is there's just a lot of unknowns. And when you're putting it like, oh, do you mind having a shared drive in Teams or whatnot?

Uh, companies are very comfortable with that. But I think maybe because it's so new, they're worried about even just, like, customer emails getting uploaded into GPT-4, um, to sort or to do some duplicate matching. That's an actual, real thing that was written into a contract: we can't use GPT-4 for any type of customer email, right?

Which is fine, we're not going to do that if that's in the contract. But I think that part of it's a fear that it is going to be used in either future versions or somehow going to be leaked and now all their customer data is leaked somewhere.

Deep: Yeah. I mean, you know, and like, look, OpenAI is like a new player on the block.

Right. And, I mean, first of all, they didn't know it was going to take off and go insane like this, right? So they're, like, just sleeping under their desks, trying to ride this tiger. But, like, Sam Altman's an utterly reasonable guy. And months ago he said, we have a checkbox: you have to opt in to let us use your data to train the next generation of models. If you do not, we don't.

You have to opt in to let us use your data to train next generation of models. If you do not, we don't. So that now, then it comes down to, well, I don't know, like he said that, but does it, does people follow it? Like they don't have all the compliance and controls in house that make you feel comfortable the way that a Google or Microsoft does at this point after 15 years of maturation.

It's just

Elise: interesting how we're seeing more, um, companies add things to the client contract. On one end of the spectrum: you can't use AI. That was actually in a contract. And, um, well, everything we do is AI, right? Like, we have an Asana task list that we use, and tasks go into Slack.

There's a bot that connects. I mean, technically that's AI, right? So having something like that in a contract makes no sense, but we are seeing more of this: you can't use any GPT or any other models to do anything that would be client-facing or customer-facing. And again, is that because there's a fear there? Are they just thinking, oh, the final text that they're going to get, like, if we're writing an ebook or something, is going to be written by AI? Or are they actually afraid that we're doing research with AI? You know, I don't know what the fear is, or is it just that a lawyer put it in there

Deep: and it wasn't really well thought through.

I mean, like, look, this was like 10, 12 years ago, I was selling my company. And at one point, you know, you go through all this diligence, and I was sitting in a room with seven lawyers. There's, like, three from the new investors and three from the others, and a couple of consultants, and they sat around for, I don't know, three hours arguing about some nuanced thing inside of the Sun Microsystems Java license.

And half of them were trying to basically conclude that you don't actually own the rights to Java code that you write. And I was just tuned out, because I'm like, this is the boringest meeting I've ever been in in my life. These people seem completely clueless to everything.

So I finally was just sick of it. I took off my headphones and I looked up and I'm like, I don't know what the hell you people are wasting your time on, but literally 20 million developers are using the number one most popular language to program stuff in, in the entire world, and in the vast majority of those applications, the people writing the code retain ownership over it.

So I don't care what you're talking about, but whatever it is, it's an utterly pointless conversation, because lawyers much better than you guys have already figured this out. Otherwise, 20 million people wouldn't be using the software. And then one of them is like, yeah, let's move on. That's what's going on with AI.

The lawyers just need time to, like, have enough precedent get created so that they kind of come on board. I mean, it happened with the Internet too, you know; they were so freaked out about all kinds of stuff on the Internet. But at the end of the day, lawyers don't run companies; people who want to get something done run companies.

And if you want to get something done and you want to innovate, then you use the latest tools and you put your lawyers in a box. And you're like, you can come out when I say you can come out in certain scenarios, but otherwise just go away.

Elise: Yeah. And so the other interesting thing that I think we're seeing: some of our clients that are larger tech companies, they get it.

There's no issue there, right? It's the smaller companies. I think that there's this, I don't know what it is, it's like this, uh, hold on everything: hold on their data, hold on even some of their ideas, right? And there is this fear that they have this idea, and it's proprietary, and it's going to get leaked.

And so they understand the efficiencies and they understand, uh, productivity with AI. So, like, how do we help them realize, A, honestly, it's probably fine. But B, okay, if it's not fine and we want to do this, here's what we can help you build. We can, you know, bring in an AI consultant for you, and you work with this AI consultant to build what you need.

And how do we help them learn that what they actually need is out there, and it's affordable? And then how do we kind of connect them to that, so that they're able to build an instance that they're comfortable with and that they feel is safe and secure?

Deep: I mean, I think everything starts with the problem they're trying to solve.

Right. So, somewhere they have a problem they're trying to solve, and that's the most important starting point for this. It's not that they have to shove AI all over the place because they keep hearing about it; that's a really lame reason to go grab the latest buzzword and start throwing it around.

But somewhere there's, you know, some significant inefficiency in their business, something that's going to be transformative to their business, like help them generate a lot more revenue or save a lot of money or something. And it's typically something that they get pretty excited about, that they have a lot of keen intuition about, and all that kind of stuff.

Now, then the question is, well, what's the risk-reward trade-off? And can we characterize the risk and characterize the reward? And so there may be a scenario where there's some risk, um, you know, associated with using an external service, and there may be insufficient reward. If there's a significant-reward scenario,

then the risks can be mitigated. But in, like, a lawyer-first company, risk gets put first; that's kind of what happens. You have to put the reward and the benefit first, and then you mitigate risk. Because if you don't care about how important this thing is to address, then you're naturally just going to come up with some goofy rule that says you can never use ChatGPT for anything, because you don't care about the thing you're trying to get to.

But if you do, then you start by prototyping, you know, you start by making a few assumptions in a safe area with safe parts of your data. And you go ahead and you start prototyping simply on the smartest engine out there. And you ask yourself, like, am I excited by what we came up with? Would our clients be blown away by this?

And generally the answer right now is, yeah, I'm really excited, and my customers will be totally blown away by certainly anything we do that's like this. And so then you can go figure out how to navigate all that other stuff, and there's plenty of tactics and approaches. And don't forget, there's an entire ecosystem marching to make it possible for you to not have to use OpenAI; like, a massive multi-trillion-dollar ecosystem right now is trying to make it so you do not have to use OpenAI.

It's

Elise: interesting. So, um, we're working with a team right now whose company bans everything, but specifically ChatGPT is what they're trying to use, and they can't use it from their VPN, from anything on their service. It's blocked. But the team still uses it, right? They just grab their own laptop, or they fire up their phone, or whatever.

Um, and so they are just too

Deep: valuable. Exactly. It's like high schools preaching abstinence. Yeah, sure, you can preach it all you want, but at the end of the day, you know, the kids are going to do what they're going to do. And it's the same thing with this.

Well, and it's funny

Elise: you say that because, you know, I live in Raleigh, North Carolina. Wake County is our school system in Raleigh, and they and, uh, the Chapel Hill-Carrboro schools both banned ChatGPT last year, among other things. And this year they're like, wait a minute, that wasn't smart, because this is a tool that the kids need to use.

They need to know how to use prompts. They need to know how to verify information to see if it's correct. Let's actually use this as a tool. There are other school districts just outside of where I live, like 15 minutes away, where it's completely banned. You get a 10-day suspension if you're caught using it.

Deep: Which just means they don't get caught. Exactly!

Elise: So they're just using it on their phone. And, you know, now it's very hard to detect, um, if something was written by ChatGPT or a person. Um, I wrote a sales email recently, and I put it into an AI detector, and it said it was AI, but it was my own writing.

Deep: Well, now they'll say stuff like... you know, we just took an article where we actually wrote a good chunk of it from scratch, stuck it in there, and, you know, we used like five or six of them, because I want to make sure, mostly for SEO reasons, that things don't appear as if a bot wrote them.

You know, they're generally calibrated to say something like, we detect a 52 percent chance that this thing was entirely written by AI. It's like, okay, it's a coin flip, you know. But if you click here and pay 30 bucks a month, we could probably more accurately tell you what's going on.

Elise: Well, and it's, you know, interesting, because in the schools and in business and anywhere, people are going to use it. As we said, people want to be efficient. They want to go home earlier, they don't want to do the crappy work. I mean, one of my favorite things is handling my calendar with AI; it just handles it, right?

I don't even know what happens with my calendar anymore. And so, you know, we get to this point where, these companies that are banning it, what does that cost them? Does it cost them employees leaving? Does it cost them work that isn't good enough? Their employees billing more hours?

All of the

Deep: above. You know? I mean, does anyone really want to be the last Luddite, sitting there advocating for a pick in a coal mine as the sole way of getting coal out? These are temporary blips. Nobody's arguing that the stuff's not transformative and worthy of use. People were arguing that for a while, but they've mostly been shut down, because, like, everybody gets it. It's just so unbelievably obvious how powerful the reasoning engine is.

I do think there's a ton of legit privacy concerns, and there's a ton of scenarios where you have to build or host your own private LLMs in your own world. And the scenarios that I think are most relevant and most legit are the ones where somebody is subject to some regulatory scenario like HIPAA, like, you know, all healthcare data.

You can't just willy-nilly be taking healthcare data and putting it up into some random, you know, non-HIPAA-compliant environment. You can't do it; you'll literally go to jail. So, need help with computer vision, natural language processing, automated content creation, conversational understanding, time-series forecasting, or customer behavior analytics? Reach out to us at xyonix.com. That's x-y-o-n-i-x dot com. Maybe we can help.

Elise: And I think, you know, I use a lot of chatbots. If a company has one, I'm going to use it. And it's amazing how bad they are. It's shocking. I think I've shared this before: my husband was looking for his dentist appointment, when is my next dentist appointment?

And he used the chatbot, and it literally couldn't tell him what his next appointment was. It said, call the office. And something like that is so basic, it should be able to pull it. Maybe it's that they need to be compliant or whatever, but I feel like the healthcare bots they do have are so poor and so bad that it's not even worth using them right now.

I would love to see those just in my personal life, get a little

Deep: bit better. Generally, there's certain parts of the economy that technologically trail by 10, 20, 30, 40 years, right? And they're usually the scenarios that are heavily regulated, right? Like healthcare. Walk into any hospital and they all look like they're, you know, from a seventies movie: horrible lighting everywhere, ugly paint on the walls, terrible tile. Everything about them reeks of the seventies, all kinds of bad design decisions from the seventies. And it's not that they're awful institutions; it's just that they're subject to an incredible amount of regulation, honestly, as they should be.

I mean, they're, you know, manipulating our lives. And so a law that seems utterly reasonable, like HIPAA, gets overwritten or pushed in a certain way, and, you know, 30 years later we're still paying the price for it. It's going to be pretty stinking hard to get a lot of these AI systems through the regulatory process, even though they outperform cardiologists at diagnosis, even though they outperform, you know, pulmonologists at detecting pneumonia. But they'll get through, you know, they will make it through.

And the same thing in the education arena: there's a lot of regulation, there's a lot of stuff. And I would say, as far as the places where private LLMs make the most sense, it's typically those arenas where regulation is the concern, not the other kinds of conversation items we've been addressing, you know, the light and fluffy concerns.

Those are probably not worth spending a lot of time and energy trying to address as a business investment, but the ones that are genuinely not likely to change in the next 50 years are. And there's a lot of those.

Elise: And what about going the virtual concierge route, though? Companies are starting to invest more in that kind of customer service, and also, just the fact that their customer success staff is probably answering the same 10 questions over and over again, right?

Like, what about going that route? I don't know if the privacy concerns are as big there, but it's really about training that model on the ins and outs of a hotel. Or if you're, say, Home Depot, and you're looking for a steel door with a panel in it, and you want to ask and have it kick out all of the options. Where do you think that kind of thing lands with these more private models? I guess, what are your opinions?

Like, where do we move forward with these? Cause I feel like that's kind of the next wave of customer service. And I think there are some companies starting to do it, but they're not there yet.

Deep: Well, I mean, I think a ton of people are doing it right now, including us.

Like, we have multiple projects, you know, going on there. Again, as far as whether you go public or private, it comes down to control, money, latency, like how long you can wait for a response, and just how cutting edge or innovative you want to be. And depending on your answers to those four, you're either leading with GPT-4, and then maybe, as you scale and have success, pulling more and more stuff into cheaper, easier-to-manage, more controllable scenarios, or you're just closing yourself off from the latest innovation

going on, trying to address your problem with whatever you can fully control. But from what we've seen, and I don't know if it's just the kind of customers that reach out to us, everyone is in the same boat: they're like, we need to study this, and we want to just go to OpenAI for now

and make it as great as possible. And then we'll take it on a problem-by-problem basis and figure out how to pull things in house, or downstream, or into more controllable environments, or have redundancy on some other cloud LLM provider, whatever it may be. Generally, that's what's happening, but they want really strict guardrails up, so that when somebody says, hey, what time does Home Depot close, you don't say 3 p.m. when it's midnight, right? So that accuracy is way more important. And there's a ton of techniques that we use to put those guardrails up, because if you go to a high-level LLM and you ask it about something specific, it's very likely that it just makes up an answer that sounds reasonable.
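The guardrail approach Deep describes, keeping answers pinned to known facts so the bot never invents a closing time, is often implemented by grounding the prompt. Here is a minimal sketch; the fact store, function name, and refusal wording are all illustrative, not from Xyonix's actual systems:

```python
# Hypothetical sketch of one common guardrail: ground the model in known
# facts and force a refusal when the answer isn't in them.

STORE_FACTS = {
    "hours": "The store is open 6 a.m. to 10 p.m., Monday through Sunday.",
    "returns": "Returns are accepted within 90 days with a receipt.",
}

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that restricts the LLM to the facts we provide."""
    facts = "\n".join(f"- {fact}" for fact in STORE_FACTS.values())
    return (
        "Answer the customer's question using ONLY the facts below. "
        "If the facts do not contain the answer, reply exactly: "
        "'I don't know - please contact the store.'\n\n"
        f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_grounded_prompt("What time does the store close?")
print("ONLY the facts" in prompt)     # the restriction made it into the prompt
print("6 a.m. to 10 p.m." in prompt)  # so did the grounding fact
```

A real deployment would send this prompt to an LLM API and typically add a second validation pass over the model's answer before showing it to a customer.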

Elise: right?

So if you're wanting like the actual true answers that you're feeding it, okay, that makes sense. What about like computer vision as well? So if, like, you have a car accident, right? You're, you're in a car accident, you have this chatbot on your phone for your insurance company that says, like, send the pictures.

Where do you see that kind of thing falling? Are there privacy concerns there, because the person's uploading images, uploading things that they technically own? Or would that be another situation where you just kind of have to trust that these tools are already using these file storage systems? I guess my question is, where do you see this going with computer vision mixed with the LLM, so that a consumer who gets in a car accident can easily upload everything and get everything they need about what happened, without a lot of interference and a lot of phone calls to the insurance company?

Okay.

Deep: I don't see what's new in the AI world compared to what's old in the pre-AI world for that.

Elise: Right? Like, are they just not using it that way then? Cause, like, the insurance companies, are they using it now?

Deep: I mean, insurance companies are pretty old school. Like, there's not a lot of machine learning going on.

And if there is, it's more like they're analyzing their collections of texts or whatever. But they're not going to be the companies leading the generalized AI models. That's not going to come out of an insurance company. That's coming out of the big five tech companies.

Elise: That's just my wishful thinking, to make those types of processes so much easier.

Deep: Yeah, but I mean, I think kind of what you're asking is, there's this question about rights usage around the data that goes into these models, right? And if you think about the very early days of the Internet, these questions came up a ton, you know, because everyone's sort of early instinct was, that's mine.

That's my paper I wrote in third grade. Like, you can't use it. But it didn't take very long before everybody wanted everyone to know that what's theirs was theirs. Which meant, yeah, even though there was this standard that emerged, the robots.txt standard, that basically said you can shut off all your content from being indexed by the search engines.

But it turns out nobody really wants to. It's like, nobody loves living in a cave where no one knows who you are, right? Everybody wants other people to know they exist. And AI is largely going to have a similarly organic evolution to rights management. But there's going to be some high-profile cases where OpenAI is probably going to have to cough up billions of dollars to big providers. Like, Reddit is going to be getting huge checks from OpenAI.

You know, Stack Overflow is going to be getting checks, because that intellectual property of theirs has to be in OpenAI for it to work well. And they're just going to have to write the checks. But, you know, my personal blog? OpenAI doesn't care if they got that, and I'm going to want it in there anyway. I'm not even going to care.
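The robots.txt standard mentioned above can be exercised directly from Python's standard library. A small sketch, where the policy text, crawler name, and URLs are purely illustrative:

```python
# Sketch: checking a robots.txt policy with Python's stdlib parser.
from urllib.robotparser import RobotFileParser

# An illustrative policy: every crawler is barred from /private/.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Pages outside /private/ may be fetched; pages inside may not.
print(parser.can_fetch("MyCrawler", "https://example.com/public/page"))   # True
print(parser.can_fetch("MyCrawler", "https://example.com/private/page"))  # False
```

As Deep notes, the standard only works if crawlers choose to honor it; nothing in the protocol enforces the policy.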

Elise: I think one of the hard things for me right now, going back to our clients, is talking to them and really convincing them that this is okay. That they need to be utilizing some type of bot, maybe not ChatGPT, whatever they want to use that they're comfortable with, and that they need to be utilizing it so that they're essentially not left behind.

And yeah.

Deep: I mean, how do you convince them? I think you just tell them point blank: if you're not using ChatGPT 30, 40 times a day, you're getting significantly dumber than your competition. Every day, you're just dumber and dumber and dumber. And I tell this to everybody, and they're like, what's so great about it?

I'm like, well, for one, I have way more interesting conversations with ChatGPT than I do with any humans. And they think I'm joking, and I'm like, here, look at this dialogue of me talking to it about, you know, the dream I had last night. You're telling me you could have a conversation with me

that's even a thousandth as interesting as that? Like, there's no way you could. Unless you're a trained Jungian therapist, you're not gonna get anywhere with me. And even then, you're going to suck, most likely, compared to ChatGPT 4.

Elise: Are you just utilizing the $20 subscription, or...?

Deep: So we do everything. All of our clients have keys, we have keys, we're doing API calls, we're doing really sophisticated, dynamic prompting. Like, the systems we're building have hundreds or thousands of different prompts that are manifesting themselves in different ways.
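That "hundreds or thousands of prompts" idea can be approximated with a template registry filled in at request time. A toy sketch, where the template names and fields are invented for illustration and a real system would pass the rendered prompt on to an LLM API:

```python
# Toy sketch of dynamic prompting: pick a template per task and fill it
# in at request time. Templates and field names here are illustrative.

PROMPT_TEMPLATES = {
    "summarize": "Summarize the following for a {audience} audience:\n{text}",
    "rewrite": "Rewrite the following in a {tone} tone:\n{text}",
}

def render_prompt(task: str, **fields: str) -> str:
    """Look up the template for the task and substitute its fields."""
    template = PROMPT_TEMPLATES[task]
    return template.format(**fields)

prompt = render_prompt("summarize", audience="startup founder",
                       text="Quarterly revenue grew 12%.")
print(prompt.startswith("Summarize"))  # the summarize template was chosen
```

In a production system the registry would be much larger and the choice of template itself might be driven by earlier model outputs, which is what makes the prompting "dynamic."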

And yeah, it's fairly involved at this point. But think about the value you get from just intelligence, right? I think people underestimate the value of intelligence. Like, if you have an 80 IQ, navigating the modern world is really hard. You can't even figure out how to pay your health insurance, you know?

You can't even get a job to pay your health insurance, most likely. And then you're gonna make all kinds of idiotic decisions, left, right, and center. And if you're, like, 140 IQ, and I'm not one to get obsessed with IQ, but if you're 140 IQ, you know how to invest, you know where to put your money, or how to get somebody to help you do that.

You know how to get your butt to college and graduate and get out of grad school and get a high-paying job and a career, all kinds of stuff. You are a much higher marital prospect, you will reap all of the benefits of actually being married, most likely, and everything about your life goes easier.

And now we've got a tool that can take that, I don't know about 80, but maybe a 90 IQ person, and give them a 140 IQ entity that's going to get smarter and smarter and smarter. So we've created a level playing field, if they would just use it. Just be like, hey, how do I pay for my, uh, insurance?

Like, every single thing they would ask their smart older sibling, they should be talking to it about. And, yeah, they should be careful and check stuff a little bit, but generally it doesn't make stupid mistakes. It just doesn't, except for math. It's horrible at math.

Elise: So don't use it for math. But what's a good way to fact-check GPT, to make sure that the output you're getting is true, or at least not going to send you in a totally wrong direction?

Deep: I think that's a false concern. I think that's a concern I had five months ago. I never bother anymore.

Okay, I really don't care. I mean, except when I'm using it to write code, but then my fact-checking is obvious, you know: I run the code. If it works, it works. If it doesn't, it doesn't. But, yeah, I really just don't care anymore. I mean, it sounds crazy. But the way I think about it is: read it and use your brain. But do you fact-check everything from, you know, your favorite buddy in grad school that you'd stay up late at night having chats with?

Did you fact-check everything they said, or did you just assimilate that information into your mental model of the world and put it somewhere?

Elise: Yeah. Well, I guess I want your opinion on this. So one of the things I heard that I thought was kind of interesting was, um, that ChatGPT is cheating.

Right. Obviously for schools, whatever, they probably have guardrails around what counts as cheating. But I had a friend tell me that another friend wrote a eulogy that ChatGPT wrote, like, 70 percent of. First off, when you are writing a eulogy, you are a hot mess anyway, right? You need help.

You need someone to kind of bounce it off of. And the eulogy was beautiful and wonderful and just perfect. And another friend said, well, that was cheating; that didn't come from your heart, since it was cheating. And I don't look at that as cheating. I look at it as another tool. And if you prompted it well, and you said, hey, you know, ChatGPT, ask me the 10 questions that would make this eulogy personalized and perfect and whatnot.

Like, is that really cheating then? What's your opinion?

Deep: I mean, my opinion is, throughout human history we have had access to other sources of intelligence, namely other people and other groups of people. And we've wandered around and read their content and talked to them and assimilated their knowledge and information into our worldview. And then we talk, and we say things, and we do things, and we have largely evolved with this false premise that we're unique.

That we're actually interesting, and that what comes out of our mouths is ours. But it's just wrong. It's not true. Like, ask any guitar player if they've accidentally come up with some really basic riffs that have been, like, famous pop music. There's just not that many permutations of the stuff that humans like, when they listen to stuff, put on a guitar.

Like, you know, I mean, it's a finite number. It's large, but it's finite. And if you start factoring in the probabilities of where you would be on the guitar, like down low, you're just going to repeat. And this is why, you know, both Newton and Leibniz invented calculus at the same time, cause you're just looking at the same info and the same set of

inputs. This is why, every day, there are multiple startups tackling the exact same problem. So I don't think ChatGPT cheats. Well, it cheats in the same sense that humans have always cheated. It just can read a hell of a lot more than we can, and it can assimilate it a hell of a lot better than we can, generally. And so then you asked, I think, at one point, well, how do you fact-check it?

Well, that's coming, right? Everyone's kind of on it, and it already exists. I think there's a few of them. There are a few LLMs that have citations on them, but a lot of the citations are done after the fact. It's like: you use the LLM, you generate a sentence.

Now, not literally like this, but you go ahead and do a couple of Google queries. You look for stuff that's really close to what you just wrote, and let's call that your citation. Was it really? I don't know. You know, my 13-year-old would freeze up and could never finish an essay, because he would work on them with the internet right there.

And then he was constantly like, oh, somebody already disproved this thesis of mine, and he'd throw it away. And I was like, yeah, you should not write your essays in front of a computer on the internet. Just turn off the internet and turn it in, and check things afterwards. Cause he would just get too overwhelmed.

I'm like, at your age, you don't need to make unique contributions to the world. You know, third year into your PhD program? Yeah, go for it, let's try to be unique.
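The after-the-fact citation flow Deep outlines, generate first, then search for the closest matching source, can be sketched with a crude token-overlap search standing in for a real retrieval engine. The sources and similarity measure below are invented for illustration:

```python
# Hedged sketch of post-hoc citation: generate a sentence, then find the
# known source that best matches it and call that the citation.

def token_overlap(a: str, b: str) -> int:
    """Count the words two strings share (a crude similarity measure)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def find_citation(sentence: str, sources: dict) -> str:
    """Return the name of the source whose text best overlaps the sentence."""
    return max(sources, key=lambda name: token_overlap(sentence, sources[name]))

sources = {
    "calculus-history": "Newton and Leibniz developed calculus independently.",
    "guitar-theory": "There are finitely many playable chord permutations.",
}
print(find_citation("Calculus was invented by Newton and Leibniz", sources))
# → calculus-history
```

This is exactly the weakness Deep points at: the citation is attached after generation, so a close textual match can be labeled a source even when it never actually informed the sentence.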

Elise: And I think, you know, the ethics comes up a lot, and it's something we've talked about before. The ethics of what you can use AI for versus what you should be using your own brain for. But to your point, if your own brain is really just this conglomeration of everything you've learned and everybody you're around, is it really your own brain at this point?

Yeah. It isn't that much different than using an AI helper, I guess I'm going to call it, that gets you to where you need to be. If you're still able to prompt it properly, prompt it to where it's getting you the answers, then you're thinking of the prompts. That's still using your brain, to think of the prompts.

Deep: Yeah. I mean, if you think about plagiarism, for example: who in the world really cares about plagiarism? Academics care about plagiarism. Educational institutions care about plagiarism. I would argue virtually no one in the business world cares about plagiarism. They plagiarize things all day and night long.

Everything anyone ever writes is just blatant plagiarism. They're using the exact same words as other people. They're pulling quotes out from something they heard somewhere. They just don't care. And does it matter? Not really, you know. I don't think it matters, because I don't think anyone's ideas are that original.

I mean, there's some exceptions in academia. Academics make their reputation, and their way in the world, based on this notion of their ideas being unique. But the vast majority of the time, they're not.

Elise: Yeah. It reminds me of, um, a couple of companies I've worked at. I did a lot of website content, and they would say, hey, go look at X, Y, Z company and do everything they're doing.

Right. Because X, Y, Z company was in the top three of Google searches. And so that was the directive we got from the CMO, or whoever gave it. And that's what we did. And it worked every time.

Deep: Being original is way more hypothetically valuable than actually valuable. It's a sad reality, but it is what it is.

I mean, it's just nowhere near as important. Like, fast followers do better, usually, in companies. Like second siblings: fast following's got its advantages. You know, that's all for this episode of Your AI Injection. As always, thank you so much for tuning in. If you've enjoyed this episode and want to know more about recent advancements in AI, you can check out a recent article of ours by Googling LLM efficacy assessment, or by going to xyonix.com/articles.

com slash particles. Please feel free to tell your friends about us. Give us a review, check out our past episodes. Podcast. xyonix. com. That's podcast. x, Y O N I X. com.