Why Most Companies Are Failing at AI and How to Succeed with Tahnee Perry

Are most companies accidentally sabotaging their AI transformation? 

In this episode of Your AI Injection, host Deep Dhillon sits down with Tahnee Perry, AI advisor at A25, to expose why ~95% of companies are using ChatGPT but still failing spectacularly at AI implementation. Tahnee reveals how organizations are stuck in "Wild West" mode, with rogue employees experimenting randomly while leadership throws around AI buzzwords like it's a magic bullet. She shares examples like a marketing team that built a custom translation GPT in 45 minutes and cut $100,000 a year in translation costs, and explains why the hunt-and-peck workers of today should be rethinking their career paths. Deep and Tahnee explore whether formal AI training is the difference between thriving and floundering in the next economic shift.

Learn more about Tahnee here: https://www.linkedin.com/in/tahneeperry/

Xyonix Partners

At Xyonix, we empower consultancies to deliver powerful AI solutions without the heavy lifting of building an in-house team, infusing your proposals with high-impact, transformative ideas. Learn more about our Partner Program, the ultimate way to ignite new client excitement and drive lasting growth.

[Automated Transcript]


Tahnee:
The World Economic Forum released its annual report in January, and what they found when they analyzed the workforce is that we're gonna lose 92 million jobs in the next five to ten years, but we're going to create 172 million. And what's happening is the jobs that have a lot of that repetitive manual work associated with them will go away. To give you a concrete example, if you are a bookkeeper and you are just logging transactions, your job won't exist, 'cause AI will do that. But a financial analyst whose job is to architect an entire solution and to monitor performance, there'll be a lot more of those jobs. So yes, there's a lot of fear right now about AI replacing people, and it is going to happen. There are people who are like, don't worry, it's gonna be fine.

I don't know about that, but I would say that if you currently do a lot of hunt-and-peck work, you should rethink your entire career trajectory, 'cause that's going away.




Deep: Hello, I'm Deep Dhillon, your host, and today on Your AI Injection we're joined by Tahnee Perry, AI advisor at A25. Tahnee holds a BA in Communications, PR and Marketing from the University of Canberra, and has served as a guest lecturer at Stanford University on go-to-market strategies. Today we're gonna walk through her process for successfully leveraging AI in marketing and how to ensure ethical implementation in AI-powered marketing and advertising.

Tahnee, thank you so much for coming on the show. 

Tahnee: Thank you for having me. 

Deep: Awesome. Well, let's get started. I always like to start off with this question: what are people doing before they have your services, and then once they've got your services, what's different?

So maybe walk us through that. 




Tahnee: I'm an AI advisor, and what I do is I help people take AI from theory to practice. And I think, you know, given this is a new technology for most, I mean, I know AI has been around for a really long time, especially when you talk to the computer scientists. But for [00:02:00] people in a business capacity, especially in marketing, which is my focus, it feels relatively new.

And so it's very difficult for a lot of people and teams to figure out, okay, yes, we have this new innovation, but now what do we do? And so I help them bridge that gap. So before they come to me, what I'm finding is that the efforts they're implementing today are very ad hoc. They have a couple of people who are innovators on the team, and so they're leaning in and they're figuring it out, but then they have other people who have absolutely no idea how to use it, and they're just going back to their manual workflows.

And there are also no coordinated efforts. The teams aren't sharing their knowledge and there's no oversight. So it's kind of like the Wild West. They're out there doing whatever they want, and sometimes there are some benefits they're seeing, but it's inconsistent. So what I do is I help teams bring it all together, create an overarching plan.

Uh, a big part of that is formal training, because you can't just assume that people know how to use this technology. And we sit down and we say, what are [00:03:00] your goals? What are you trying to achieve? And then let's map out how to get there. So what I find is that after leaders and teams have worked with me and gone through my process, they're seeing a lot more consistent outputs.

The teams are happier because they're doing less of the repetitive manual work. They're focusing on higher level creative projects. And so you see all kinds of benefits, like they're saving on budget, their revenues go up and their team sentiment is higher. 

Deep: So we kind of divide the world of AI into a couple of buckets.

I mean, there's lots of different ways to slice it, but one bucket is companies using AI in product, to build features that power innovation inside of their products. And then the other side is kind of the business operations side. It sounds like you're talking about leveraging existing tools and existing capabilities in how your team does its day-to-day tasks, so you're more on the latter case. Is that fair to say?

Tahnee: Yes, definitely. 

Deep: and you're mostly [00:04:00] focused within marketing groups. 

Tahnee: Yeah, I'd say that my purview is in the connection between people. So I find myself working with teams in either marketing, sales, or human resources, because marketers and sales are trying to reach a prospect, customer experience is trying to reach a customer, and HR is trying to reach an employee. But it's all around people.

So all those functions and those motions are very similar, and a lot of those departments are the ones that I'm working with today.

Deep: It feels like there'd be a few different things that you'd want to do.

Like one thing that you'd want to do is try to understand kind of exactly what the team does today, and then you're trying to think about, well, how can I interject some external tools here? Another way to think about it is like you're sort of maintaining a catalog of all of the tooling that's out there.

So not just the LLMs, but more directed, more specific tools that have AI-powered features in them, like, you know, sales enablement or outreach, outbound-style stuff. [00:05:00] So tell me how you think about that, and then how you typically engage in maybe the first few days of your engagement or something.

Tahnee: AI is not a magic bullet. I work with some leaders who are like, well, let's just put AI on it and that'll solve everything. And I'm like, well, let's take a step back, because they need to understand: what are you trying to achieve? What are your goals?

Because every company is different. Your go-to-market strategy has different motions, your customers have different preferences. And so it makes sense to sit down and figure all of that out first and then say, what do we need to get there? And there are two elements of that. The first is the workflows, or the use cases.

And you start with that because those are the activities and the programs and initiatives that the team is employing to get to the end goal. So what do they look like? What steps are people taking as part of those processes? And then you go to the tools, because sometimes I talk to people and they're like, oh, we want this new fancy, you know, content creation platform.

And I'm like, you might not need it because you either have it as part of your work suite or it [00:06:00] doesn't fit your workflow. So let's figure this out in order so that each step follows logically. 

Deep: Yeah, I mean, that makes a lot of sense, because you're trying to understand how they do things today and then you're basically seeking efficiencies within there.

And then you can have maybe more targeted suggestions in particular scenarios. Walk us through a particular scenario. Like, you show up day one, what does it look like?

And maybe, what's the team? What do they do?

Tahnee: I have a good example for that, and it's a SaaS client in the sales enablement space, speaking of that. They had, or have, a marketing team of about 60 people, and they're across the globe, and I think they had seven different departments.

We sat down with the executive team, so all the VPs of each of the departments, to do that initial assessment. What are your goals? What are you trying to achieve? What are the real pain points that you have today? We conducted an employee survey to get a sense of what they were thinking and feeling and where their levels were.

And then we took that assessment and we broke [00:07:00] out each of the departments into what we called a knowledge share session. So it was basically a webinar where we said, here, here's the landscape, here's what's happening, here's all the nomenclature, because this was 12 months ago, and back then, like people didn't even know what an LLM was.

So you know, you have to explain all that. So that was really creating the foundational base knowledge for the entire department. 

Deep: So maybe you're doing that before you've kind of dug in on what their roles and processes are?

Tahnee: So I think we did those both in tandem. When we had the meeting with the leadership team, that's where we looked at things like goals and workflows and processes, and we used that to tailor the knowledge sharing. To give you an example, for the revenue side of that department, we had very specific use cases we walked them through. Then we did product marketing, then events, 'cause each of these departments does things differently.

Deep: Mm-hmm.

Tahnee: And in each of the knowledge shares, I'd say 30 minutes of it was like, here's the landscape and what's obvious for everybody, and then here's what's specific for you.

Deep: And you're usually breaking out with those [00:08:00] individual teams, just that team in the room sort of thing?

Tahnee: Yes.

Deep: Yeah. And then, like, I don't know, pick a team, you meet up with them, what happens then?

Tahnee: After we did the knowledge share, we then had hands-on sessions and I'd say these were the most popular because I don't know, I mean, it's really great to learn things, but it doesn't sink in until people really get their hands on keyboards.

Yeah. 

Deep: Yeah. And they're actually doing specific things in their specific roles, like, this is how I, say the content marketer, typically write an article. And now you can kind of dig in and double-click on how everything might be different.

Tahnee: Exactly. We called them open office hours.

Mm-hmm. And people would show up and they'd, they'd kind of come in with their, I'm having this challenge, I'm trying to do this thing, I know that AI can help me, but I need some guidance, so what should I do? And we would help them troubleshoot. So to give you a specific example, this team, they were producing all of their content in seven languages and they were working with a translation agency.

And it would take them about six weeks to translate all of their materials and get it out to market. We sat down and we [00:09:00] helped them build a custom GPT in, I don't know, it was like 45 minutes. That was 90% of the way there for them. And that reduced the time for translation and the full production cycle down to two weeks.

They still had a couple of translators in the loop to review everything, but they reduced their costs significantly. I think they saved a hundred thousand dollars in a year on translation alone. 
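The workflow Tahnee describes lived in a custom GPT built in the ChatGPT UI, but the same pattern can be sketched against the API. Here is a minimal, hypothetical version, assuming the OpenAI Python SDK; the style guide, language list, and model name are illustrative placeholders rather than details from the engagement, and the remaining human translators still review every draft.

```python
# Hypothetical sketch of an AI-first translation pass with humans still in the
# loop. Assumes the OpenAI Python SDK; the style guide, languages, and model
# are placeholders rather than details from the client engagement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_GUIDE = "Keep product names in English. Use a formal register."
TARGET_LANGUAGES = ["German", "French", "Spanish", "Japanese", "Portuguese", "Italian", "Korean"]

def draft_translation(text: str, language: str) -> str:
    """Produce a first-pass translation for a human translator to review."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"You translate marketing copy into {language}. "
                        f"Follow this style guide strictly:\n{STYLE_GUIDE}"},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    source = "Our new release cuts onboarding time in half."
    # Drafts go to the remaining human reviewers, who sign off before anything ships.
    for lang in TARGET_LANGUAGES:
        print(lang, "->", draft_translation(source, lang))
```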

Deep: So how does that pencil out for you, though? Because you have a massive impact on a business in 45 minutes, but you don't necessarily capture the value that you've given them in 45 minutes of billable hours.

So like how does that work for you? 

Tahnee: Well, every team is different, right? So they see different levels of benefit and some, some are easier to quantify than others. Budget savings, growing revenue is pretty easy. But there's also the, the team sentiment, which is you can just see people are happier at their jobs.

And to me, that is the most valuable metric. Now for me, the way I work is I contract with a company [00:10:00] to bring this learning to them. So there's my program fee or workshop fee. I don't take a percentage of the benefits, and maybe I should reconsider the way I, I monetize my business. Yeah, you 

Deep: might want to, 

Tahnee: yeah, 

Deep: but it's hard to, it's hard.

It's hard. 

Tahnee: It's really hard to quantify, but 

Deep: it's kind of where you want to get to usually. So, yeah. 

Tahnee: A lot of the satisfaction for me is seeing these companies thrive and grow, and then what they do is they recommend me to other people, and that helps my business in that way.

Deep: So let's dig into the translation thing a little bit, because a custom GPT is a quick hit, but it's super limited in terms of what you can do within a custom GPT.

You get a custom GPT together, you give it to 'em, they start using it. We do that kind of thing all the time, just because if you're building AI capabilities into products, you tend to know this stuff really well. So things that we think of as brain-dead obvious are usually not for a lot of clients.

So we'll hand some of those off. But, like, one, you can't model-select within custom GPTs. You know, you're stuck with 4o, which is a solid model, but it's not the best model [00:11:00] for all scenarios. The other thing is, maybe you could dig deeper into their process and get in further, maybe integrating with their editing environments, because I imagine if you're leaving it at a custom GPT, then at a minimum you're still cutting and pasting and doing that kind of stuff, but usually there are a lot of ancillary capabilities that you can start to pull into higher-stack tooling. Do you find that they keep going after you kind of get this high-level engagement, and they start looking for more exacting tooling?

Tahnee: It depends on the company. The example I gave you, they were, I don't wanna say insulated, but they were a unit, as marketing, and they didn't have a lot of cross-collaboration within the company.

And so we helped them build tools that they could just go and use themselves, right? So you had to make it really easy and practical. I worked with another client in the media space and they had a much smaller team; they had a bunch of editors who had all these ideas, and they were working very closely with their developer.

And so they sat down and they said, we wanna create this new process where [00:12:00] we get press releases in, and it takes us hours to read them and write these stories. So they worked with the developer to connect ChatGPT; the developer helped write the code for this, and the editor guided it: this is what I want the output to look like.

And so they had a custom solution. But I think that company, it worked for them because they had that resource internally and they already had that system in place for them to work together. 

Deep: I'm curious, what are you seeing as the state of de facto LLM knowledge today?

'Cause it seems to me like I haven't met anyone in the last year who's not using a GPT model for something. Tell me what you're seeing in terms of the default level of knowledge in these organizations, and the kind of social IQ in the org with respect to AI usage, before you get there.

Yeah. 

Tahnee: I do a lot of keynote presentations, and depending on the industry, I usually go into the room and I'm like, who's using what, right? Because it gives you a good sense. And when I first started this two years ago, you'd maybe get a quarter of the hands go up when I'd ask, you know, [00:13:00] are you using ChatGPT or something else?

Now I'd say it's like 95%. Everybody has tried ChatGPT, and I think the beauty of them having the free tier is that there's no barrier to entry. Everyone's tried it. But what I'm seeing now is that the leaders and the teams that invest in formal training are moving from, hey, I've used ChatGPT and asked it to write me an email, to way more advanced use cases.

So the client that we put through what we call our AI accelerator, the last time that we met with them, when we were wrapping up the program, they were already building agents, 'cause they were a Microsoft organization and they were leaning way in on the Copilot agents and building all kinds of things.

And my partner and I were like, wow, you, you need to tell us what you're doing 'cause you are now at the, you know, the bleeding edge. And so I think, you know, it depends on the industry and it also depends on the company and how much they've dedicated to understanding this technology where they are today.

Deep: That's interesting. So I'll walk you through a scenario that I encountered not too long ago. [00:14:00] We were working with a company, and I was interacting with some sales folks and we were collaborating on getting leads. They had a private equity firm that they wanted to go after and try to help all their companies get kind of AI and machine learning capabilities into them. And they were bringing me into a lot of conversations. And then I finally just said, well, look, here's a bot. Just talk to the bot, and then only talk to me after you don't get what you need from the bot. And then that cut down my pull-in by like 99%.

And what happened was they were just sort of really surprised, and they said, can we see the prompt? I'm like, yeah, sure, here it is. And they're like, wow, I had no idea you could make prompts this elaborate and detailed. And I was like, well, yeah, hang on, just gimme a couple of hours.

And so then I said, here's all a hundred of the companies in that private equity firm, and here's your sales pitch for all of them, and here's how you should interact with all of them. And here's the top five areas that, you know, AI can help them with. And I scanned over it and it all looked pretty solid.

So go for it. And then [00:15:00] that just kept them running for quite a while, you know; they're still running with that stuff. So everybody knows how to take GPT and get it to write something out of the gate, but people don't know how to go with a lot of back and forth, sequentially, and how to break problems down and compartmentalize them and use the LLM more strategically and reassemble. The biggest thing that I tell people constantly is: ask GPT what questions to ask it for your problem.
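A minimal sketch of that "ask GPT what questions to ask it" move, assuming the OpenAI Python SDK; the model name and the problem statement are illustrative only, not a prescribed prompt.

```python
# Sketch of the "ask the model what questions to ask it" pattern.
# Assumes the OpenAI Python SDK; model and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()

problem = "Pitch AI services to the hundred portfolio companies of a private equity firm."

# Step 1: instead of asking for an answer, ask the model what it needs to know.
questions = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"I need help with this problem:\n{problem}\n\n"
                   "Don't answer yet. List the questions you would need me to "
                   "answer before you could give genuinely specific, useful advice.",
    }],
).choices[0].message.content

print(questions)
# Step 2: answer those questions yourself and feed the answers back in a
# follow-up turn -- the back-and-forth, not the first reply, does the work.
```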

That advice alone makes a radical change in how they interact with the models.

Are you seeing sort of similar lessons or anything? Does any of that resonate?

Tahnee: Yes, definitely. I think there are two keys to getting the most out of your LLM, and the first is the training.

And there are a couple of elements to that. It's the prompt that you build, whether you're doing one-shot or you build a custom GPT. And the second is the knowledge documentation. And the more knowledge documentation you can give it, the better, because then it understands what [00:16:00] kinds of examples and outputs you're looking for.

It has more nuance, more context. It takes a lot more work, but you elevate the results. Everybody says, oh, you know, fewer hallucinations when you contain it. If you're giving it that knowledge documentation, I find that I get much better results.

Deep: Tell me a little bit about what you mean by knowledge documentation.

Yeah. Yeah. 

Tahnee: So as an example, if you build a custom GPT, and this applies to ChatGPT or Gemini or Copilot, when you go in and you build your little configuration, you have a section called knowledge docs.

And this is where you load up all your different files. It can be a PDF, a Word doc. For me, I have, I don't know, I've got like 45 different custom GPTs at this point. So I've got one called Writer, and it helps me with my writing. I have all of my samples of all my blog posts, my LinkedIn posts, articles I've done, and when I then collaborate with ChatGPT on a project, I say, go back and reference those documents. Or sometimes it'll give me something and I'm like, this does not sound like me at all.

Yeah. So I say, go back and read what I've [00:17:00] put into your knowledge docs, and then it'll come back and be like, oh, sorry, my mistake. And then it'll redo it for me.
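The knowledge-docs feature Tahnee mentions lives in the custom GPT builder, so there's no code to copy from the episode; a rough API-side approximation of the same idea, passing your own writing samples in as reference context, might look like the sketch below. The folder name, model, and prompts are hypothetical.

```python
# Rough approximation of the "knowledge docs" idea over the API: feed your own
# writing samples in as reference context. Folder, model, and prompts are
# hypothetical placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# A handful of your own posts, so the model has concrete style examples.
samples = "\n\n---\n\n".join(
    p.read_text() for p in Path("writing_samples").glob("*.md")
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are my writing assistant. Match the voice, tone, and "
                    "sentence rhythm of the reference samples below. If a draft "
                    "drifts from that voice, revise it.\n\nREFERENCE SAMPLES:\n"
                    + samples},
        {"role": "user",
         "content": "Draft a 150-word LinkedIn post announcing our AI workshop."},
    ],
)
print(response.choices[0].message.content)
```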

Deep: It's interesting, 'cause I think the concept of just putting it all into a knowledge file that represents you as an individual, and then just popping that in there.

I think people's instinct tends to be to start from scratch every time and put in specifics for whatever the task is. But sometimes that surrounding context matters. OpenAI itself is getting a lot better at long-term memory, and so I've noticed that they're starting to glean a lot of that in the last few weeks.

They're starting to reference, you know, my writing style in clean prompts, which I find interesting.

Tahnee: You can also tell it to remember things. So if you're having a conversation with it, and the output is something like, ooh, that's really cool, you can say to it, remember this, and it'll actually go into the memories, and you can customize what it keeps in there.

Deep: Yeah, in the olden days, like six-plus months ago, it used to just prattle on forever, like all the models did. OpenAI was particularly egregious at this. And I had a rule like, never say more than 300 characters, ever, unless I ask for it.

Tahnee: I have a do-not-say list. That's like 90 words and phrases long.

Deep: Yeah. What words are on there?

Tahnee: Oh, well, there's just so many: dive into, transformation, game changer. Anytime someone says game changer now, all I hear in my head is, ChatGPT said that.

Deep: Oh, okay.

Tahnee: 'Cause there's just so many things. Like, it's long and verbose. It overuses the same words and phrasing.

Deep: Sometimes it takes a while to get to a clever hack, too. Like, I'm learning Spanish on the side and I built a custom GPT for Spanish. My Spanish stinks, to be clear, and I'm using the voice interaction largely. But it was just too advanced for me. And so I tried everything. I tried putting in, you know, extra white spaces, periods, telling it a million different ways till Tuesday to slow down, telling it that I'm an idiot, like I'm a first-grade-level speaker. None of it worked. And then finally I just sort of capped it.

I said, never use more than five words, and there's no way to not sound [00:19:00] basic when you can only use five words. So all of a sudden it worked. And I was like, okay. 

Tahnee: And the only way you got there was trial and error? That was the only way you got there?

Deep: You know, in hindsight it was sort of dumb that I didn't figure that out faster.

But it's a category of insight that I'm sure the models are gonna be able to get better and smarter at. But yeah, there are a lot of tricks of the trade, quite a few of them, and the more you work with this stuff, the more you kind of take them for granted and don't think about them.

Like, a lot of people don't even know about few-shot example provisioning for the model, so that it gets clear examples. But a lot of times that stuff works and a lot of times it doesn't; it's not always the case that the tricks in one context work in another.

Tahnee: I think one of the things, and I follow AI experts, I love Ethan Mollick. He's a professor at Wharton and he has some really great advice. And the one that really stuck with me in the early days is: you just gotta get in there and use it, because there is no [00:20:00] rule book. It's not consistent.

It interacts differently depending on how you craft the prompt and who does the prompt. And the only way that you get to the best output is just by being in there and interacting with it. And to me, I love that, 'cause that means that I just have to be there and I have to be present and curious and ready to test and iterate.

I find the people who really struggle the most with AI, they want it to be that magic wand. They want it to just be like, here's the prompt, and one and done. And I don't know if AI will ever be that.

Deep: I wouldn't blame the models for that.

I think humans have a tendency to think that other people know what they're saying. And the models are already getting better at seeking clarification, but they're highly oriented towards question answering. So you say something, and they're trying to give you an answer in one blob, but there's a million contexts where you seek clarification.

Like therapists: you don't speak to your therapist and then have them just deliver, you know, a page of verbal diarrhea. Even if it's on the [00:21:00] mark, it's useless to a patient. So therapists quite often, probably a huge ratio, are: you speak, they question, you speak, they question. And the models don't really yet have a sense of that.

But you can certainly give it that sense quite easily, you know? Going back to, I wanted to push a little bit on that idea that people have to jump in there and try things. I obviously agree with that, but one of the templates that I've found the most powerful is to, you know, leverage the model to improve the prompt. There's another high-level modality I think is very powerful, which is to put the model in the position of being an analyzer that analyzes whatever you've done.

You write a prompt, or you ask it for a prompt and it puts it in, and now you ask the model: I need you to do a thorough analysis of this prompt. This is what I'm trying to achieve. Just analyze it and tell me where it could be weak, things that might get better or worse.

And then you can ask it to give it a shot. I'm curious if that level of input to your clients is something you do, where you try to get them out of this mode of just talking to it and into [00:22:00] a higher-level template.
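A minimal sketch of the analyzer pattern Deep is describing, assuming the OpenAI Python SDK; the draft prompt, goal, and model name are placeholders, not anything from the conversation.

```python
# Sketch of the "analyzer" pattern: before running a prompt, ask the model to
# critique the prompt itself. Assumes the OpenAI Python SDK; all text is
# illustrative.
from openai import OpenAI

client = OpenAI()

draft_prompt = "Run an SEO audit for example.com and list improvements."
goal = "A detailed, prioritized SEO audit the client hasn't already heard."

critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Analyze this prompt rather than answering it.\n\n"
                   f"PROMPT:\n{draft_prompt}\n\nGOAL:\n{goal}\n\n"
                   "Where is it vague or weak? What context is missing? "
                   "Then propose a stronger version of the prompt.",
    }],
).choices[0].message.content

print(critique)
# Once the revised prompt is in good shape, it can be run normally -- or handed
# to a slower, deeper mode like deep research.
```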

Tahnee: Yeah. I mean, I dunno if I like the idea of a template. I know that you can have prompts that work, and you can store them and reuse them, 'cause that, you know, makes it easier. But to me, working with AI, especially ChatGPT, is more of a collaboration.

It's a back and forth, and that's where you really get the best outputs. To give you an example, I worked with our SEO team and we had this idea: hey, what if you could do an SEO audit with deep research? And we're like, well, let's give it a go and let's test it, 'cause it had just come out that week. And we put in a prompt and asked, you know, run an SEO audit for the website.

Here are the competitors. And it went away for its 10, 15 minutes and came back, and we're like, ah, it's okay. There's some decent feedback in here, but it's pretty high level. And the client was like, we knew all this already. So it's like, all right, well, let's do it again. And what we did is we asked ChatGPT, how would you improve this prompt for deep research, and here's the output we want. And we went back and forth a couple of times, and the end prompt is like three pages long. And it's got [00:23:00] segments where we have to actually input items like the actual competitors and, you know, sample pages and examples.

But when we gave it everything, and it did take, I'd say, a full half an hour to just get it set up, but once it had all that and we entered it into deep research, the audit we got back was like 40 pages long. Super deep. It did page analysis; it went out and actually tested the speed of the website.

It looked at keywords and competitors and did a sentiment analysis. And the team were really impressed. This was very deep.

Deep: Well, that's kind of meta. I mean, let's dig in on the deep research question a little bit, because my hypothesis is that humans tend to be pretty now-oriented.

And so using the faster models tends to be the default, because we like to get stuff right away. But deep research takes, as you know, quite a while, and iterating at the deep-research input-prompt level, now you're talking, you know, hours or [00:24:00] days to kind of make it all the way through. I think that's a fascinating use case, because this isn't something that we could have done nine months ago.

How do you advise people to go into deep research mode, and do you have a process for helping them be more patient and iterate and all that kind of stuff?

Tahnee: Yeah, I think patience is the virtue here, for sure. What I do is I use the other models. I guess you could use o3 if you wanted, but I use the other models, like 4.5, to do the collaboration and the back-and-forth.

'Cause I feel like that kind of instant feedback is really helpful, and that's where a lot of creativity happens. The waiting is what kind of kills you. It's like you give it something, you have to wait forever, and you come back and you're like, what was I working on again? So when you want that really instant feedback, use one of the other models, and when you're happy with it and you feel like it's in a good place, then you go and you put that into deep research.

I don't know that I would do prompt collaboration in deep research. I mean, maybe I should try it and see what we get.

Deep: I think it would be, I mean, [00:25:00] certainly time-consuming, but it's hard, 'cause you have to kind of put yourself in the eyes of the engineers that built deep research.

You can imagine that model going back to the prompt in between: the thing goes out, gathers a bunch of data, goes back to the prompt, reassesses in conjunction with the new data it's gotten, and then it formulates a new set of actions and goes and follows them.

So you could imagine the nature of the deep research prompts being more high-level and if-then oriented, about the sort of high-level actions it can take, as opposed to in-the-moment, like using 4o or 4.5 or whatever in the one-prompt, one-answer scenario.

Tahnee: One thing I do recommend clients do is pick a couple of LLMs and then do the same prompt in both and see what you get. And then you can also pit them against each other. Like, ChatGPT said this; Perplexity, what do you think?

Or Claude, 'cause they all have a slightly different take. They access different information, and sometimes you'll get a different bit of advice from one versus the other. So [00:26:00] that can be another helpful technique.
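One way Tahnee's "pit them against each other" tip could look in code, assuming both the OpenAI and Anthropic Python SDKs are installed; the model names are placeholders for whatever you actually have access to.

```python
# Sketch of running the same prompt through two models, then having one
# critique the other's answer. Assumes the OpenAI and Anthropic Python SDKs;
# model names are placeholders.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

prompt = "What are the three biggest risks in our Q3 launch plan for a B2B SaaS tool?"

gpt_answer = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

claude_answer = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Now pit them against each other, as Tahnee suggests.
verdict = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Another model answered the same question as follows:\n\n"
                          f"{claude_answer}\n\nWhere do you agree, and what did it miss?"}],
).choices[0].message.content

print(gpt_answer, claude_answer, verdict, sep="\n\n---\n\n")
```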

Deep: Huh, that makes sense. What is the usage landscape that you see between Claude and Gemini and the other models and OpenAI?

Like what did you see a year ago and what are you seeing today? 

Tahnee: ChatGPT is still the most popular. Everywhere I go, all the hands go up for ChatGPT. It's people's favorite. And I think because, I think of it as the generalist, it has the biggest variety of functionality when you think about voice and image creation and just all the different types of models you can use for content creation and reasoning.

The next most popular, I think, is probably Perplexity, and people use it more for search. Claude, most people are like, oh, you know, maybe I've heard of it, but it's a very small amount. I do think that Google and Microsoft are just the default, because a lot of organizations have either Microsoft or Google Workspace, right, as their work suite.

And so Gemini is built into Google and Copilot is built into Microsoft. But what I hear from people [00:27:00] is that it's not as good as ChatGPT. So people are still going over to ChatGPT to collaborate, and then they bring that content over to their workspace.

Deep: I mean, that's what I find too. If we talk about Claude, every few weeks or a month or so, I'll jump over and try all the models out again on my kind of reasoning tasks. I mean, Google's default Gemini, to me, like in the search box, is just complete garbage. Maybe it's different if I kick it to the, you know, the Pro version, I haven't really tried. But we have found scenarios where Gemini's really strong, particularly at the API level, when we're using the API and it's a very search-heavy task.

It definitely excels in those scenarios if you're using the higher-spec model, and their numbers look good. So I think it's a matter of paying the money and getting up a level on models. Claude, I found, is with respect to writing more like at the 4.5 level.

I was kind of impressed with DeepSeek when it first came out, but, you know, OpenAI responded in less than 24 hours. So I think, I mean, if we [00:28:00] could buy stock, I'd put it all on OpenAI at this point; they're far and away the leaders. We don't really bother with the other models unless it's a really specific scenario where we need a search-heavy thing. And there was a time where Gemini was a lot stronger on imagery. I'm not seeing that as much now.

Tahnee: I think Imagen 3, which is built into Gemini, actually does a really decent job.

So a use case where that might make sense is if you're creating a deck. You know, you can't create a PowerPoint or a slide deck in ChatGPT, and that's a really common example of what people have to do at work. So if you're in Slides in Google and you need an image, Imagen's pretty good.

And then that way you don't have to go over to ChatGPT. And OpenAI launched image generation, which is amazing, and the images are excellent in comparison to what you used to get from DALL-E. But man, it's slow.

Deep: Yeah.

Tahnee: You wanna talk about deep research being slow. I feel like you put in this prompt to get an image and it takes like five [00:29:00] minutes; it just takes a really long time. And the output's definitely better, but I find that speed really frustrating. So I'm finding I still rely on Midjourney; that's my favorite. But I've also found that if it's just a deck and I need something simple, the Gemini text-to-image generation's pretty good.

Deep: Let's talk a little bit about the other modalities. Like, you mentioned PowerPoints and slides, and I wasn't even aware that Google, in Google Slides, had a prompt to get you slide generation. Can you tell me a little bit more about that? And does that exist in PowerPoint too, in the Copilot version?

Tahnee: Yes. I would say they have the ability; the quality is pretty choppy.

I'm in Workspace and not Microsoft, so I understand the Google functionality a lot better. What they say you can do is queue up a document and say, create a deck. And I've experimented with this on a few different examples, and the output's terrible.

And so I would say, I don't know, it's probably not worth it. You still have to go and build the deck yourself. What I [00:30:00] do like is you can ask it, rework this slide, and it'll redo the layout. Or you can ask for an image, and the image generation is pretty high quality. But in terms of clicking your fingers and getting a full deck from intro slide to end, it's not there yet.

I think if you really want one of those powerful deck-creator AIs, you need to go outside and look at something like Gamma or Beautiful.ai.

Deep: Got it. Let's change gears a little bit. So on the podcast, we like to talk about three things. We like to talk about what our guests do; I think we've covered that pretty well. We like to talk about how our guests do that; I think we've covered that pretty well.

And then we like to talk about a slightly more challenging topic, which is: should they do what they do? And this is touching on the ethics of what we're up to here. So the other day I had somebody reach out. They were a potential client.

They were pretty small, had a fairly low budget; they were like a small company that did maybe seven to $10 million a year in revenue. But they had a role with three humans [00:31:00] that did a very specific thing. You know, they got a big list of stuff that they had to go and search for, find these items, and then package them all up and ship them.

And then they would wind up, you know, in this pretty remote area. And the question was like, hey, can we just automate all this stuff? But it was in a very specific context; it was like, these three people will lose their jobs in the event that you do this, which I found kind of odd, because it's not usually what we do.

We usually help product companies bring AI capabilities to their products in a way that's gonna grow a new market and grow new usage, and it's not usually a direct replacement like that. So tell me, what are some of the ethical issues that you're confronted with in conversations, particularly with respect to making humans obsolete in particular scenarios? Folks are saying, like, everything's gonna be gone job-wise within ten years, five years.

And then you have other people, you know, like Yann LeCun: well, these models stink, this is just information retrieval, humans aren't going anywhere, our tasks will be replaced and refined.

What are you seeing out there with [00:32:00] respect to the ethical minefield, and with respect to humans and our role?

Tahnee: Well, I'm gonna break that into two parts, and the first is the job displacement issue. The World Economic Forum released its annual report in January, and what they found when they analyzed the workforce is that we're gonna lose 92 million jobs in the next five to ten years, but we're going to create 172 million. And what's happening is the jobs that have a lot of that repetitive manual work associated with them will go away. To give you a concrete example, if you are a bookkeeper and you are just logging transactions, your job won't exist, 'cause AI will do that. But a financial analyst whose job is to architect an entire solution and to monitor performance, there'll be a lot more of those jobs. So yes, there's a lot of fear right now about AI replacing people, and it is going to happen. There are people who are like, don't worry, it's gonna be fine.

I don't know about that, but I [00:33:00] would say that if you currently do a lot of hunt-and-peck work, you should rethink your entire career trajectory, 'cause that's going away. Then the second question, or the second topic, was around ethics. So there's a really hot debate going on in content and art, at least in my field, where all of these LLMs are being trained on copyrighted material.

And now, as a marketer or content creator, I can go in and I can say, create an image in the style of X or Y, and I get that. And so then there's this big discussion of, where's the copyright around that? Who owns that? Do I have the right to be using that image? And I think as humans we have to decide, should I be using this?

Is this the right course of action? And you also have to keep in mind, as a marketer or a marketing team, am I violating copyright? Because the AI is not responsible; you are responsible. So say you've created this beautiful marketing campaign with all these beautiful graphics, and then the owner who has the copyright to that artwork comes to you and says, hey, you've [00:34:00] just ripped off my copyright.

Then you're the one who's liable for that.

Deep: So what's your response there? To take the output and assess copyright, you know, after it's been created? 

Tahnee: Yes, it's your responsibility to double-check that you are not plagiarizing or stealing someone's work. And there are a couple of AI tools out there.

You can use things like Originality.ai or Grammarly for things like copy. But for artwork, I think the best thing you could probably do is a Google image search. You know, it doesn't have to be arduous, but make sure that you're doing at least a quick overview: is there anything out there where it looks like this is someone else's work, and should I be making this decision?

Deep: Yeah, I mean, copyright's an interesting challenge here. This isn't an endorsement by any means; I do think copyrights have been violated massively by these models, and it's a real question. But when the world gets addicted to something, then you know it's gonna keep happening.

Like, people will get [00:35:00] paid off. I'm sure OpenAI is writing checks for billions of dollars, as is Google and everyone else, and that will continue, and probably those checks will get larger and larger over time. But I think humans historically have thought of themselves as original, have thought of themselves as novel and unique.

And I think the models are teaching us that we're really just not; we're not particularly novel and unique. You know, Isaac Newton and Leibniz both came up with calculus independent from one another. Certain times and places and locations and inputs and influences cause folks skilled in the art to come up with the same idea all the time.

This is why we have, like, ten startups that do the same thing. I guess my point is, I feel like these models are forcing us to come to terms with the fact that humanity is just really not that original. Any one human is not that original.

We're all, at the end of the day, prompt-driven. We've got a sequence of history, imagery, vocalizations; we have five perceptual mechanisms; all that stuff goes in, just like it does into [00:36:00] OpenAI. And then we're presented with a particular context, and what comes out of our mouth will be a function of that, right?

If I'm born and raised in a place, you know, where everybody is a MAGA worshiper, the chances are that what comes outta my mouth in a particular context will come from that lexicon. And if I'm, you know, coming from the other side of the spectrum, similar thing.

It feels to me like if we're gonna hold the models to this ethical standard of copyright, we should hold the humans to this ethical standard of copyright. And I feel like if we do, it's gonna be a lot harder to actually enforce these things.

Tahnee: Mm-hmm. Well, I think that's why there's such a big debate. And it reminds me, there was this really cool video series in the 2010s called Everything Is a Remix. And it broke down, music in particular, all the same chords, and all of these musicians ripping off other musicians and then remixing it and remaking it.

And you're right, LLMs are just the new version of that. 

Deep: Plagiarism's an interesting concept, right?

Like, universities, high [00:37:00] schools, they care a lot about plagiarism. Businesses don't care at all about plagiarism. I've worked in private industry forever. Every marketing department I know plagiarizes everything, perpetually, constantly. You go and you read all these drivel articles; there's like hundreds of millions of them.

Newspapers, sure, maybe a handful of the credible ones aside, they're just copying and pasting, and maybe they rearrange some words here and there. You know, the equivalent of, hey, say the same thing, but differently.

I'm not justifying plagiarism. I think it's wrong, but I think drawing the line is hard. And when you build algorithms for a living to determine the level of difference between two particular artifacts, you find out that you put it on a spectrum, with a value between zero and one, and you have to draw a threshold somewhere.

And it's not obvious whether to put it at 0.71 or 0.72 or wherever; it's just not obvious. And I think people in general, and in the law, aren't used to these kinds of refined gradations and trying to figure out how to draw a line through them.

Tahnee: Yeah, no, I think you're [00:38:00] right. An expert I follow, Andy Cina, has another acronym for AI: average information.

And it's because it's consuming everything that's out there and then giving you the average of that. And I think it does that because that's what we've always done. We always move to the average.

Deep: Final question.

Let's fast-forward out five to ten years. Give us both your dystopian and your utopian vision of AI. I am usually more interested in the dystopian vision; this is maybe my nature, but I feel like there's enough fluff out there. Give us how this stuff evolves, given everything that you're seeing and who you're interacting with.

And if stuff goes wrong, what it looks like, and if stuff goes right, what it looks like.

Tahnee: I'm an optimist, so I really hope this all turns out well for us. I'm actually reading a book called Nexus, by the same author that wrote Sapiens, and it's a little bit terrifying, 'cause he goes into, in a lot of depth, what could go wrong, and it's a very real risk.

And I think we hear some discussion and debate about that, where AI can be used for good or bad. [00:39:00] And so I think in the hands of the wrong people, there are all kinds of things that could go terribly wrong. AI gives you the ability to be everywhere and anywhere at all times. So I really fear a society where a totalitarian leader says, I'm gonna use AI to watch what everyone's doing everywhere and to control the way you think.

I think another thing I worry about is that AI is very convincing. So what happens if AI takes on its own worldview, and decides that worldview is right, and then it starts to manipulate us into believing the same thing? It's very hard to weed that out. So I worry a lot about that.

Deep: My take on this is that it won't be the machine learning, the AI, that causes the problem.

It will be the humans. Take the recent massive stock market drop. This is a hilarious but utterly ridiculous story, but apparently in the Trump administration, when they computed the tariffs that they were gonna apply against all of these countries, they didn't ask experts, they didn't ask economists.

People couldn't figure out how they came up with this bizarre formulation that makes no sense to anyone. [00:40:00] And eventually they apparently tracked it back to some interns who just basically went to ChatGPT and said, hey, given, you know, this bizarre view on economics the administration holds, we need a justification for these.

And that was the only place they got it from. They put it out there and tanked the entire market overnight. And you wanna know what the first catastrophe of AI is? It's this: it's ignorant people who don't validate things, refuse to go to experts, refuse to go to people who actually know things, and then just go to a bot 'cause it's fast and convenient. So it's already happened.

I mean, it already happened. We've already lost like $10 trillion in value because of the most obviously stupid, idiotic thing you could possibly imagine being done with AI. But I guess we could probably think of dumber ones, and we'll see them.

Tahnee: I really do hope the good guys win because, you know, AI can be used for good as well.

I'm rooting for that.

Deep: Absolutely. And, you know, technology just has this cycle where society requires time to develop an immune [00:41:00] response to a new tech.

And the immune response development for AI, we'll still be fighting with it in 10 or 20 years. But usually we tend to figure it out, and then the good guys eventually win.

Tahnee: I hope so.

Deep: Awesome. Well, thanks so much for coming on the show.

Tahnee: I think that was great.