Transcript
Dave Bricker (00:00)
Want to expand your speaking and storytelling skills and grow your influence business? This is Speakipedia Media brought to you by speakipedia.com. I’m your host, Dave Bricker, bringing you straight talk, smart strategies, and amazing stories from visionary speakers and thought leaders. My guest is the visionary CEO of Hive Interactive, where he’s in charge of integrating AI with human-centric experiences. His career began in entertainment,
which laid the foundation for a deep understanding of human engagement. He’s a global keynote speaker, author, and disruptive thought leader committed to redefining human connectivity in the AI era. His mission is to rekindle the passion and drive that make us uniquely human while harnessing AI’s incredible potential to elevate our lives. Please welcome Mitch Mitchem.
Mitch Mitchem (00:55)
Thank you. It’s good to be here.
Dave Bricker (00:57)
Thanks, Mitch. So let’s start with the basics. So many people, when they think about artificial intelligence, they see visions of terminators coming to wipe out humanity. Just how intelligent is artificial intelligence, and should we be afraid of it?
Mitch Mitchem (01:09)
Right.
Yeah, it’s interesting. Those are two very powerful questions. I think the first is what it is in its current form. I was the CEO of a tech company that had a dating app for many years, and those algorithms ten years ago are nothing compared to the ones building artificial intelligence today. They’re algorithmic programs that essentially build a neural network in the cloud. That neural network is used to transfer and move information so quickly that the large language model, the AI, that algorithm, can take massive amounts of human information, data, knowledge, books, just the way we communicate with each other, and process and reformat it in a way that gets us a very intelligent result very quickly. That intelligence has been worked on for many decades, and in the last two and a half, three years, it’s really emerged as a new form of intelligence. It’s able to process information very quickly and get you the answer you need in milliseconds or seconds. And it works off of a neural net in much the same way your brain works off of the way it’s structured. So in that way, it is intelligence. Imagine if you could read one book in three seconds and then tell me how to write an email based on that book, and you could write the email within a second. That would be a form of intelligence, and if any human could do that, we would argue that’s genius-level intelligence. Well, ChatGPT, or AI in general, Gemini or whatever your model is, is able to do just that. So that’s where I think it becomes very intelligent.

Should we be afraid of it? I think we should be careful about the way humans use it. By itself, it’s not a risky thing in any more sense than a hammer is risky sitting on your shelf. It’s what you do when you pick the hammer up that matters. Are you hanging a picture or breaking a window? And for AI, we always talk about at the Hive that AI, to us, is a tool for augmentation. That’s number one. Number two, we want to always have control. We want to be in the middle. We want to use it as a tool, bring the content out, put our own human flair on it, and then possibly use the AI again. So we like the human in the middle. And as for being afraid of it, I think I’m more afraid of what humans do on a daily basis, with or without AI, than I am of the actual technology.
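To make the “read a book, then draft an email” example concrete, here is a minimal sketch of that workflow using the OpenAI Python SDK. It assumes an `OPENAI_API_KEY` environment variable and a local `book.txt`; the model name, file name, and prompt wording are illustrative assumptions, not details from the conversation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical source text; a full book may exceed the model's context window,
# so an excerpt is enough for a sketch.
book_text = open("book.txt", encoding="utf-8").read()

# One request: the model ingests the text and drafts an email grounded in it,
# the "read it in seconds, then write for me" capability described above.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise writing assistant."},
        {"role": "user", "content": (
            "Read the following book text, then draft a three-paragraph email "
            "summarizing its key ideas for my team:\n\n" + book_text
        )},
    ],
)

print(response.choices[0].message.content)
```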
Dave Bricker (03:59)
Well, I love your answer and I completely agree with you. Anyone who’s seen the AI-generated responses at the top of a Google search page knows we’re all competing with AI. And then after the AI results, we see the sponsored results, a sponsored snippet, some similar questions, an academic answer, a selection of videos, and hey, there’s my lovingly crafted blog post way down at the bottom.
Mitch Mitchem (04:12)
That’s right.
Dave Bricker (04:26)
But the decision to put AI at the top of the page was a human decision, not an evil AI decision. Some business person decided to make that happen.
Mitch Mitchem (04:27)
Right.
Right.
Dave Bricker (04:41)
So my question then is, is SEO dead? And how do we compete with humans who put AI first?
Mitch Mitchem (04:49)
Yeah, it’s a great question, and you can answer it in one interesting way. If you just search using an AI tool, let’s use ChatGPT, for example, in the paid version: when we’re searching, it’s technically using Bing as a search engine at this time, although OpenAI is building their own search model. But for right now it’s using Bing. So what happens is it goes out and grabs the closest answer to what your question entails.

However, from what we’ve been watching, it bypasses paid advertising. It tends to go for more credibility, so it’ll look for reviews, it will look for credible news stories, it’ll look for information to get your answer met, and it bypasses the things that can be bought, like media or an ad. In that respect, SEO will have to change over time, because you’re going to have to lean more heavily on reviews, or more heavily on the ways the AI engines are picking up information if they’re bypassing ads, for example. So how we rank on a website may be less important than how relevant we are to someone’s query or someone’s question, and I think that becomes far more relevant. It’s also going to put the power and the control of how a person gets information much more in their hands, regardless of where Google or anyone puts AI, let’s say at the top of a search. If I’m using an AI tool to search, then it’s going about it maybe very differently. So companies are going to be adjusting to this for a long time, because I think we’re going to enter an age where credibility and authenticity are going to be far more important to these AI engines than just how clever we are or how much ad space we buy. So I think it’s going to change SEO for a long time.
Dave Bricker (06:43)
It really creates a quandary for companies like Google that sell advertising and sponsored placement in search results, because as more and more people go to AI-driven tools, they’re going to be bypassing the very product those companies are selling.
Mitch Mitchem (06:49)
Yeah.
Yeah. I mean, it could happen. And I think, you know, our strategy is changing because of that. We’re focusing more on gaining more feedback and more positive results around how people feel about what we do. I think that’s going to have much more traction. It is going to be a changing world, though, and it’s going to change very fast. For example, if Google can make more money with a paid AI model than they make on advertising, that model could go away, which will be difficult for the end user and for the business. So I think we’re gonna have to pay attention to how everything evolves and changes, for sure.
Dave Bricker (07:36)
So people are losing their jobs to AI. And again, it’s easy for people to think of AI as an evil force that’s intent on bringing about our downfall. Now, I laugh too, but we run into this sentiment in our work, right? It may replace someone at their job, but there’s no more malice in that than there is in a heat wave or a flood or a rip current. If you ask the right questions, AI will happily help you compete with it.
Mitch Mitchem (07:47)
You
Right.
Dave Bricker (08:06)
So how are attitudes about what AI is and isn’t preventing people from getting the full benefits of it?
Mitch Mitchem (08:14)
Yeah, that’s a really good question. I think part of it relies on how fearful humans can be. The more afraid you are in general, the more you’ll see fear in how you interact with these tools, versus the way we see them: as augmentation. They make us better. They can make us smarter. We augment our experience and our intelligence with AI. Now, the bigger fear is not, will my job be taken by AI? That would be a lot of work. That would mean a tool would have to be so dynamic and human-like that it could literally want your job, take your job, go get your job. That’s not really how the models work in their current form. I think we’re in a golden age of augmented humanity. And I think a person who knows how to use AI could take your job. That’s for sure. A person who’s really good at using these tools will become far more valuable to an organization than someone who doesn’t.

So I think that part is seriously something everyone should consider, and that should put an emergency on everyone learning the tools as fast as possible. But oftentimes as humans, I think, and I don’t know if you see this, I know I see it, we tend to equate intelligence with malice in these arguments, like AI is some evil genius. Maybe I’ll be proven wrong on this, but I don’t think intelligence necessarily equals malice or evil intent. I think as a model becomes smarter, it’s just becoming more efficient and helping us solve our problems. But I don’t know about you; I don’t think of AI as becoming this animal that wants to eradicate humanity. I just don’t see that in the intelligence. Matter of fact, if you interact with it enough, you learn that it’s actually trying to solve problems, not make problems worse. I don’t see it evolving that way, but who knows?
Dave Bricker (10:10)
Right, but there’s certainly that reaction: AI is evil, I wanna stay away from it. But I think there is some real concern that AI is neither good nor evil; it does not make value judgments. And again, to your point, it comes back to what people are going to do with it. A great example is we have some of these image generation AIs, and we’ve got a lot of horny guys out there creating these idealized, sexualized female images. I don’t even want to speculate on what AI is going to do to the porn industry, but it’s not that AI is evil; it comes back to your hammer example. What is somebody going to do with that hammer? And that opens up all sorts of questions about
Mitch Mitchem (10:49)
Mmm.
Dave Bricker (11:04)
How do we limit AI or how do we put limits on what people could and should and can do with AI? There’s a whole field of AI ethics that’s exploding.
Mitch Mitchem (11:18)
Mm-hmm. Yeah, I think there is. We see it as, again, I talk a lot about putting humans first in everything we do. And to me, I think humans could do bad things with just about anything. I mean, they find ways to be creatively evil. So AI is just another tool for that. I don’t think it’s any worse or better of a tool than a million other things. I’m more concerned about fake media and how AI can be used to leverage, basically, building a fake video or a fake model of a world leader. That could be really dangerous, and that definitely should have some attention put on it.

As far as what it can do to relationships, I believe humans will tend to gravitate towards things that overall are better for them. For example, yes, AI might be used as a relationship technology where a young man, or a man or a woman, could fall in love with their AI bot online. That’s certainly possible, but I would also argue that as much as that could happen, just as many people will become sort of human purists who say, I want all my interactions with a partner to be human. I don’t want to have an AI relationship. I think the same will be true in media. The same will be true with movies. Yes, there will eventually be AI movie blockbusters. But then I think other humans will crave movies only by humans, with humans, just for variety and to have something interesting and more human. So I think these things are going to have lots of different gray areas. Certainly, that industry, or that level of human depravity, again, I think we could find a million ways to accomplish the same goals. It’s just another tool in that drawer versus using it for good. And I think there’s more good that comes than bad, for sure.
Dave Bricker (13:14)
Well, I agree with you. You could make the same arguments about GPS and say people are using it to send missiles to precise locations, but it’s fantastic when you need to navigate: turn Waze or Google Maps on and there you go. GPS is pretty magic. So I like your analysis there. Let’s
Mitch Mitchem (13:22)
Sure, sure.
Right.
Yeah. And I also think, if you just think about humanity in general, we’re in this age where we kind of do need this. Especially after the last four years, I think it’s time for humans to sort of evolve in how they connect with each other and in our relationships. So maybe this will also help in some way.
Dave Bricker (13:53)
You mean you shouldn’t text somebody, I’m breaking up with you? There’s an alternative to that?
Mitch Mitchem (13:57)
Yeah, please. Yeah, let’s put that to an end. Yeah, send your AI avatar to break up with your girlfriend.
Dave Bricker (14:03)
Right, right. So let’s talk about content for a moment, because any elementary school student can go to ChatGPT and start a weekly blog on, say, particle physics. They don’t have to know a thing about the topic to produce an eloquent, interesting weekly newsletter. So content, in the blink of an eye, has become worthless currency. In the information economy, we’re seeing massive inflation. So
Mitch Mitchem (14:15)
Ha ha.
Dave Bricker (14:31)
How do we stand out in the age of AI so we can lead and influence? Is clever the new currency? And can we use AI to amplify our unique human strengths?
Mitch Mitchem (14:42)
Yeah, to me, I think AI is going to always be a writing tool from this day forward. So I think there will be lazy creators who use it in those terms. However, your analogy is interesting: a young person who maybe tries to write an article about particle physics. That’s interesting, but it will lack the depth needed to make it more human-related. What I mean by that is, it takes a breadth of experience to know how to apply things in the real world, and AI doesn’t have that mechanism. So if you’re going to write content and talk about application, that’s a very human component. If I were a producer of content, my focus would be on application, on the human side, on real-world scenarios, things that are real, things that are tangible. I would get away from generalized content, which, by the way, I think has been a problem long before AI started. I think we were getting into this place where everyone was just trying to look smart and write these long, drawn-out narratives that were never really about applied knowledge. They were just sort of generalizations. So I think content should focus on the real human experience, the real human elements, and things that are in real life.

I even think on video the same thing is true. There’ll be so much fake video that if you’re a creator and you’re standing on the fact that your videos are about real things and real people and real content, I think that will make a massive difference in how people relate to your content. I think it’ll be hugely differentiating. And by the way, if you think about it, and you know this probably as well as I do, most humans are terrible at being authentic. So if you are good, as a creator and a content manager, at being authentic and real and driving content that’s very human, you will stand out just by nature of doing that, because most people aren’t very good at it. So that approachability and authenticity will make a massive impact, and I think that’s what the future of content will be for real humans.
Dave Bricker (16:59)
And again, I’m loving your answers. I’m really glad I had you on, because this is just the way I look at all of this stuff, but I’m concerned about some of the people who don’t, because I think they’re missing out. Now, so many AIs are based on LLMs, these large language models, but let’s question that for a moment. Is an AI that’s read 500 million Facebook posts any better informed than an AI that has never read one?
Mitch Mitchem (17:04)
Me too.
For sure.
Dave Bricker (17:29)
Where do expert systems come into the picture? Why aren’t we seeing more AIs that have trained on the ideas of 20 professional copywriters instead of 20 million rambling opinions, gossip posts, and cat videos in the general media stream? What? Go ahead. Yeah, the large language model seems to have its liabilities.
Mitch Mitchem (17:41)
Right, right.
Yeah.
Yeah
It can. I think it depends on the model. You know, if you look at the models that we use a lot, we use Claude 3.5, we use ChatGPT currently, we use Pi, we use Perplexity. And those models, to us, are the ones that have enormous current integrity. They also were not trained, especially Claude and ChatGPT, on Facebook posts or social media. They were trained on actual knowledge: actual important information, physics, science, math, different thought leaders who wrote books. They were trained on real content, real quality human content. I think that alone starts to shape how the model responds.

If you look at other models, like Grok, which is part of Twitter, or Llama, which is part of Facebook or Meta: I’m not sure if Llama is trained on Facebook posts; it may be. Certainly Grok is trained on the totality of the information inside of X, or Twitter, though I think it’s trained on other content as well. But you can see the difference in the models. ChatGPT is not a sarcastic model. In my opinion, it’s not negative, it’s not hateful. But oftentimes using Grok, for example, it can be highly sarcastic, almost rude, funny, goofy. So to me, as a business professional and a speaker and a person who has to run my company, it lacks the level of seriousness that I need. To your point, I want it to have more training on real knowledge and wisdom and information and tests and surveys and real data that matters to the rest of the world, versus sarcastic, quippy X-feed information. I’m not really interested in that. Matter of fact, I want to get away from that. I don’t want to be around that. So I think the models are trained differently. ChatGPT is batch trained, so it’s trained and continuously updated on the collective of human knowledge that can be put into it at that time, including academic studies, including mathematics and science. So I believe we’ll gravitate to the models that have that credibility versus the ones that are just as noisy as social media.
Dave Bricker (20:19)
Love that answer. Just as noisy as social media. Good way to put it. So Mitch, I share your fascination with the human side of AI interaction. And that’s been part of my work too. What are some of the tips that you can share that our listeners can apply on their own to get more useful results from AI? Because so often we get terrible results because we ask terrible questions.
Mitch Mitchem (20:23)
Right.
Yeah. I think my first piece of advice, and I used to have a different piece that’s now my second, but I’ll start with the first one. My new first piece of advice is: before you ever touch AI, whether it’s in your personal life or your business, ask yourself, what are the opportunities I want to solve for? What are the things I want to use this for? Is it for therapy? Is it for communication? Is it to write emails? Is it to come up with creative thinking? What are the things I need it for? That will shape which model you use. So, for example, for therapy I use Pi. For creative ideas I use Pi. For my work, kind of a catch-all is ChatGPT; it’s also very good at being creative and at ideation. Different models have different purposes. So the key is to ask yourself, what is my opportunity? What am I trying to do with this?

Then I think the second piece is, when you’re starting to practice prompting, how are you talking to the model? My second piece of advice would be: stop talking to it like it’s a Google search and start talking to these models like they’re people. What I mean is, talk to them the way you would if you had a really smart assistant in your life. How would you talk to that person to get them to go get the results you need? Would you say, go find me five things? Or would you say, hey, for my vacation, I’m really interested in going to one of several cities. Here’s the kind of weather I like. Can you please help me figure out which one of those cities has the better weather in September? Which one has the better restaurants according to what I like? And then come back and report that to me. You would probably be more articulate that way to give your human assistant
Dave Bricker (22:23)
Ahem.
Mitch Mitchem (22:38)
a chance to get the right result. This just happens to be your AI assistant, and if you want good results, you’ve got to learn how to communicate and prompt and talk to it like a person, not like a search. You want to be thorough in that prompting, and then realize there’s no one-and-done prompting. It’s: I prompt, it communicates back, I then communicate with it, and we go back and forth over and over until I feel good about the results. So it’s a conversation, and it’s building on top of my conversation. To me, those are the two big important pieces: what are the opportunities, why do I want to use it, and then, am I talking to it like a person? Because that’s going to give me the best result.

And then I think the third piece is: am I trying to prove it wrong, or am I trying to let it teach me? That’s a big problem, because when I see people who say, see, it’s not as smart as I thought it was, they’re trying to prove it’s not as smart, and they’re wasting their time, because it’s probably their prompting that’s causing the problem. They’re trying to prove a negative bias instead of saying, hey, today I want to learn about giving a new kind of presentation, and I want to see how much I know versus how much I don’t know. I want to make sure I’m accurate. Hey, ChatGPT, test me on this. Help me. I’m trying to build this presentation. Teach me what I don’t know. Now I’m using it as a tool and a collaborator instead of from a negative-bias kind of position. So those are the three things that I would focus on the most.
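Here is a minimal sketch of the conversational, multi-turn prompting described above, using the OpenAI Python SDK. The model name, cities, and prompt wording are illustrative assumptions, not details from the episode; the point is the running message list that makes each exchange build on the last instead of being one-and-done.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A detailed, assistant-style request rather than a terse search query.
messages = [
    {"role": "system", "content": "You are a thorough, friendly travel assistant."},
    {"role": "user", "content": (
        "For my vacation I'm deciding between Lisbon, Barcelona, and Athens. "
        "I like mild weather and seafood. Which city has the better weather in "
        "September, and which has restaurants I'd enjoy? Please report back with "
        "a recommendation and your reasoning."
    )},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# Follow up on the answer instead of starting over, the way you would with a person.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Great. Now draft a four-day itinerary for your top pick."})

follow_up = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(follow_up.choices[0].message.content)
```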
Dave Bricker (24:14)
And there’s additional value to be gained from treating the AI as if you’re talking to a human being, because as soon as you start talking to anybody or anything in a less-than-human manner, you’re diminishing your own humanity. And I think that’s one of the very things people are afraid of. There’s nothing to prevent you from making a polite request
Mitch Mitchem (24:31)
Well said.
Dave Bricker (24:38)
or thanking somebody or something for good information. That just elevates you as a human being.
Mitch Mitchem (24:43)
Yeah, yeah.
Yeah. And these models were designed and programmed, especially ChatGPT or Claude or Pi, even Gemini to an extent, to respond well to positivity. So affirmations, saying, hey, could you please do this, and thank you for doing that. They’re built with that kind of human interactivity, so they are also very responsive to positive input. And keep in mind, the algorithms are built to give you the right answer, the answer you need to be satisfied. They’re trying to help you. I would never talk to a human being, to your point, and say, I know you’re trying to help me, but I want to prove you’re dumb. That would be a terrible way to live. As a human, I’m just not going to do that to other humans, especially humans who work for me, or my family or something. It’s more, I’m seeing you as a collaborator. I want to learn from you. I want to grow. I want to get the right answer. I want to be successful.

And I think if that’s, call it openhearted, open-handed, if that’s the way I’m approaching AI, then I’m going to get a better result. And let’s be honest, what I’m trying to do is make my business faster and my life happier. I’m trying to get things off my plate so that I’m more efficient and have more time with my family. So I want to use those tools to do that, and then I’m going to interact with them in the most positive way possible.
Dave Bricker (26:08)
Excellent. So if you’re just joining us, you’re tuned into Speakipedia Media for aspiring and professional speakers and thought leaders who want to make more money by changing hearts, minds, and fortunes. My guest today is AI expert Mitch Mitchem. So Mitch, I saw your presentation at the National Speakers Association in Denver and got to connect with you there. And in that presentation, you shared some AI tools
Mitch Mitchem (26:31)
Yeah.
Dave Bricker (26:35)
that you just mentioned, and not all of our listeners will be familiar with them: Beautiful.ai, ChatGPT, Claude. Any others we should keep an eye on?
Mitch Mitchem (26:46)
Yeah, lots. I mean, they change fast, so there are a lot of new tools coming out. The ones I tend to mention first are the ones that we at the Hive use every day. We definitely use Beautiful.ai. It’s great for presentations, building slide decks and presentations. It has an AI feature that will build you a six or eight or ten-slide deck from a prompt in about 40 seconds. It’s remarkable. So we use that a lot to template and build presentations, which, if you’ve ever had to build a PowerPoint, you know takes an enormous amount of time. We use it as a tool to streamline that.

Then we use one of either Claude 3.5 Sonnet or ChatGPT. Those are large language models we use for both analysis and craft, for writing, building agendas, even analyzing our content. They will analyze our facilitators and myself as a speaker. They will analyze our transcripts and tell us what we’re missing, where we could make a presentation better. We can’t go into all of it today, unfortunately, but they’re incredible tools. That analysis, especially as a speaker, can give you an incredibly neutral place to ask, how can I be a better speaker? You upload a transcript, you upload your slide deck, and you say, I need to make all this better. I need to make it more clear, more logical, and it will guide you through a process to do that. We love those two tools for that, and we use them in tandem. Some are better than others. Claude is better at data analysis, but ChatGPT has a voice feature I can use on my phone, and I like to be able to talk to it sometimes in the car. That’s just me; I do that a lot. So that’s a lot of fun. And then we use Pi as more of an empathy machine. It’s great for therapy. It’s great for creative ideas. It’s very human-feeling. It also has a feature where I can talk to it, which I like.

And then the other ones to watch that are coming: Google is desperately trying to fix Gemini and make it more robust. Can they accomplish it? We’ll see. But I think we’re going to see Google not fail, because Google doesn’t want to fail and it has the money to not fail, so it’s going to put a lot of effort into it. So I’m keeping our eye on Google for sure, on Gemini, and watching what Llama is doing with Meta. These other tools will continue to emerge. I also believe, and this is something for all of us to pay attention to, that each model, although they’re unique and there are tons of them now growing all the time, will eventually gravitate towards being an all-in-one source. In other words, I think ChatGPT at one point will have everything that all the other ones have, and vice versa. You’ll have to make a decision based more on which model you just like versus features. Right now, they don’t all have the same features, so you may have to have two or three, but I don’t think that lasts much longer, maybe another year or two, before they start to emerge as freestanding, dominant options that you just pick between.

But we’re not seeing that yet. It will happen eventually. They’ll all have search, they’ll all have more information, you’ll be able to talk to each of them, they’ll all have incredible image creation, and you’ll just pick the one that you favor. But right now they’re different, so they’re going to create different output, and we go with the ones we like. There are other models, again: I would keep my eye on Llama, and I’d watch what Grok is doing. I don’t think Elon Musk fails easily, so he will continue to refine xAI. We’ll keep our eye on that and see if it changes over time.
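As a rough illustration of the transcript-analysis workflow described above, here is a minimal sketch using Anthropic’s Python SDK for Claude. The model identifier, file name, and prompt are illustrative assumptions and not the Hive’s actual process; it simply shows uploading a talk transcript and asking for neutral coaching feedback.

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Hypothetical transcript file exported from a recording of a keynote.
transcript = open("keynote_transcript.txt", encoding="utf-8").read()

# Ask the model to act as a neutral speaking coach and critique the talk.
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model identifier
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Here is the transcript of my keynote. Tell me where the structure is "
            "unclear, which points lack evidence or stories, and how to make the "
            "opening and close stronger:\n\n" + transcript
        ),
    }],
)

print(message.content[0].text)
```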
Dave Bricker (30:34)
So you touched on this idea about using AI to generate images and video. I’ve got to say, I’ve had really mixed luck with that. More often than not, the AI simply fails to follow directions. And I’ll give you an example. I have a client who has a dental AI. It reads dental X-rays and it finds all the stuff that dentists miss 30% of the time. It’s a really cool company. And we know that AI is great at pattern matching.
Mitch Mitchem (30:39)
Yeah.
Dave Bricker (31:03)
I, for some of their slide decks, have needed some images of dentists. It always draws the dentist with a stethoscope. And I say, dentists do not wear stethoscopes, please draw the dentist without it. Then I get another image with a dentist wearing a stethoscope and it says, I have rendered this now without the stethoscope. It doesn’t realize.
Mitch Mitchem (31:25)
Mm -hmm.
Dave Bricker (31:27)
So am I doing something wrong or is AI just not ready for prime time in the area of image and video creation? And do you have any hacks you can share? Because I need them.
Mitch Mitchem (31:37)
Hmm. Yeah. I think it depends on the prompt and the tool. That’s the short answer. We find that ChatGPT, even in its current form, has an erase tool in the paid version, so you can erase something in an image and say, remove this. You can basically highlight it. So ChatGPT is getting better faster, but it is linked with another tool. As far as freestanding tools, you have Midjourney. Grok is working, I think, with Midjourney. You have DALL-E, which is inside of ChatGPT. So you have these image renderers that are getting better and faster, more than I think people realize. But still, the key is the prompt. I’ve seen people put in prompts where they say, give me an empty room with no elephants, and of course it puts in an elephant. Well, the question is, why did you even put the word elephant in there to begin with? Just say empty room. So oftentimes when we have to lay out a description, we have to ask at this stage, why does it keep doing that? What could I say differently from the beginning to remove that? And why doesn’t it pick up on me just saying, remove the stethoscope? Well, because, like you said, the models aren’t all universally at their best form.

Creating imagery or video requires a whole lot more physics and artistic work than just creating text based on knowledge. They’re very different universes, and that intelligence will take time to develop and really build out. But a lot of the models are getting better faster. A lot of the video models, like Flux, are incredible now, and yet you still have to be kind of a programming-minded person to get it right. Once we get to the stage where you and I can just pick up our phone and say, show me an image of a dentist, and it gets it perfectly right because it has all that knowledge, that day is coming. I just think right now those models are still being built on and iterated and changed.
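Following the “describe what you want, not what you don’t want” advice above, here is a minimal image-generation sketch using DALL-E through the OpenAI Python SDK. The prompt wording and image size are illustrative assumptions; the idea is to describe the dentist’s actual equipment rather than mention the stethoscope at all.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe only what should be in the frame. Negative phrasing such as
# "no stethoscope" tends to plant the very object you want excluded.
prompt = (
    "A friendly dentist in blue scrubs and a face mask, holding a dental mirror "
    "and probe, standing beside a dental chair in a bright modern clinic, "
    "photorealistic style"
)

result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024", n=1)
print(result.data[0].url)  # link to the generated image
```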
Dave Bricker (33:48)
Yep. The funny thing is I was able to take the image I got, put it into Photoshop, and use its AI, Firefly, to outline that area and say, get rid of the stethoscope. So I ended up with an image I could use, but I’m using AI to fix AI. It’s kind of a funny thing.
Mitch Mitchem (33:58)
Right, right.
Yeah, but I mean, isn’t that where we’re headed, right? Think about that power that’s in your hand versus having to give that over to a developer to do it for you, or to a graphic artist or something. And by the way, I just did “show me a dentist” in ChatGPT while you were saying that, and no stethoscope. So there you go.
Dave Bricker (34:26)
Interesting. Okay.
Mitch Mitchem (34:28)
There you go. Lots of tools to carve out my mouth, but no stethoscope, so that’s good. So that’s… mixed results.
Dave Bricker (34:31)
Exactly, exactly. It’s just mixed results. And look, this has really been my experience. When I first started playing with AI, I got really lame results, and I became one of those people who, like you said, kind of had a bias. I just wanted to show everyone how stupid this is, because really, I wasn’t asking it the right questions. I wasn’t learning how to ask it the right questions.
Mitch Mitchem (34:49)
Right, right.
Dave Bricker (34:59)
And I’ve discovered that with the right prompts, I can generate some really clever material, including writing some code that I don’t have the skill to write on my own, because AI has all the manuals for every programming language. It doesn’t have to go and look things up the way I have to: how do I do this, is there a way to do that? It just knows. So

I think of AI almost as a power tool. Send someone to Home Depot to buy a table saw and they’ll be missing fingers by the end of the day. But in trained hands, that saw can produce precise, eloquent, and creative work. So my question is, is skill with AI a new job requirement that employers are going to need to assess, measure, and provide training in?
Mitch Mitchem (35:28)
Mmm.
Hahaha
Yeah, that’s a great, great question also. I believe it is. Currently we’re in kind of this golden age of augmentation, where our company rolls out a ton of this training to companies all over the world, and there’s an incredible need right now to teach people the skills of how to use the tools. And that requires that you really know how to use them, so that skill transfer has to be authentic. But I think there will come a time in the next few years where AI will be in everything, much like electricity. It’ll be very human-like. We’ll be interacting with it in a very different way. So I think for now, at least for the next four or five years, we’re going to have to really get good at interfacing with it, and then eventually it will evolve past that point and interface with us better.

But for right now, I do think there’s a skillset needed. You have two kinds of people in a company, just to simplify it. You’ve got people who are trying to use it, want to use it, are using it, or are at least interested. And then you’ve got detractors who say, there’s no way I’m touching it ever, I think it’s terrible, it’s cheating and lying and all these crazy things. Those two groups are never going to see eye to eye. I’m more concerned about the other group, because imagine you go back 25, 30 years: if someone had told you they refused to use the internet because they think it’s terrible, and they said that today, you would think they’re nuts. But you go back 30 years, to the mid-nineties, and a lot of people said, this is a fad, this is not going to work, it’s going to ruin our lives. Maybe that’s true or not, but the point is nobody wanted to use it. Now everybody thinks it’s silly if you don’t use it. I think the same is going to happen here. With AI, you need the skills to learn how to use it right now. You need to know how to do good prompting. You need to know how to get good output. You need to master those skills as fast as you can, because compared to the person next to you who doesn’t want to do that, you have a huge advantage. And especially if you’re on your own, if you’re a speaker, if you run your own business, not knowing these tools will be a massive disadvantage in your marketplace against the people who do know them.

The good news is you’re not really behind; everyone’s trying to learn the tools, so it’s not like you’ve lost decades of time. And once you learn the tools, you gain so much wisdom, knowledge, and information so quickly that you make up whatever time you thought you lost, because now you’re hyper fast and hyper efficient and really smart. It will make you smarter if you’re willing to learn. So I think you need to know the skills, you need to be open to growing, and you need to let it be a part of what you’re doing every day. And then I think you’re going to have great results.
Dave Bricker (38:37)
Yeah, it’s interesting, because you and I have both been speakers for some time and we didn’t start out in the AI space. Now there’s an opportunity there, right? I talk about storytelling, so obviously there’s a natural relationship between storytelling and AI. And yet, will we be speaking about AI four years from now, three years from now? I don’t know, because it’s changing so quickly. I think that whatever we do, we may end up using AI to find where we’re going. But no matter what it is people are doing for work, I think they need to keep their eye on the ball, because it’s bouncing fast.
So let’s go back to this area that we’re both engaged in, which is that human side of the equation. Now your stated mission is to rekindle the passion and drive that make us uniquely human while harnessing AI’s incredible potential to elevate our lives. Unpack that a little bit more. What kind of work are you doing with this in the hive?
Mitch Mitchem (39:40)
Yeah. So the Hive is a culmination of my past and now, both the human element and AI, but essentially it’s focused on the human skills needed for the future. That was really how I built it first, about three, three and a half years ago. Coming out of COVID, I identified that we have a human skill gap problem, which I think a lot of people did, both in communication and just in how we interact with each other. And that’s just grown worse over time, not better. We’re still very bad at how we interact. We’re bad at empathy. We’re bad at emotional control. We’re bad at communication. The human race has largely slipped into this terrible place in how it articulates and interacts with itself. So that was really my main mission. Then when I got into the AI side of it more fully, about two and a half years ago, even before ChatGPT came out to the public, to me it was about this: we’ve got to leverage whatever tools will make us faster behind the scenes, but then we’ve got to sharpen these human skills, these human abilities. So we are always focused on that. If we’re teaching a presentation skills course, if my team is teaching communication, if we are teaching how to be a better leader, yes, we’ve got AI tools embedded into that. But at the end of the day, how are we helping them be a better human and a better interacting individual? That’s what we’re focused on.

And then on the entertainment division side, that’s about building experiences that are highly engaging and non-technological. We’re actually pulling the tech out of those experiences and making them about authentic, high-energy, fun, engaging human-to-human interactions. I think those two things are going to coexist. And like you just said, when I first started talking about AI a few years ago, one thing I said to both myself and my team was, look, we’re not going to ride this AI thing very long. I just want to be really clear: we’re going to do it until there’s critical mass, and then we’re going to continue to do the work that we’ve always been doing, which is the human element. We’re never going to let those two things run separate. We’re never going to say, now we’re an AI training company. Nope. We’re always going to be the human element company. Because I can tell you, just from us teaching so many thousands and thousands of people how to use the tools of AI, it always becomes about: how do I communicate, whether that’s with the AI or with others? How do I build my own ideas? How do I frame teamwork into this? How do I become more authentic? That’s still using AI, for tools and for learning and for growth, but we still have these human elements that we haven’t tapped into. So we’re doing a lot more presentation skills training now on top of the AI training. We’re doing a lot more
skill-based training around leadership than we’ve ever done, because now people are starting to realize that if AI takes a lot of this thinking work from us, we had better get better at the human element. We had better get better at the things that make us human, like communication, empathy, emotional control, community building, how we relay our ideas, how we talk to each other. All of this has to be honed. So for any speaker out there who has been teaching human-based skills, I think your golden era is coming. I think there’s a whole new era of need that has never been expressed, especially with younger people who want authenticity and want to communicate better. So I think there’s a golden age coming for human skill, and we’re going to need a lot of people out there on the battlefront teaching those skills. I’m glad you’re doing it. I’m glad others are doing it. But yes, AI should at some point run out of need to be taught, much like going back to when my parents were in business. You go back to the eighties, when they were at the height of their careers and email was introduced. Everyone had to learn how to use email. There were whole classes and etiquette and training until everybody knew how to use it, and then that went away. You still need to learn how to be communicative and talk to people. So I think those human skills will always be needed, now more than ever. AI will time out as far as education.
But I don’t think the human skill part ever ends. I think we’re going to be doing that forever.
Dave Bricker (44:01)
Well, I don’t think I could contribute anything to that answer that wouldn’t dilute it. That was wonderful. So thank you. Mitch Mitchem, where can our listeners discover more about you, the Hive, and what you have to offer?
Mitch Mitchem (44:14)
Awesome. You can just go to a human hive .com the letter a human like the human hive like the beehive .com a human hive .com. If anyone wants to reach out to me, I love throwing my email out there to Mitch at a human hive .com. You can see everything we’re doing. You can see updates and feedback and news and stories and, and content and all the great stuff we’re doing. So yeah, that’s the best way to find us. And, and we love interacting with people. So if you’re just saying hello or
or just letting us know what you think or giving us ideas or saying you want ideas. We’re always there for people. We want to put the human first. That’s what we’re about.
Dave Bricker (44:53)
Mitch Mitchem, I’ve so enjoyed talking about AI with you today. Thank you for being my guest.
Mitch Mitchem (44:55)
Same, me too.
Thank you for having me, it’s awesome.
Dave Bricker (45:01)
I’m Dave Bricker, inviting you to explore the world’s most comprehensive resource for speakers and storytellers at speakipedia.com. If you’re watching this on social media video, please love, subscribe, and share your comments. If you’re listening to the podcast, keep your hands on the wheel, stay safe, and I’ll see you on the next episode of Speakipedia Media.