The Calculus of IT

Calculus of IT - Season 2 Episode 9 - Part 3 - Emerging Technologies and Their Impact on Autonomy

Nathan McBride & Michael Crispin Season 2 Episode 9

The epic, unfiltered three-part saga is complete!

In Episode 9 of Calculus of IT, Nate, Mike, and Kevin wrap up their marathon exploration of IT autonomy with a bang, a glass of red wine, and a healthy dose of existential dread.


 We connect the dots from AI’s autonomy paradox to the dystopian wearable future, from the “search paradox” to the real-life horror of onboarding your new director of biologics into a world of federated AI, proprietary prompt lexicons, and company-specific search models.

This finale covers:

  • The rise and fall (and rise again) of IT skills—and why “learning velocity” is your only hope
  • Why onboarding in 2027 might include an AI agent interview and a prompt-writing test
  • The ever-expanding “digital sovereignty imperative” (say that three times fast) and how your data is only as safe as your vendor’s least secure country
  • The chaos and comedy of enterprise search (“How the f*** do I find anything?”)
  • Why the real future of IT might be adjudicating LLM hallucinations and arguing with your own agent

We wrap it up with a practical framework for surviving (and thriving) in the next two years of IT, a toast to autonomy, and a reminder to always be cool to your IT folks, your animals, and your elders—because one of them will be running your next onboarding session.

Next week: compliance, tokens, and just how far your digital karma can stretch.

Stay tuned. Stay witty. Stay autonomous.


Support the show

The Calculus of IT website - https://www.thecoit.us
"The New IT Leader's Survival Guide" Book - https://www.longwalk.consulting/library
"The Calculus of IT" Book - https://www.longwalk.consulting/library
The COIT Merchandise Store - https://thecoit.myspreadshop.com
Donate to Wikimedia - https://donate.wikimedia.org/wiki/Ways_to_Give
Buy us a Beer!! - https://www.buymeacoffee.com/thecalculusofit
Youtube - @thecalculusofit
Slack - Invite Link
Email - nate@thecoit.us
Email - mike@thecoit.us

Season 2 - Episode 9 P3 - Final - Audio Only

AI Trance Bot: [00:00:00] World where signals we compute our dreams, data streams and by balloons make us

the[00:01:00] 

IT

I waiting for.

Nate McBride: Oh, well I can feel it. Uh huh. 

Mike Crispin: Sounds great, man.

What are you eating? 

Nate McBride: Bit of honeys

AI Trance Bot: the first time, the last time we have met.[00:02:00] 

But I know the reason why Silence.

The hurt doesn't show, but the pains to rose some stranger. You.

I,

Kevin Dushney: and here we are. 

AI Trance Bot: Hello. Hello. 

Nate McBride: Hello. Hello. It literally, that was such a perfect coda, just as the song ended, you showed up. I was gonna listen to, um, "Billy Don't Lose My Number" next if you didn't show up, but I, I spared us all the tragedy of that song. Well, thank you. Yeah. So my dinner consists of, um.

Gran [00:03:00] Passione Rosso and bit of honeys. Yeah. Get that baby uncorked, 

Kevin Dushney: uh, tree house. Well, let's see. Not a focus but tree house for dinner tonight. So, oh, there you go. 

Mike Crispin: Well, did you have a dinner tree house. Oh, nice tree house. That's a good mix. Yes it is. 

Kevin Dushney: Ah, Treehouse and ai. 

Nate McBride: Peanut butter and jelly. Yep. I bet you Treehouse uses AI to make their beer.

I bet 

Kevin Dushney: they 

Nate McBride: do. They're like PP poop chat. GPT How you make beer. 

Kevin Dushney: They uplevel their chemists. Yep. 

Nate McBride: They don't even have people working there anymore. All the AI makes all the beer.

Uh, let's see. That was pretty awe inspiring event today. Maybe we'll get into that a little bit. 

Kevin Dushney: Absolutely. That was not worth the time sitting in traffic other than No, no, [00:04:00] other than the last part of just sitting around having some bourbon. But that was good. 

Nate McBride: That was good. We could have just done that. I know.

And then speculated about, about everything, but we didn't do that. Oh, well we're we're good customers. We did the good customer thing. 

Kevin Dushney: Yeah. We're just getting started. 

Mike Crispin: Yeah, we're just getting started. We're just getting started. I 

Kevin Dushney: love, I mean, literally we are because we just, we just. Also, it's just getting 

Nate McBride: started.

Gx It's taken us, it's taken us 16 years as a company, but we're finally getting started today. We've 

Mike Crispin: been getting started for a long time. 

Nate McBride: Yeah. Well, on this, on, on this podcast. We're just getting started. We are, we are just getting started. 69 69 episodes, by the way. 

AI Trance Bot: Ooh, woo. Oh yeah.

Nate McBride: Yep. Yin and Yang, baby. Wow. What are we 

Mike Crispin: [00:05:00] gonna, what are we doing to commemorate the 69th episode? 

Nate McBride: Well, I'm eating a bit of honeys for dinner with some red wine and 

Mike Crispin: Sounds good going. That's a good way to do it. 

Nate McBride: I'm going ham on this one 

Mike Crispin: ceiling. My ceiling hasn't caved in yet above me, so that's good. Oh 

Kevin Dushney: yeah.

What's going on there, Mike? I just noticed that now that you said it. 

Mike Crispin: Love that. Yeah, 

Nate McBride: we're on, we're on 

Mike Crispin: ceiling watch day 

Nate McBride: 50 water leakage. I don't, 

Mike Crispin: no. If it's just, uh, collapse. So if I'm, you guys don't hear from me next Wednesday. I've been buried. 

Nate McBride: You're in the basement under, with all my, 

Mike Crispin: with all my technologies and toys.

That's the perfect, I wouldn't throw it right here. Might save my life. I'll be like, I can't move, but my Vision Pro is on, so I can at least watch hours for the two hour and eight minute battery life 

Nate McBride: until the battery die. You know, what you can do is you can ask Chad TBT what to do. [00:06:00] 

Kevin Dushney: Just like,

Nate McBride: that'd be pretty cool if that happened right in the middle of the podcast. We should try and time that so that we have, we see. No, I 

Mike Crispin: don't think it would be very cool at all. That would not be, no. The rating, 

Nate McBride: the ratings would go through the roof. I'm, I'm telling you, 

Mike Crispin: we might make the local news. That'd be cool.

Nate McBride: Hey, where did we end up with creating, um, our own a IL version 1.0 AI transcription language. Did we, did we finish that project? No, 

Mike Crispin: I don't know. 

Nate McBride: I don't think so. I have 

Mike Crispin: no idea. Did No, no, we didn't, we didn't finish it. 

Nate McBride: So we didn't create the proprietary IP gateway that allows me to connect with you directly over ai.

Mike Crispin: No telepathy yet. 

Nate McBride: Okay. Well, we'll, we'll add that to the list for next week, uh, to get that done. Put that right at the top of the list. I mean, I'll have your 

Kevin Dushney: agent call. My agent now has an entirely, 

Nate McBride: it wouldn't it be great if I didn't actually have to talk to either of you [00:07:00] anymore if I could just have my AI agent talk to you?

I really, honestly, I put up a good front so far, but I really don't like talking to either of you ever. I'd rather just have my agent do it and then just gimme a summary of what it, what was discussed, what, what's going on. Yeah. Yeah. 

Mike Crispin: I mean, there are. It just takes the role of the human agent. Right. We'll all have our agents and you'll, my agent will do lunch with your agent and yeah.

Leave a message on my machine. 

Kevin Dushney: Great. This is great. Well, we had, we had a virtual lunch today and we agreed upon the following things. 

Mike Crispin: Yeah, yeah, yeah. All right. Sounds good. Well, okay. Doing what, what do you want to do while they're doing that? 

Kevin Dushney: Uh, I mean, might as well just give power of attorney to your agent while you're at it.

Mike Crispin: I thought it'd begin with an H eight, but, you know, 

Kevin Dushney: I want to, you want the agent to be able to determine when to pull the plug, it'll be like, uh, a Seinfeld episode. If I 

Nate McBride: send, I wanted, 

Kevin Dushney: you have no central nervous system, but you're still [00:08:00] breathing. Yeah. 

Mike Crispin: Transfer my consciousness to an AI so I can be an agent forever.

Nate McBride: Well, what, what if you were, what if you became someone else's agent? What if you were like, sold on eBay as a decent agent, middle aged male, you know, strong background in IT, leadership skills, and like p someone was like, uh, uh, for Bitcoin, I'll buy it. And then you became someone else's agent. 

Mike Crispin: We're going So Black Mirror right now 

Kevin Dushney: trapped inside a stuffed animal, right?

Mike? 

Mike Crispin: Yes. 

Nate McBride: Oh, Michael Rustin. 

Mike Crispin: Little, little egg that sits on the, yeah. 

Nate McBride: Or what about a little piece of shit Orange device called a rabbit? Mike, what do you think about that idea? 

Mike Crispin: Yeah, let's do it. That great. And there's a little camera that moves around. 

Nate McBride: Do you think I should get one of those that would sell?

You should get one. I should get one. Yeah, I'm gonna do that. 

Mike Crispin: I, I [00:09:00] think they should have been passing, passing out those jitter. What are those things? Jitter bug pins today? 

Nate McBride: Jitter bug pins. 

Mike Crispin: What was the thing that I sent you guys yesterday that the, um, 

Kevin Dushney: no, the, the thing that clamps on Limitless. 

Mike Crispin: Limitless ai.

Kevin Dushney: Yeah. 

Nate McBride: Yeah. What hell 

Mike Crispin: was the thing called? 

Nate McBride: Yeah. So because of that stupid piece of shit thing, now I have to put a policy in my company about wearable recording technology. So actually it was inevitable, I suppose 

Mike Crispin: The pendant, the pendant, that's what it's called, 

Nate McBride: The pendant suppository. You can, you can hide it in places that nobody would look and record all conversations around you all day.

My god. This is gonna, your, your 

Kevin Dushney: workplace is gonna become like To Catch a Smuggler, and we just won't have a workplace anymore. Why have, why even have a workplace? Subject to x-ray and, uh, sir, I believe you're smuggling an AI agent device in your body. Listen, 

Nate McBride: this is 

Kevin Dushney: the [00:10:00] 

Mike Crispin: AI guys. You just gotta, we just gotta get one.

Nate McBride: This is the future of AI and autonomy here. Why do we still have offices? 

Kevin Dushney: Mike's pretending like 

Mike Crispin: he. You can preserve conversations, you can reflect on your day. Mike, where, where is yours hidden 

Nate McBride: right now? There's no way organizing have a tracking number yet. Yeah. Mike, where is yours hidden right now?

Organize it. 

Mike Crispin: You can organize anything. 

Nate McBride: Yeah, it's game, it's all, you know what AI's about? It's all about organizing, streamlining, and efficiency-izing. That's what AI is. Time 

Mike Crispin: for a rush start. Just wear a pin around. 

Nate McBride: That's what it is. So welcome back to the End of Day's podcast. Tonight we'll be focusing on 

Mike Crispin: end of days.

Nate McBride: Tonight I'll be focusing on subterranean oxygen regeneration systems [00:11:00] and how to keep your DVD library safe during the Holocaust. Um, hopefully 

Kevin Dushney: autonomy, autonomy has turned into Doomsday Doomsday. Sorry that, 

Nate McBride: that's my other podcast script. This is the calculus of IT podcast, where we break down what it really takes to be a successful IT leader in a world that's basically going to eliminate all IT leaders within the next three years.

So, if you're gonna be an IT leader, you have about three years to be the most awesome IT leader possible. And then it's over. So read, read all the books, listen to all the podcasts. Hurry up. Hopefully we finish season two before, well, there's no more IT leaders. And then, um, it'll, it'll maybe be it agents before 

Kevin Dushney: Acal also has complete, 

Nate McBride: uh, cognition and control.

Season three of the calculus of IT podcast will be run by agents and it'll be about IT agents and, um, their agent problems. No, we're not doing it. The [00:12:00] agents are going to do it. So 

Mike Crispin: season three, we'll have three agents on, 

Nate McBride: we'll just do it. Yeah, no, we're not gonna have them on. The agents are gonna have themselves on, we're not.

I'm gonna be, why don't we have the agents interview 

Mike Crispin: us? We should do that. 

AI Trance Bot: Forget, 

Mike Crispin: it'd only be like a couple, maybe a hundred thousand tokens. No problem. No problem. Come right out of the budget. No problem. Right? Yeah, exactly. Yeah, we just, 

Nate McBride: yep. So the podcast, well, if you join, if you join, if you're joining us for the first time, I find that hard to believe.

But anyway, we're deep into season two, where every episode is about one thing and one thing only: autonomy, or the loss of it, in IT leadership today. But today we're talking about tomorrow. Well, actually in episode nine, parts one, two, and now this one, three, we've been talking about autonomy and IT leadership in the tomorrow.

But before we get into that, I had a billion dollar idea. [00:13:00] Are you ready? A billion. Billion. Okay. What is it? So you're at a concert at the Orpheum or, say, your favorite concert hall. And everybody around you decides to take out their phones and record the show on their shitty iPhone camera so they can never watch it again, and then distract you from being able to see your favorite artists playing.

What if you had a mini EMP that you could adjust the radius on? Ooh. Up to say, 20 feet. That looked like a phone. Behaved like a phone, but when you pressed it, it would EMP, you know, everything in that radius so that you could enjoy your favorite band, but without messing up the band. Yeah, that would be cool.

Billion dollar idea. 

Mike Crispin: Now, would it permanently destroy the devices or it just shut 'em off temporarily? 

Nate McBride: Um, I think there'd probably be some mods for like additional [00:14:00] destruction, but in this scenario it would basically render them useless and, and then it would power them down. You could power them back up, but then you could just keep running the EMP.

Yeah. Uh, blast as many times as you wanted during the show until everyone, you know, learned the lesson. 

Kevin Dushney: There you go. 

Nate McBride: And just watch the band instead of watching the band through their phone. Yeah. They no longer get the pellet and uh, they actually pay attention. Yeah. So anyway, just an idea. Billion dollar idea, by the way.

And you would call it Exchange-a-gram? No, you'd call it, um, what would you call it? We'd call it the, uh, the banding. Disrupt-a-gram. Disrupt-a-gram. 

Mike Crispin: Alright, perfect. Disrupt-a-gram. Perfect. Now, I think, I think if you wanted to do this today, you can use a Flipper. Have you heard of the Flipper before? Yeah, that's a [00:15:00] Flipper.

And I think there are still AirPlay or, um, AirDrop vulnerabilities that it can exploit. So everyone's phone keeps popping up with, there's an Apple TV available, Apple TV available, and they have to click, click an okay. And they can't do anything else with their phone. And you can, I've read about that.

Nate McBride: Okay. Well, uh, until my billion dollar idea has evolved out of the, um, proof of concept stage, uh, everyone on this podcast or all, all the audience members should invest in a flipper. Next time you're gonna go see a concert, um, and then you can explain, you know, to everybody how much you enjoyed the show because you weren't distracted by everyone watching the show through their cameras in front of you.

Kevin Dushney: You can clone your badge with that too. 

Nate McBride: You could, you could absolutely do that. Yes. Uh, did you guys see, see the, um, the CISO, the CISO of JPMorgan wrote a letter on the JPMorgan blog called [00:16:00] An Open Letter to Third-Party Suppliers? No? You should, you should Google it, but I'm gonna read you some snippets.

This is an awesome letter. And finally, someone kind of called it out who's like a, a so-and-so. Um, it starts off with: the modern software-as-a-service delivery model is quietly enabling cyber attackers and, as its adoption grows, is creating substantial vulnerability that is weakening the global economic system.

That's an opening sentence. Love it. Three bullets. Software providers must prioritize security over rushing features. Comprehensive security should be built in or enabled by default. We must modernize security architecture to optimize SaaS integration and minimize risk. And security practitioners must work collaboratively to prevent the abuse of interconnected systems.

Um, it goes on to say, SaaS has become the default and is often the only format in which software is now delivered, leaving organizations with little choice but to rely heavily on a small set of leading service providers, embedding concentration risk into global critical [00:17:00] infrastructure. While the model certainly delivers efficiency and rapid innovation, it simultaneously magnifies the impact of any weakness, outage, or breach, creating single points of failure with potentially catastrophic system-wide consequences, uh, and therefore 

Kevin Dushney: loss of autonomy.

Nate McBride: Yep. That's why I, I, I, I love this letter. And so it goes on to say, risks extend, extend beyond concentration alone. Fierce competition among software providers has driven prioritization of rapid feature development over robust security. This often results in rushed product releases without comprehensive security built in or enabled by default, creating repeated opportunities for attackers to exploit weaknesses.

Most critically, SaaS models are fundamentally reshaping how companies integrate services and data, a subtle yet profound shift eroding decades of carefully architected security boundaries. In the traditional model, security practices enforced strict segmentation between a firm's trusted internal resources and untrusted external interactions using protocol termination, tiered access, and logical isolation. External interaction layers like APIs and websites were intentionally separated [00:18:00] from a company's core backend applications and data.

External interaction layers like APIs and websites were intentionally separated [00:18:00] from company's core backend applications and data. However, modern integration patterns dismantle these boundaries, relying heavily on modern identity protocols, EGO auth to create direct, often unchecked interactions between third party services and firm's sensitive internal resources.

I'll just read two more bits further. Compounding the risks are specific vulnerabilities intrinsic to this new landscape: inadequately secured authentication tokens vulnerable to theft and reuse; software providers gaining privileged access to customer systems without explicit consent or transparency; and opaque fourth-party vendor dependencies

silently expanding this same risk upstream. Critically, the explosive growth of new value-bearing services in data management, automation, artificial intelligence, and AI agents amplifies and rapidly distributes these risks, bringing them directly to the forefront of every organization. Um, this weakness is known to attackers, who are now actively targeting trusted integration partners.

Microsoft Threat Intelligence recently authored a blog post noting that Chinese state actors were [00:19:00] shifting tactics to simply target common IT solutions, like remote management tools and cloud applications, to gain initial access to their downstream customers. And he goes on to talk about the call to action.

It's a great letter. I recommend you read it. Again, it's by Patrick Opet, who's the CISO at JPMorgan Chase. Um, it's published on their blog. Um, it's a great read. 
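
A quick aside on the OAuth point in that letter: here is a minimal, hypothetical sketch of what "check your third-party grants" can look like in practice, auditing exported OAuth grants against a least-privilege allowlist. The scope names and the grants.json export format are assumptions for illustration, not anything from the letter or from a specific vendor's API.

```python
# Hypothetical audit of third-party OAuth grants against a least-privilege allowlist.
# Assumes you have exported grants from your IdP or SaaS admin console to JSON like:
#   [{"app": "AcmeCRM", "scopes": ["openid", "files.readwrite.all"]}, ...]
import json

# Scopes we are willing to let third-party apps hold (illustrative values only).
ALLOWED_SCOPES = {
    "openid", "profile", "email",
    "files.read",      # read-only file access
    "calendar.read",   # read-only calendar access
}

def audit_grants(path: str) -> list[tuple[str, set[str]]]:
    """Return (app, excess_scopes) for every app holding scopes outside the allowlist."""
    with open(path) as f:
        grants = json.load(f)
    findings = []
    for grant in grants:
        excess = set(grant["scopes"]) - ALLOWED_SCOPES
        if excess:
            findings.append((grant["app"], excess))
    return findings

if __name__ == "__main__":
    for app, excess in audit_grants("grants.json"):
        print(f"[REVIEW] {app} holds broader access than policy allows: {sorted(excess)}")
```

Crude, but it is the kind of check the letter is implicitly asking customers to run before taking a vendor's "we're secure by design" page at face value.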

Kevin Dushney: JPMorgan Chase blog or, or something? Yeah, the 

Nate McBride: JPMorgan. jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers. Okay. So basically it's a, um, a drawing of the line for all JPMorgan suppliers, and I think it's a brilliant template for anybody else who's kind of grown tired of the lack of transparency in their vendors.

That same vendor, Mike, you were referring to the little, um, suppository recording device. 

Mike Crispin: Yes. 

Nate McBride: Um, their website had a single security page that was the most obsequious piece of shit security page We're secure, [00:20:00] totally 

Mike Crispin: secure. It's so funny. It's a picture of a, of a safe Yeah. That they, they gradient into secure by design.

Your data stays private. 

Nate McBride: Yeah. And so everyone's out there paying $400 for these devices. They are like, oh my God, it's totally secure. Look at the website. It says so mm-hmm. Sheep. It's, uh, ask questions. People before you buy shit. 

Kevin Dushney: I'm sure they, you know, took a thorough review of their SOC 2 before ordering that.

Nate McBride: Oh, yeah. Before I let some little device record every conversation that, once recorded, is immediately discoverable. Anyway, we won't get into that. What's that? Keep going. Sorry. Sorry. No, keep going. No, sorry. No, I'm done. I was gonna go back to the, um, the script, the script script. So if you missed the first two parts, oh, Mike, come on.

Mike Crispin: No, go ahead. Sorry. 

Nate McBride: No, no, you go. [00:21:00] Mike, say what the 

Mike Crispin: fuck you wanna say. I was gonna say third, third-party risk is, uh, is, it has been a huge issue for a while. It's only, I think, exacerbated now that there are many other ways that it can be exploited without even humans involved. 

Nate McBride: Yep. Yep. Yeah, so someone's trailing you and you're, you're like an executive, so, so and so somebody or other, and you have one on your lapel and you're sitting at an important lunch.

This person walks up and just takes it, uh, from you. Now, I know it's supposedly directly connected to the iPhone, and therefore, but you know, that's all breakable and, or you're wearing one and you say some nefarious shit and you get pulled over and the cops like, I'll take that. I mean, there's all, you.

Just think about all the potential scenarios of recording anything that you do, and then play them out. 

Mike Crispin: What about the, the Black Mirror episode where the couple, right, they were together and it records all of their history. It's recorded everything in their life, and they end up [00:22:00] not being able to forget anything, and they're able to go back and reference their past, even things they wanted to forget.

Yeah. Full disclosure, I only ever watched the first episode, the beginning of episode 

Nate McBride: with the guy with the pig, and then I never watched another episode, but I've, I don't need to watch the episodes to know as much as I know about Black Mirror because you're missing out Everything today is a reference to Black Mirror or a Roger Waters song, or both.

You would love it, dude. It's right up your alley. 

Kevin Dushney: I think you're, I think you're missing out, Nate. 

Nate McBride: Yeah, but I don't wanna watch a show that's about the terrible shit. I think about all the time. I just need to think about the stuff I think about all the time, and I don't need to validate it by watching a show.

You know what I mean? It's like, it just exists. 

Kevin Dushney: You don't need to feel the dystopia. Fire. 

Nate McBride: I don't need, I don't need to go any more dystopian than, than we already are, so. Speaking of dystopia, if you missed the first two parts of episode nine, hit pause. Go back, give them a listen. You can find them; they'll be right below this episode in the episodes list. Uh, and listen to them in order, because trust me, what we're tackling tonight [00:23:00] is built right on top of what we've already covered. But here's a recap. Here's the TLDR recap for you. Part one, uh, from two weeks ago: we dove into the autonomy paradox of AI integration.

You can find them. They'll be right below this episode in the episodes list. Uh, and listen to them in order because trust me, what we're tackling tonight [00:23:00] is built right on top of what we've already covered. But here's a recap. Here's the TLDR recap for you. Part one. Uh, from two weeks ago, we dove into the autonomy paradox of AI integration.

E.g., how artificial intelligence can simultaneously increase your autonomy by automating routine tasks and freeing up resources. We heard a big thing about that today from a big-time vendor, but it also quietly creates a whole new set of dependencies and constraints. We challenged the hype cycle and asked, are we using AI to solve our problems or just papering over broken processes?

I'm a big fan of the latter, uh, that we're just all trying to, uh, compensate for the fact that we don't know how to solve the problems. Myself included. But at the same time, who knows, maybe it is more than just a clever toy. We talked about how AI isn't a magic bullet. It can amplify what's already working, but if you're automating chaos, you're just gonna get faster chaos.

And we also hit on the evolution of it skill sets. So the shift from technical execution to AI orchestration. And again, uh, [00:24:00] we were at a thing today and, and of course the theme keeps coming back to you gotta have a charter, you gotta have a council, you gotta have a committee, you gotta have a, all the things, um, that's AI orchestration and how the real value for IT leaders is moving from, can you build this to, can you direct AI to build this and understand what it's doing?

And finally, we broke down, um, data sovereignty in the AI world. And so in part two, we picked up last week with the conversation about the speed-versus-control trade-off. Certainly, uh, generative AI, no one would disagree with this, lets you move at machine speed, or it lets you direct an agent that moves at machine speed, but what do you lose when humans can't keep up with the decisions being made? Which we can't.

We discussed circuit breakers, oversight layers, and how to find the right balance for your risk profile. Then we looked at the evolution of vendor relationships. I just referenced the JPMorgan Chase letter, which is part of this evolution. Model lock-in is the new vendor lock-in. So if you're going to go ahead and settle up with Claude, Gemini, or ChatGPT for your enterprise AI plan, just know that in a year you could be switching the entire [00:25:00] vendor, uh, because something better comes along, and then everything kind of goes up in flames, because how do you port your AI world over?

We gave practical strategies for maintaining leverage and flexibility even as you personalize and fine-tune your stack. We got into risk transformation and risk management in an AI world: how things like algorithmic bias, emergent behavior, and explainability gaps are creating new classes of risk that old frameworks just simply can't keep up with.

And we closed with a lively debate on quantum computing and edge, or the edge, and not The Edge like the guitarist, but like the edge of the internet. The edge of what? Just the, we discussed the edge. That's how deep we went. Quantum computing, quantum's looming threat to encryption, or looming

boon, if you're a hacker waiting for quantum to decrypt. How edge computing is changing the locus of autonomy and putting processing closer to data, but also [00:26:00] creating, therefore, a more complex distributed world for IT teams to govern. And with that, tonight in part three, we are gonna close this mother out.

And we're gonna land the plane. We're going to talk about maintaining and building expertise. And this is more than just upskilling. We're talking about the real future of search. I cannot wait for this part. We're talking about digital sovereignty trends and regulatory shifts forcing you to rethink your autonomy strategy.

And most importantly, how do you as an IT leader turn all of this shit, all this turbulence, into some semblance of a lasting autonomy edge for your organization, at least for the next three years while you're still employed? We'll wrap up with a practical framework for building your autonomy strategy, and perhaps we can get to an IT archetype.

We promised to do this many episodes ago; we have not. So pour yourself a drink, grab a notebook or your favorite AI transcription tool suppository, and let's bring episode nine home with a bang. [00:27:00] Woo. Love it. But before we do that, I had a side question for the two of you about the future of autonomy and our little AI agent buddies that we're all gonna create: what's gonna happen to email in the future?

Your, your, your hot takes on. 

Kevin Dushney: I, I think it's on the decline, right? For a few reasons. You can take it as two approaches. One is just, uh, a generational thing, right? Where the desire to use email, at, you know, even millennials, but certainly Gen Z, is going like this. Yep. Also, as technology continues with, with AI, I think you've got, you know, uh, a force multiplier here: technology and lack of desire to use email.

Uh, I just, I don't think it goes away, but I think the reliance on it goes down significantly over the next three to five years, maybe even less. But certainly within that timeframe is my [00:28:00] prediction. 

Nate McBride: And do you think that, that it will be because that's, it's our chosen methods of communicating or that email is too slow or that it's, it's, it's a major gateway for security problems or some, I 

Kevin Dushney: don't, I think people don't wanna communicate that way anymore.

Nate McBride: Yeah. I agree. I don't, 

Kevin Dushney: it's, it's like you're a slave to your inbox versus, like, well, if I'm on Slack or Teams, I can set myself, like, I'm away, do not disturb. 

Nate McBride: Yeah. 

Kevin Dushney: And if I want to chat, we're collaborating in real time, either directly or with a team and it's more efficient. Yeah. 

Mike Crispin: Mike, I definitely work that way, the way that, uh, you are both mentioning.

I, I love to have the instant back and forth in the instant message world, and it's a much better collaborative model with people. I, I feel it's great; I'm a big proponent of it. Right. What I will say is that I think email, it'll be around continuously over time as long as people still [00:29:00] have it. Now, I thought it would be gone already, but we still have it.

And I think there's a couple reasons why. Uh, and it's not that there aren't better tools and better ways to, to, to do it. One is, it is sort of the middle point where you can collaborate internally and externally with very little friction. Mm-hmm. In an asynchronous manner. And I think there's still a number of people who work on a, like, a task-list basis.

They want to have everything organized in their little folders. Yeah. It's a database. And email is, I can put this over here, I'm gonna do this in five minutes. I can put this over here. It's gonna happen in two months. And people still like that list that can be, uh, listed over and it's internal and external, all in one place.

I still think that's why it's lived on as long as it has, even though I don't think it's as efficient as everything that we have. Yeah. But in, I think largely it's efficient for a, a lot of us because it's still [00:30:00] what everyone on both ends of the spectrum use. They have to use it. Yeah. So once there's a way for those messages to be managed at each end.

Yeah. 

Kevin Dushney: And beyond communication, there's also an entire ecosystem of, think about all your platforms like NetSuite, Coupa, CLM, you name it, where all the, all the workflows 

Nate McBride: Yeah. 

Kevin Dushney: Are predicated upon email. 

Nate McBride: I mean, I think about exactly that scenario. And I think, okay, I mean we already, we already know there are ways to convert, you know, SMTP and POP and IMAP and whatever to other forms of communication, text.

Um, and so that's been around for quite some time. The, I think the, to, to Mike's point, you know, it's having this list of things I have to do. If, you know, if there, there's probably, well, no, I can think of places where you can also have email sent to lists. I mean, Google has been doing that [00:31:00] nicely for a while now with its add-this-as-a-task, um, and Microsoft, to the extent that it does it as well, um, does the same thing.

It's the, uh, it's that step that we have to add as a task. Yeah, you can get to a point in time, I think, in the not too distant future where you simply like, I'm gonna send something to Mike's task list and rather than hit him with an email, I'm gonna send it right to his task list. And then that's, it's kind of like an inbox, but it's also an opportunity for us to immediately begin a chat from that, that email that or that new announcement that's shown up in your, I don't know, inbox.

But I dunno. I think that for a lot of people, I can think of, certainly, both of my kids, who don't use email, um, communicate almost entirely outside of it. And I think it's only gonna get, it's, it's, it's only going to escalate very quickly in a short period of time. Agree. 

Mike Crispin: I, I think it's, [00:32:00] it's got another, I think it may be there because of those legacy system communication methods, certainly, that Kevin and I kinda mentioned. It's just gonna be there as long as it's still kinda sitting around with, with old systems, integrated systems.

But, um, certainly the, the current generation and next generation, they're just not gonna have the patience to use it. The, it's gonna be, I need to get something done, I'm gonna ask someone to do it and it's gonna get done. Yeah. And that's it. 

Nate McBride: Well, I mean, speaking of the age groups, the other thought I had, um, the other day was, you know, the divide for AI and how fast it's growing and I kind of put it into four tiers.

So I think, again, I have these two wonderful analogs in, in my kids who, um, for their use of ai, both in college, it's part of their daily life. Mm-hmm. Uh, but they don't look at it as a game changing thing. They look at it as another [00:33:00] way to, to essentially achieve the same goal. So, for instance, my daughter still does hand notes.

She still goes to class, she still listens to things, but then she's using AI to do like, build quizzes for her. Uh, on certain topics and te and test her knowledge. Yeah. Uh, of things. Um, not like write this paper for me because obviously modern day 

Kevin Dushney: flashcards, right? 

Nate McBride: Modern day flashcards. Yeah. That kind of business.

And, but it, but it wasn't even a transition. It wasn't like, oh my God, what's this new AI thing? They just kind of slid into it. Yep. It just kind of naturally progressed from their Google world right. Over into this thing. No second thoughts about it. And then I think about the next group above them, which is essentially our group, which, you know, kind of still has two, two sort of subgroups.

One is people that are all in, and people that are just starting to figure it out, or at least aware of it. Then people that are sort of in the group above us who are, I wouldn't say pushing back, but definitely averse to making a change. They have their [00:34:00] way, you know, they're doing their thing. And then of course you have the final group, like my parents, um, that level, who have no idea what's going on with, are they getting 

Kevin Dushney: phone calls from, uh, 

Nate McBride: they get the phone calls?

No. 

Kevin Dushney: No. 

Nate McBride: Getting I not using the grandparent service. Hey Nate, 

Kevin Dushney: they, I talked to my mother this week and I mentioned ChatGPT, and I got a "what?" Yeah. I mean, they, they just don't need to. You've never heard of ChatGPT? She's like, oh yeah, I've heard of it, but 

Nate McBride: yeah. So, you know, with, with all these major advancements, obviously the biggest group that's most involved is those that are in the corporate world.

Kevin Dushney: Yeah. 

Nate McBride: Doing corporate work. But then think about the split that's happening in society. You know, for sure there is going to be a group of people that knows everything about this or knows enough and there's a group that's going to be completely disenfranchised from what's happening with this whole 

Kevin Dushney: Yep.

Nate McBride: Thing. Yep. And they [00:35:00] will only grow further away because the rate of trying to keep up is extraordinarily high. 

Kevin Dushney: Yeah. 

Nate McBride: Agree. So this is, this is kind of like a, when you think about autonomy in IT and autonomy in the future, um, it almost feels like, I mean, we talked about wearables a little bit ago, but it almost feels like we are going to continue to erode the consumer side of IT autonomy.

And as a result, the question will be still, how will that bleed over into corporate IT autonomy. So, and, and, and speaking of the skills part, that's where I wanted to jump in tonight, which was, um, IT skills in general. So there was a, I, maybe I'll show it on the podcast if I can find it, but I found a chart, uh, I think two days ago that showed where skills would be in 2030.

Based on a, a huge survey, I think the n was in the thousands, [00:36:00] on which skills HR leaders feel will be needed the most in 2030. And unfortunately, in the bottom left corner were things like, um, uh, skills that required dexterity, skills that required emotional intellect, sort of all the EQ skills would be gone, uh, skills that required relationship building would be gone.

And the things they would be replaced with, which is of course, as you can imagine, the ability to interact with, um, computers. And so, like, I think about it this way: in the past, IT skills had pretty long half-lives. I mean, the things that we, all three of us, learned 25 years ago, oh my god, still important, right?

But um, it may be still important for another five to 10 years, maybe even longer, but still, um, think about what somebody who got into IT five years ago learned; they didn't get the things we got, they got almost none of that, probably. Um, you know, the, the idea that IT skills have some kind [00:37:00] of half-life, uh, and the older, the farther back you go, the longer that half-life is, I think is a true thing.

Once you, once you learned a programming language or a system, that knowledge remained valuable for years or even decades. And in our case for sure, that's absolutely true. Today, the half-life is shrinking dramatically. Uh, particularly in things like AI, quantum computing, edge architectures, because you only need to know what's happening at the moment.

You don't really need to rely on what you know in the past because the past is going to just essentially simply dissolve. Uh, and most of what you're gonna learn is not gonna be relevant. Um, and this is mostly related to like the bigger things, you know, telephony, server administration, but there's a few others of course we could name.

So then this creates a fundamental autonomy challenge, not a paradox, by the way. This is a straight-up challenge. How do you maintain the expertise needed to make independent decisions when the knowledge base is changing so rapidly? So this is a general question, but think about this for a moment. If you need to create autonomy, and you're thinking, like, okay, cool, I [00:38:00] got this.

I just learned this big thing, it's given me all these insights. And then six months later, that whole thing that you just learned has been upended by a new thing. Yeah. How do you maintain autonomy? 

Mike Crispin: Oh, man. 

Nate McBride: I mean, so what I, what I had, what I had written was that. I'll just answer my own question and drop it over to you guys.

But most people in this case will then shift over their expertise to external consultants. 

AI Trance Bot: Mm-hmm. 

Nate McBride: Because they're the ones who are a little bit more agile and able to move from technology to technology and shifting vendors. And this can provide access to expertise, of course, but then it comes at the cost of autonomy.

That's the challenge. So either you stay up on it or you rely on a vendor to stay up on it. But either way, you have to do this at the cost of autonomy about decision making. So I don't know, I dunno about you guys, but these days, I mean, who hires somebody for a very particular niche skill unless they're doing something so archaic.

So like when push comes [00:39:00] to shove, I will rely on external vendor if I have to, who has a special skill that I don't have. But then the goal for me is immediately to learn it from the vendor and or add it to my Google list of things I'm gonna learn on my own later. 

Kevin Dushney: That's the thing: do you, do you load up an LLM and just learn it yourself, and then it becomes a function of time, just, you know, versus, you know what, I don't have time to learn this.

I have a need now, so I'll just outsource 

Nate McBride: something and learn it later. I think. I think the learning part is certainly something that lets you retain autonomy, but Yep. Then you repeat that process every certain period of time, which means that. In order for you to stay relevant, you have to fall into this pattern.

And we have to go way back to the beginning of the season. But when we fall into these, these repeatable patterns of having to do something in order to, to maintain currency, that's actually a loss of, a loss of autonomy. [00:40:00] It's not keeping autonomy. Mike, 

Mike Crispin: I'm chewing on this a bit. I, 

Nate McBride: okay, 

Mike Crispin: well, I had another, I had, 

Nate McBride: I had another point.

I mean, if you want, go. Good. Well, to the point about relying on an external consultant, right? So we all use external consultants for various projects and things that we simply don't feel it's necessary to hire an FTE for sometimes. Mm-hmm. So in terms of, like, let's use fucking generative AI without internal expertise: any company, any company would be immediately, completely dependent on their vendor's recommendations about approach, architecture, and implementation, which is akin to a straight-up zero decision.

And when a problem arose, you would have no way to independently evaluate the vendor's proposed solutions. You'd literally be, okay, you know, whatever you say. So you would trade autonomy, a hundred percent of your autonomy, for access to expertise. In that case, [00:41:00] you can, like, go to 5%, which is, oh, I, I know some, some, I know about that a little bit.

And the vendor's like, yeah, yeah, whatever. Anyway, we're gonna do this stuff. You have a little bit, but you can kind of talk the talk, but you still don't know. You have to go to some, like, a higher level, which then begs the question that Kevin just talked about, which is, well, why not? Why not then go all the way?

Mike Crispin: I think that's where it's headed 

Nate McBride: all the way. Do you think that everyone in your department will eventually have to learn every single skill that everyone else in your department knows, at the same rate or velocity, or will you segment it out? I was talking with someone today who's a colleague of ours, who is trying to rethink and reimagine his IT organization, and um, I said, well, you know, there's just, you got the four common buckets, right? You got, um, identity, governance, 

You got, um, identity governance, [00:42:00] um, uh, employee experience and operations. Mm-hmm. And I was being kind of generic there and maybe people can think of more, but I said, but every single person in each one of those groups should be able to slide to a different group. I said, that's the next, the next big trick.

Someone's gotta figure out how to pull that one off because every single person that you put in those groups is going to have to know what everyone else in the other groups is doing in, in a year or two, in my opinion. 

Mike Crispin: I think when we, so in the first season we talked a lot about kind of the number, number one slash number two, how we want to put it.

Yep. And we talked a lot about employee experience. 

Nate McBride: Yeah. 

Mike Crispin: I think as time goes on, the more that we lose sort of the autonomy from a skill perspective or from even, not so much from a decision making perspective, I guess, but from, [00:43:00] um, the problem solving perspective maybe that we leverage AI or an outsourced firm.

The same progression that's happened with, like, service desks and help desks in IT is that we hire these strong interpersonal connectors and people. Yeah. The hardest thing for AI to replicate, I think, will be the delivery mechanisms, the ways in which we deliver and enable organizations by just being human.

Yeah. And I think that more and more of the problem solving the technology, the automation, the stuff that we still hire, sort of hands on keyboard or we outsource our hands on keyboard type tasks, uh, whether it be coding, development, big data, you name it, more and more of that will be automated. And the important thing will be how it's delivered through, in, in, in the aspect of through humans.

Mm-hmm. Do we have good humans that can translate the messages that can help people along or that can present, [00:44:00] or that can. Just think or ask the best questions and spend most of their time on that. Now, we may not need as many of those people, which is the scary thing to think about, but I think the hardest thing for AI to replicate, at least they'll, I'm sure someday it will.

But robotics and other things is just the delivery mechanism in which results are communicated in which problems are solved. We, we use all the machinery in the backend, the ai, the tools, and that communicates as well. But 

Nate McBride: yeah, 

Mike Crispin: being here and now in the physical space, I think still humanizing and having a human element, those skills are gonna be the ones that you're gonna need the most.

Yep. Even more so as we move forward, those are the ones that are gonna be the people who deliver the most value in any function in a company. I, 

Kevin Dushney: I, I don't think you can take that away. It's like people that are invested in the [00:45:00] company, the culture, you can't, I mean, it's even hard to replicate that with other humans that don't work for you.

MSP, right. People, right. Yeah. And they don't, they're not gonna care as much as somebody that actually works there and is invested in it. So, you know, and AI is even another step removed. So I, I, I agree. I think you cannot eliminate the human component, but I think you're right, Mike, that you may need fewer of them.

Right. There was an article I was reading about, I think, I think it was Pfizer or another big pharma, that, um, in their medical writing group and regulatory, they weren't eliminating positions due to AI, but they were hiring less for that. 

AI Trance Bot: Mm-hmm. 

Kevin Dushney: So I think that's perhaps the tip of the spear, but it's an indication that hey, we can be either more efficient or upskill or a combination of both with ai, even [00:46:00] as it sits today.

Yeah. Produce, you know, highly controlled regulatory documents. Now the key is obviously the human review at the end, but the prep can now be sped up and automated and made so much more efficient that you can zip through to the point where, okay, the expert human now is reviewing it before it goes in to the FDA, which is really what you care about.

Right? Yes. 

Nate McBride: The, uh, I want you to think about this maybe tonight or tomorrow at some point in time. Think about this. Uh, but I have been, in terms of what you just said, Kevin, 'cause there's a lot of ways to, to, to sort of think, think about this thread. So let's suppose that you're able to reduce a group of people 

AI Trance Bot: mm-hmm.

Nate McBride: By automating a process, uh, thereby improving outcomes and improving effectively the quantity of outcomes. Um, I happen to think that. And, and having [00:47:00] thought about this quite a bit, I happen to think that you'll actually come back to hiring those people back. 'cause what you'll want to do is, is create more outcomes.

You'll find that you can only create so many outcomes with three people, but if you had those other two people back and had five, you could then create, say, 40% more outcomes. And so what it may take is a bit of a downsizing first to, uh, reach some kind of, like, uh, FTE parity for what you're doing today.

But the second that you start realizing velocity and you're, and as a result of realizing velocity, you're therefore creating more content, you're just going to find yourself back in the same place where you left. 

Kevin Dushney: Yeah, that's a good point. Potentially 

Nate McBride: even need more people. 

Kevin Dushney: Maybe the pendulum swings too far and then there's a, a course correction back towards equal, right?

Yeah. Yeah. I think that's what you're saying. Yeah. [00:48:00] 

Mike Crispin: So, and I do think, let's talk about the course correction. Sorry to jump in. No, um, we're still, we, we're still kind of in the, uh, still in the enlightenment phase. Maybe we're moving too fast with AI, maybe we don't even, it just keeps being a bumpy road as we go forward in terms of people being psyched about what it can do and then scared, and psyched and scared.

Yeah. If there's a major AI driven mistake that impacts thousands and thousands of people, hopefully that never happens. But if it does, that will cause a big pendulum swing back as well. 

Nate McBride: Why? Well, why would you say hopefully? It will happen, and you want it to happen, because that tests the resilience of this AI machine.

Mike Crispin: Yeah. Well, no, I, I mean, if it's a disaster that, you know, costs human lives or if we don't want Oh, oh, like existential 

Nate McBride: disaster level? Yes. Yeah. Yeah. 

Mike Crispin: Something that is truly knowledgeable to all generations and understands, like some things that are small within a company. Yes. That's gonna, it could happen to make a second guess ai, at least from a company perspective, but 

AI Trance Bot: [00:49:00] Sure.

We're 

Mike Crispin: talking, like, 60 Minutes, or like something that AI did that's bad might make the whole kind of commitment to auto, automating the world through it, um, get revisited. It's so often there's a lot of optimism until something goes sideways. 

Nate McBride: You don't even have to think existentially, Mike, you can boil this down to a city level.

Imagine a city is unable to get food. Give them one week. Give them, give them one week. And then you have a, you have a Stephen King novel on your hands. Um, and to your 

Mike Crispin: point about loss of autonomy, is that if AI's doing it all, nobody knows how it's been done, probably. Right? Right. There's a bigger issue.

There are no experts anymore, 'cause it's all been delegated to something. Well, we don't know how it works. Like even when, when ChatGPT came out, uh, at least 3.0, and there was all the excitement, there was a lot of concern that when they asked OpenAI [00:50:00] about how it was doing what it was doing, and Google DeepMind actually as well.

Similar questions. I'm not quite sure how it works. 

Nate McBride: Well, that was, that was the, that was the old Netflix story. That's still 

Kevin Dushney: the answer. 

Nate McBride: Yeah. Like 

Kevin Dushney: from Altman, you know, Sam Altman, they're like, well, we're not really sure. They just know it does 

Nate McBride: well. So the underlying question to all of this then is the, like, maintaining this expertise, right?

Maintaining this autonomy. So how do we do it? That's what this podcast is about. Or at least that's what the season's about, which is maintaining that, that autonomy in IT decision making. And so, um, I put down a couple thoughts. One of those is that we have a new metric. Kevin, you and I had talked a few weeks back about, you know, how do you really measure the value of AI?

Like, what's the secret sauce? Like, how do you get there to derive its value? It's, it's nearly impossible right now. I think it's so subjective on so many levels, even to capture a baseline and then try to do some kind of delta, [00:51:00] uh, again. But I think there is a metric you can use. And I, I call it learning velocity, which is, um, the speed at which your team is able to learn new tech. E.g., you know, when Kevin sits down to learn a new language, that takes him six months and it's good for six months, but then he has to learn a new one six months later.

That's it. Learning velocity. Do you have, um, does your team experiment with new tech? Are they given the freedom to do so? Do they have a culture of knowledge sharing or people being siloed? Um, or are you just chasing the latest vendor certs? I mean, ultimately that, that velocity effect of learning is a huge asset to departments.

How fast can your department learn new things? And that is both quantifiable, it's qualitative, uh, as well as, uh, quantitative. And you should think about that. You can even use it today. You don't need to wait for this giant AI cataclysm. You can go ahead and start measuring today. How fast can your department pivot?[00:52:00] 

You know, how, how fast? Look around at your people: if you needed to give them a brand new skill to learn, how fast could they do it? That's your metric. Um, don't stockpile experts, build expertise. So both inside and outside of your company, you should have, um, a very, very strong group of people who have very, very diverse specialties.

Mike Crispin: Mm-hmm. 

Nate McBride: And that's way more valuable than hoarding arcane skills. You may still need that one person who's an absolute genius when it comes to, who knows, SAP HANA or some shit. But, um, even, even then, maybe hit, maybe hit Glassdoor and, and or whatever. What's the, what's the site that hires people by the hour? It's not Glassdoor either. Fiverr. Whatever. Fiverr, Fiverr, job spot. Uh, I don't know, whatever they're called. Different website, Mike. Yeah. 

It's not Glass Story either. Fiverr. Whatever. Fiverr, Fiverr job spot. Uh, I don't know, whatever they're called. Different website, Mike. Yeah. 

Mike Crispin: Oh, I'm [00:53:00] sorry. 

Nate McBride: Yeah, it's, uh, yeah, man. Go to Manscaped. That's where the, like experts are. 

Kevin Dushney: Mechanical Turk. 

Nate McBride: You can create structured learning loops, which is, uh, just basically, you know, it actually falls outta the change management camp. You know, if you're not doing postmortems on all of your projects, shame on you. But you can take postmortems and you can actually, um, build them up and you get into continuous improvement. And CI, um, at, at its core, is all about running loops: I did this, what worked, what didn't, do it again, what worked, what didn't, and so on and so forth.

You know, if you're not doing postmortems and all of your projects, shame on you. But you can take postmortems and you can actually, um, build them up and you get into continuous improvement. And ci um, at, at its core is all about running loops. I did this, what worked, what didn't do it again, what worked, what didn't, and so on and so forth.

Kevin Dushney: Yeah. 

Nate McBride: Um, you can double down on your core skills, but build processes for rapid, contextual skill acquisition, which means everyone in your department should be on parity with some set of skills. So if you set the bar, and I, so in all my departments, I set the bar: you must have ITIL 4 Foundation. You must have A+, Network+, Security+ to be in any role in my department.

That's the baseline. Now, [00:54:00] beyond that, yeah, you're gonna have, like, your special little secret sauce, but then I expect if one person gets a cert, everyone else gets to that same cert. You get AWS Associate, everyone gets it. Everyone gets Associate. You get blah, blah, blah Python, everyone gets Python. So everyone has to be part of this too.

If you get someone on your team who's like, I'm not wearing that shit, 86 them. Or maybe they're super special and you wanna keep 'em around, they're good at getting, you know, a special organic beer or something. Um, they have a really high score in Ms. Pac-Man, whatever their special skill is, maybe you wanna keep 'em around.

But, um, just-in-time learning is moving from comprehensive training to modular, contextual learning that happens closer to the point of need, almost like edge learning, which is, okay, I have to learn how to configure a Meraki MR56 right now. So how do I do this? And I don't have the capacity to go take a, uh, Cisco class.

I mean, we've all been doing this for years and years. Just Google it, right? But this is [00:55:00] the kind of thing that you can bring internally, uh, collaborative learning, which is, as I was just talking about, instead of one person getting trained, get two people trained, um, and let them share with each other what they've learned.

And then lastly, meta-learning. So teaching teams to learn more effectively. So this is a, something I, I haven't done for a very long time, 'cause I haven't had big teams, but when I was at AMAG, um, I taught people, through a whole bunch of different methods, how to be better learners. And it's actually something that, if you think about it, again, as an IT leader: do you do that?

Do you sit back and say, you know what, your learning style actually kind of sucks. Let me help you learn better and faster so that you can adapt more to these other models I'm talking about. So I don't think the goal necessarily is to have every single bit of expertise in the house. I don't think that's ever possible.

But you should have the most that you can get. And those people that you have should be able to then go out and talk to any vendor and [00:56:00] learn or be knowledgeable enough about the topic that you don't need to have the expertise in the house. And I'll pause right there. My mouth is getting dry. I have to drink wine.

Mike Crispin: That'll dry you out, man.

Nate McBride: Mm. Love. I love the roso. Still good the way it, the way it hits your lips. 

Kevin Dushney: Fill up again. 

Nate McBride: Thoughts on any of that? Was I right? Was I wrong? Am I close to the mark? Off the mark? Who's Mark?

Kevin Dushney: I guess one comment on, you know, the certifications, right, the A+, Network+, Security+. Like, in the current day,

Do you think that's still relevant? 

Nate McBride: Nope. Not at all. Um, here's why I make them do it. Number one, because learning about a SCSI port or a parallel port is absolutely useless information. Yeah. But learning about something is not. And I don't know when the last time was someone took a class to [00:57:00] learn a thing.

I can't think of four worse classes to have to sit through and learn. I mean, oh my God, that Network+ exam: here's your DNS tables, and here's how you do subnetting, and learn 16,000 different ways to do your subnetting and routing. Um, and by the way, here's IPv6, which no one's ever going to use in our lifetime.

Yeah. Um, go ahead and learn that too. This is the kind of thing that once you learn it, you can forget it. And I specifically tell my team, you are learning ITIL 4 so that you never have to use it, 'cause I do not use ITIL 4. But by knowing ITIL 4, you can pick the parts of it that you like the best and insert those into your own decision making about how you would do a thing.

Hmm. I wouldn't pay for anything beyond Foundations. If you want to go down the ITIL 4 track, that's on your own dollar, but ITIL 4 Foundations, yeah. I mean, they teach you the things there where you [00:58:00] can say, huh, that's interesting. Nate doesn't do any of that. I wonder why. But I like this one thing that they said.

So I'm gonna go ahead and find out what it is I like about that thing and then do that, but in my own model.

Mike Crispin: It's the willingness to learn too, right? And to go and learn. Some of mine go, here's a specific area, or, what do you wanna learn about?

And then nuance the training based on something you're interested in, that can, you know, add value for their career even more so than for our team.

Kevin Dushney: I agree. I think, I think it's also like a, a, a lit bit of a litmus test of are you willing to learn or are you just gonna say, Hey, here's what I know, this is what I bring to the table.

AI Trance Bot: Yeah. 

Kevin Dushney: That's a, that's, I mean, a huge tell, especially in a small team, you want Oh, totally. Maybe be lifelong learners. Right. 

Nate McBride: I want, I mean, yeah. I'm not looking at people to be like, oh my God, a plus. I [00:59:00] cannot wait. No. At the same time, I want 'em to realize that there's a level of suck in learning that's gonna come, and you can't always just learn cool things, number one.

Number two, these are like baseline functions. And you may have somebody, and I have hired people, you know, twenty-year or ten-year people, who don't know basic networking. Mm. Which I think is a major gap. I don't care if you're like a top-shelf PMO. Yeah. Why are you in IT if you don't know basic networking, or how to do a support ticket?

You can't understand what's going on at those levels if you don't know these basic operational tasks, in my opinion. 

Kevin Dushney: Well you have your black belt, so what do, why do I need to know all that 

Nate McBride: dude, Kaizen. That's all I need, man.

99% dude. 99%. That's my target. 99 out of 99% outta out a million. That's what I'm going for. By the way, that whole, that whole black belt thing was a big, giant joke. I don't know if I ever told you this [01:00:00] story, but I'm gonna segue real quickly and tell you this. When I was at Orchard, I, when I was at Orchard, we were gonna build this 270,000 square foot manufacturing plant in South San Fran.

And I'm like, oh, well this would be a good opportunity to actually go and get our Six Sigma black belts because they'll be useful in a giant manufacturing plant. Um, it's actually, they kind of go hand in hand. Good point. Yeah. So I got the whole team on board. I'm like, okay, we're all gonna pay for this. I got everyone, their green belts, right.

Green belts were pretty easy. I think it was like a couple classes, an exam. No problem. And I was like, okay, who's ready to go get their black belt? And everyone was like this. No, I'm done. This is terrible shit. But I was like, okay, well I know this is kinda like a lead by example thing. It's like, I'll do it and I'll come back and report how it goes then.

So I signed up for the black belt through Vanderbilt. No, Villanova. Sorry, Villanova. And, um, that began [01:01:00] a six-month march through hell, that fucking class. Oh. And of course then I left Orchard, uh, to go over to, um, Ohana. Oh. And uh, I had to finish the class of course, just on principle, and my God, just awful.

And then of course Orchard never built their manufacturing plant. They canceled the project. But, um, anyway, that was actually not a fun story. I thought it would be a cooler story, but as it came outta my mouth, I realized even the telling of a story about getting a Six Sigma black belt is a bad story.

So I'm just gonna maybe cut that out of the whole recording and maybe I'll leave it in so people can learn. Don't take the Six Sigma black belt process. It sucks. Yeah. Um. Anywho, anywho. All right, moving on. Uh, keeping an eye on the clock here. We're doing good. We're doing good. [01:02:00] Okay, so we got two more parts.

So, the future of search. I cannot wait to talk about this. So, uh, before we dive into digital sovereignty, I wanna talk about something that's becoming increasingly critical in this emerging technology landscape. How's that for an opener? The future of search and information discovery. Yep. And again, what we just saw today from this big giant collaboration storage vendor, mm-hmm,

uh, has me scratching my head a lot, and not just 'cause it's itchy. This idea is coming up a lot, not just with clients that I have, uh, through my consulting arm, but with other people that I interact with in both the corporate and non-corporate worlds, which is: how the fuck do I find anything?

That's the question. Now it has variations. Like, how the fuck do I find that thing? How come I can't fucking find anything? Uh, you can just take your pick, but that's really the general question. And further, search is becoming more AI driven; the last few years it's been mostly machine-learning driven.

Now it's becoming AI driven. [01:03:00] The answers are gonna differ based on who's asking and which engine they're using. Mm-hmm. Um, I love the fact that Slack and Box both use machine-learning search, or have until now. And because that algorithm is customizing search results based on you and your experiences, that's going to change and probably suck a little bit.

Well, suck a lot. Not a little bit, just a lot. Um, it's not just a technical problem, it's a new kind of autonomy risk. You actually might lose control of the narrative inside your own organization with regards to finding things. So search has been the backbone, um, of how we navigate the digital world for decades. I can think all the way back to NT 4, even before that actually.

So Windows 95, um, using that little search magnifying glass, and generally you'd get a billion things and it was all unorganized, but you could at least find things. But right now it's undergoing a fundamental transformation that has profound implications for IT autonomy. So here's the reality: search is becoming [01:04:00] more powerful and more problematic simultaneously.

Yeah. Ding ding. Another paradox for us: the search paradox. The AI autonomy, Calculus of IT officially branded search paradox. As information volume grows exponentially, our ability to find exactly what we need is both enhanced by AI and constrained by mounting complexity.

Kevin Dushney: Yeah.

Nate McBride: The challenges are multifaceted. First, we're dealing with unprecedented data scale and diversity. There was a chart today that we saw at this vendor thing that showed these big arrows going, well, basically straight up. Uh, that's the algorithmic scale we've been talking about for a couple years now, Mike, about how, yeah, it's great.

It's great that you can, um, generate gigabytes and gigabytes of data slop, and you're just increasing the amount of shit that goes into your SLM or LLM, therefore increasing the rate at which that curve goes into a straight line. Um, traditional search [01:05:00] approaches will struggle with the volume. So if you have, say, I think there was one person we talked to today that had, or we heard say, 55 gigabytes, no, 55 terabytes of unstructured data in their Box environment, I can't imagine that that search is gonna run very fast.

Um, then, I mean, traditional search tools already struggle with the scale and complexity. Then you add in metadata and all the other factors, and it's just gonna be very, very difficult. Second, we're seeing the rise of ephemeral data. Um, ephemeral data has been around forever, uh, and it's also sometimes confused with metadata.

It's not. Ephemeral data only exists temporarily, or in very specific contexts. Um, think of streaming analytics, IoT sensor data, or real-time collaboration environments. Uh, so Slack data is a great example of this. It's ephemeral, right?

Kevin Dushney: Yeah. 

Nate McBride: From a chain of custody perspective 

Kevin Dushney: on purpose. 

Nate McBride: Yep, exactly. The data may never be formally stored or indexed,

[01:06:00] but it contains tremendous value. So how do you search for something that no longer has, um, its original form? And then third, we face increasingly fragmented data access. Again, that chain of custody problem: it's breaking, it's breaking quick, it's breaking constantly. Information is siloed across multiple systems, platforms, and jurisdictions.

And of course, again, we saw it today In those slides, uh, vendors are proposing, oh, just have an API, it'll just fucking talk to all the other platforms. There's no problem. Data will just transfer across and you'll be able to use one agent to search other agents and talk to the agents and have them search their agents.

Holy shit. From an access control and governance perspective, my mind is blown. Will we be able to create a unified search across operations when data privacy regulations vary so dramatically between regions that no single search approach can be compliant everywhere? I'm sure we will, and everybody will just lose their mind over it.

We'll get there, I'm sure, but it's not gonna work very well. And then finally, we're confronting the [01:07:00] challenge of search literacy. The gap is widening between those who know how to craft effective search strategies and those who do not. 

Kevin Dushney: Yep. With AI.

Nate McBride: Um, I was talking with somebody who's got about 25 years of experience in IT, uh, not too long ago, who did not know that you can use air quotes, or, sorry, use quotes around a phrase to find that specific term.

Oh. Um, that is a search literacy problem. 

AI Trance Bot: Yeah.

Nate McBride: And as search becomes more sophisticated, with semantic understanding (so, what do you mean when you say find the thing with the thing?) and context awareness, the skills needed to leverage it effectively are becoming more specialized. It's almost on the level of becoming a prompt query master: just to write a general search query, you will now need to do this.

And why? Because all you have to do is look at Google search. Google search is now Gemini search. Mm-hmm. Gemini search requires that you write a perfect prompt. [01:08:00] Um, of course, Google's kept the dope prompting, which is, uh, oh, just, how do I do this? And, uh, it will still work, but that's not for long in this world.
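A tiny illustration of the quoted-phrase point a moment ago: exact-phrase matching versus bag-of-keywords matching over the same two documents. The documents and helper functions are made up for the sketch, not any particular vendor's search engine.

```python
docs = [
    "Q3 budget review for the biologics program",
    "Review of the Q3 biologics budget program plan",
]

def keyword_match(query: str, doc: str) -> bool:
    # Bag-of-words: every term appears somewhere, order ignored.
    return all(term.lower() in doc.lower() for term in query.split())

def phrase_match(query: str, doc: str) -> bool:
    # Quoted-phrase semantics: the exact sequence must appear.
    return query.lower() in doc.lower()

query = "biologics budget"
print([keyword_match(query, d) for d in docs])  # [True, True]   both documents hit
print([phrase_match(query, d) for d in docs])   # [False, True]  only the exact phrase hits
```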

So for IT leaders, I came up with a couple things, uh, in terms of risks. One, you get knowledge fragmentation. Two, you have dependency on proprietary search technologies. We come full circle on this, by the way. I remember putting a Google appliance in, at,

Kevin Dushney: ah, I remember that. 

Nate McBride: At, um, TKT. No, no. 

Kevin Dushney: Yeah, probably.

Nate McBride: Yeah. Or no, Cubist. No, no, I think it might have been, it was Cubist or AMAG, the big yellow box. Yeah. That's proprietary search technology. We're coming back around to that, which is: how do I search my own set of core data? Yeah. Um, and relying on a very specific vendor approach to searching that [01:09:00] data. You'll start to see, I think, the sector of search bots reemerge, bots that are specifically designed to be purchased and put inside your system to search and find different things based on that search bot's capabilities.

You'll have search quality variability, which is the inconsistent ability to surface relevant information across different data domains. We all know that when you use an AI agent, you will never get the same result twice. And then we have search governance gaps, so, the lack of oversight in how search is implemented and who can find what.

And the more stuff you put into a thing that can be searched upon, the more you find out just how poor your access controls and governance are about the data that you put into that very thing. So I'll pause there so I can wet my whistle. Thoughts?
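A minimal sketch of why that governance gap matters, assuming each indexed item carries an allow-list so the same query returns different results for different groups; the ACL model and documents here are hypothetical.

```python
# Each indexed item carries an allow-list; search filters on it before matching.
index = [
    {"id": "doc-1", "text": "CMC batch records for program A", "allowed": {"quality", "it-admin"}},
    {"id": "doc-2", "text": "Board deck, program A timeline",   "allowed": {"exec"}},
    {"id": "doc-3", "text": "Program A onboarding checklist",   "allowed": {"all"}},
]

def search(query: str, user_groups: set[str]) -> list[str]:
    hits = []
    for item in index:
        visible = "all" in item["allowed"] or item["allowed"] & user_groups
        if visible and query.lower() in item["text"].lower():
            hits.append(item["id"])
    return hits

print(search("program a", {"quality"}))        # ['doc-1', 'doc-3']
print(search("program a", {"exec", "legal"}))  # ['doc-2', 'doc-3']
```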

Mike Crispin: I think, uh, a lot of these things that we mentioned were true in the enterprise search conundrum.

I think enterprise search never [01:10:00] really took hold at really any organization that I know of where it was really successful. Um, and I think largely because of the data governance and concerns about permissions and access. And I think that AI even exacerbates that further, because not only are permissions gonna be exposed,

um, or you can't necessarily find things because the AI is going against data that's not the source of truth, or it just hasn't been organized correctly, similar to what we'd have in an enterprise search scenario that's gone wrong. It's going to reason and find things that you now need to explain why it found them.

Yes. So when they search across a large base of data and you've got AI search going and it reasons that a certain product shouldn't be X, Y, Z, now you've gotta go figure out why the AI is saying, Hey, this product actually is faulty, or it doesn't work, or it's not going [01:11:00] to work. Or Here's some risks we didn't think of, and you've already made the decision to move forward to sell that product.

AI Trance Bot: Yeah. 

Mike Crispin: And now the AI is giving you information that you weren't expecting. It's gonna uncover things that you now may need to explain. I think that's one focus in all this that may get forgotten: AI tries to reason and give results, right? Not just show you the data. It's trying to give you the answer, and what if it gives answers that businesses don't like?

Um, whether it's because the data isn't managed correctly or because it's right, um, I think it's gonna expose things, and, uh, I don't know that companies are gonna be ready to react to that. And I'm talking about the ultimately negative situation here. Obviously you don't want something like this to happen, but, right,

You know, that, I think that's a real risk because the AI is [01:12:00] going to reason against your data. It's not just gonna search for things. It's gonna try and put that data together and put together a result in a lot of instances. And companies be, Hey, all our scientists didn't come up with that response, but our AI did someone searched it a consequence?

And now do we need to explain why our, you know, $2.5 million AI investment came out with a different answer than our clinical team. Yeah. Um, two years after a product came out. I mean, there's, that's gonna be a 

Nate McBride: question I think a lot of people ask. Yeah. I I think that's why, why is there a discrepancy? 

Mike Crispin: I'm just stacking on the enterprise search challenges I think are true.

A lot of things that have been mentioned tonight are very true in the enterprise search realm. You go up one more layer and it's the results and answers that come out from a, an AI search strategy. Yeah. That is a whole new area of, uh, examination that'll need to be required [01:13:00] when you're putting true material data on the line against an AI engine as it grows.

Will you be obligated as a company to explain why an AI gives a different answer than what you have decided as a company?

Kevin Dushney: Yeah. So think about it through the IP lens, and this is a conversation I've been having internally. There's a burden of proof of human intervention or interaction in creating the invention that goes into a patent application.

Mike Crispin: Sure. 

Kevin Dushney: Right? Including the prompts themselves, like the failures and then the prompt evolution, capturing that as part of the patent application. I mean yeah, I just found that as very profound. Yeah, 

Mike Crispin: very profound. 

Kevin Dushney: Easy trip up too. Like, oh great, I discovered this. Like prove it. Prove you did it. Not just AI found it.

Nate McBride: Let's [01:14:00] assume that in our, um, company of 2027, we've absolutely fucking killed it on all the AI stuff, and we've got this amazing search platform, and we've got federated search architectures, and we've got AI-enhanced search literacy and context-aware discovery. How do you explain all this to your brand new director of biologics you just hired, who came from a company that was completely search illiterate?

How do you train somebody to come into your world? Yeah. That you've done this work? 

Kevin Dushney: I think it needs to become part of the onboarding, right? Because even if you think about your existing employee base, the haves and have-nots, which came up today.

AI Trance Bot: Yeah. 

Kevin Dushney: Um, people are gonna be resistant to [01:15:00] AI and using it for, for anything, including search for how long?

I I think that time is limited. Mm-hmm. 

AI Trance Bot: So, you know, 

Kevin Dushney: It, to your point, it's either, you know, internally you achieve some sort of acceptable level of parity where most of your user base is now AI literate for search. But yeah, if you need to bring in somebody brand new from the outside who's not, I think that's part of your onboarding curriculum.

They need to learn that that's part of your culture and your company now.

Nate McBride: I started making a list, uh, yesterday, about the things we will need to teach incoming new staff in 2027.

Kevin Dushney: It should be on a JD. Sorry to interrupt you, but I mean, it's gonna become table stakes, right? So that's...

Nate McBride: Well, hold on though, because I'm just gonna play devil's advocate and say, if that should be on a JD,

they should also have in their JD what training they had to allow them to create anything new. Like, you shouldn't be able to create a new Word document anymore until you have proven your capability of creating a new Word document. Like, you should never be able to create... I mean, we're talking about

Kevin Dushney: search, but you're right.

It, if it's, if it's beyond that, then yes, I, I agree with you. 

Nate McBride: So let's take this a step forward. If I hire this amazing person, they've got all kinds of molecule background, they did an amazing PhD study, they've got all this great stuff. I'm hiring them as a director because of their brain. Yeah. But they are just dumb as a stump when it comes to using prompt queries and using the search the right way.

It's gonna be more than just an onboarding session. Yeah. I mean, I have to take this person aside and put 'em through an academy to get them on the same level. And, and let's broaden this. Let's broaden this whole fucking thing [01:17:00] for a second. You hire anybody in 2027 and they're gonna come into your new digitally enhanced AI world where everything runs on agents and automation, smooth as shit.

How the hell do you train somebody? Because they might actually come from a world that's using the same engine as you, but the whole world is absolutely different. Mm. It's not a difference like Outlook versus Gmail. No, this is like, oh, no, no, we don't use that kind of terminology in our search queries here.

You have to unlearn all that stuff because that was only specific to the digital narrative of your prior company. Ours is totally different. Here's a book or a video or a new agent that will teach you how to do this. 

Mike Crispin: Think about that. How do you feel, Nate, about someone who comes into the organization and they ask 12 questions as [01:18:00] opposed to one perfect question?

So they're chipping away at a, at a response. Like I'm talking from a user experience perspective. Yeah. I think if, if one company or a, a number of our peer companies come at it from an angle where you ask the question however you want, you ask the process to be done however you want, and it will get done, I, I think that's what's gonna win out.

I think the companies that try and say there's a lexicon to all the queries you need to make, there's a prompt methodology you need to follow, they'll go somewhere else. Especially as as, 

Nate McBride: Yeah, the federated search framework idea is one that I've been thinking about, and this idea that, let's say I come from company XYZ, and I'm going to start a new job at company ABC. The ideal,

Mike, if you think about this, is when I start at company ABC and I type in the same [01:19:00] prompt I would've used at XYZ, ABC says: I understand what you're trying to ask, but you need to ask it in this way. Let me ask it in this way for you. In the future, please ask it in this way. Like, it's going to train you right then and there that it understands what you were getting at.

But actually, this is how we do it. And it can just keep doing this over and over and over again. Like, you can maybe never learn and just say, I'm always gonna write it this way, and it's just gonna always retrain you. But then, of course, you have the AI HR development bot that's watching you be inept and sending notes back to your manager.

That, 

Mike Crispin: but I think you're gonna hire people based on the uniqueness of their prompts and how they communicate with ai, not on the structure in which your company operates ai. I think you're gonna hire people who have a creative element that can ask a question that's different than anyone you've hired before, because that's gonna add more diversity to the, to the kind [01:20:00] of curiosity and, and build your AI as a company, I think more so than someone who comes in and has to change the way that they prompt or ask to get an answer.

Just a little fundamental difference, I think. I wonder if that is going to persist, because I think of, um, some of these engines and how great they're getting at just translating what we want, even knowing what we want before we ask. Yeah. So that when you come into a company, if you've got the right platform or approach, we're gonna be hiring people who have queries we never thought of.

So that's, yeah. That's gonna be one of the value propositions of bringing in new people. Um, do 

AI Trance Bot: you wanna hear something over time, 

Mike Crispin: especially the people coming outta university or coming outta school Sure. Or coming from different backgrounds, or they've started their own business and now they're gonna join the private sector for big [01:21:00] company.

They're gonna bring all sorts of diversity of thought into your AI that you may not have been able to get before. So by, by putting too much, I guess, structure around it, into the respect that a question needs to be asked with these parameters in a certain way that could, that could hinder some of the, the speed, the velocity, you wanna move that?

Nate McBride: Well, I'll say two things. One, I'll tell you a secret, which is that for my new role that I'm trying to hire for, the person or people that make it to one of the later rounds of the interview process will be asked to sit down with me one-on-one and show me how they create prompts. Yep. Yeah. Yep.

Kevin Dushney: I was gonna say it's become part of the interview process, but for me,

Nate McBride: for me it will be for sure.

But Mike, let me tell you a second thought, which is: um, if you do hire somebody, and it turns out that they have a new way to leverage generative AI that you hadn't previously contemplated, and/or that [01:22:00] complements your system,

Mike Crispin: Yeah. 

Nate McBride: What would you do to your environment? And let's put that, put this in the context of ai.

Let's just say you have Cardurion, and Cardurion IT is at this amazing level of IT. And somebody you hire says, oh my God, this is so cool, and I have an idea that can improve this. They have nothing to do with IT, by the way.

Mike Crispin: Yeah. 

Nate McBride: Do you take their idea and implement it? Do you, I mean, you probably at least think about it, but do you implement it?

Sure. And I only ask this because then let's take it back down to the AI level. Someone comes in in 2027 and they have this unique way to ask prompts you hadn't previously considered. Do you adopt that into your lexicon of AI prompting? Because I think those two are almost the same thing. 

Mike Crispin: Yeah. Yeah. I would adopt it.

I would examine it and wanna work with it. But I would like people to have, um, certainly, something that no one's thought of. That's why I'm hiring them. [01:23:00] It's because we haven't thought of it. And I expect that these LLMs and tools are gonna need to be asked different questions.

I'm just a, a big believer in, and, and you guys are as well, from the prompting perspective is just, it's all about questions. And, and, and you can, I do also believe that you can start with a one sentence question and then ask a paragraph based question and then another one sentence question and get an unbelievable result.

But it's a, it's a continuous, it's a structured stream of questions that Right. You're nuancing it as you go forward. Where I think that, we talked about this last week with the big qua, the big challenge, a bigger challenge is gonna be when someone wants to bring their entire knowledge base with them.

Yes. And their history, not just their prompts, but the, the information, the actual AI component with them into a company. I think that's gonna be a bigger challenge from a [01:24:00] governance perspective than the prompting con concern. Oh my God. Yeah. Because yeah. Hey, I've had this assistant with me all my life that's got me through X, Y, and Z Right.

And helped me learn. And I've, I've accomplished all these things on my resume with this ai. Uh, I wanna, I wanna bring it to you and you can use it too, and you can get the most out of it just like I did. 

Kevin Dushney: Well, back to the conversation last week about the portability.

Mike Crispin: That's what I'm talking about. Yeah, exactly.

I think it's a conundrum for companies in the next decade.

Nate McBride: Yep. But for the autonomic response that you're going to need to maintain as an IT leader, you have to sort of say, okay, on the one hand, this person's very talented, I need to figure out a way to, um, hook their LLM up. On the other hand, that still does not mean that they know what they're doing.

You could have spent the last 10 years cur curating an LLM and creating your own, uh, AI environment that you can then port in. It could be shit though. So we have a new, we almost have like a new, um, [01:25:00] a new experience problem. 

Mike Crispin: Yeah. Yeah. And well, to ke Kevin's point about kind of adding the, the write a prompt with me, uh, or write a prompt kinda as part of the interview process.

Yeah. It may be that we end up interviewing AI. So we need to interview the AI; the company needs to interview the AI that's coming with the employee to the organization. Yeah. And that's a job right there, a new job: someone in IT or HR who is interviewing the personal assistant, interviewing the AI that's being brought into the company, to check that it is either, um, you know, unbiased, or focused in an area of interest for the company.

But I think there maybe another AI that interviews it, who knows. But I do think that there's gonna be a need to vet ai, just like we vet humans or we qualify. Right now you're talking about 

Nate McBride: Black Mirror. Yeah, 

Mike Crispin: no, I think that, I think that when, that's gonna be the only [01:26:00] way that a company can really understand these ais that are coming in is to actually have a conversation with them and, and say, yeah, I don't know.

It's, I, I, this is where I think we get into trouble because I don't know that humans will be able to do that appropriately, but, but we're ta 

Nate McBride: So what we're talking about, and this is the problem. No, no, it's not, you're fine. The problem, what you're getting at, is that search, up until recently, or now, or this moment, has been a utility.

It's been, you know, in terms of the three-frame rule: on the left side of your SaaS app you have navigation, in the middle your content, and then somewhere in the top you have search. It's been this little tiny bolt-on. Most people don't use it. They'd rather sort, or just click through 10 folders to get to where they were last time.

But search has always been this utility. Now it's not a utility anymore; it's becoming a strategic capability, probably one of the most important. It directly impacts your ability to leverage your assets, unstructured or structured or [01:27:00] otherwise. And as search evolves from, I don't know, does this keyword exist in the document, towards this AI discovery thing that people are building,

the decisions that you make, that the employees you hire make, about search architecture, tools, governance, all, I think, have pretty big implications for future autonomy. Because I cannot let an employee come into my company and just start searching the way they used to. They have to search my way, or I have to condition my search environment to allow them to search their prior way.

And then on the fly, we train them to search my way. 
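One way to picture "let them search their prior way, then train them to my way" is a thin translation layer that maps incoming phrasing to the house lexicon and echoes back the preferred form. The lexicon entries, function names, and wording here are assumptions for illustration, not a real product feature.

```python
# Hypothetical house lexicon: old-company phrasing mapped to the preferred in-house phrasing.
HOUSE_LEXICON = {
    "find the deck": "retrieve the presentation",
    "grab the latest": "show the most recent version of",
}

def translate_prompt(prompt: str):
    """Return (rewritten_prompt, coaching_hint); the hint is the on-the-fly training nudge."""
    rewritten, hint = prompt, None
    lowered = prompt.lower()
    for old, new in HOUSE_LEXICON.items():
        if old in lowered:
            rewritten = lowered.replace(old, new)
            hint = f"Here we phrase that as '{new} ...'. Try it that way next time."
    return rewritten, hint

rewritten, hint = translate_prompt("Find the deck on the biologics launch")
print(rewritten)  # retrieve the presentation on the biologics launch
print(hint)       # the coaching message shown alongside the results
```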

Kevin Dushney: Yeah. 

Nate McBride: That's the only way. Otherwise, if they come in there and they're like, wicket, whacking away on the keyboard with their old search methodology and nothing's happening, A, they're fricking useless, and B, they're probably not gonna last very long. 

Kevin Dushney: Well, there has to be a path to adopt to your way, [01:28:00] right?

Because if you, I think if you can map that transition where I come in, I'm comfortable with this, but I see what my objective is longer term, and there's a path to do that that's different than come in, learn this, or you're, you know, you're out. 

Nate McBride: Do you think at all that there's ever a possibility there might be conventions for search, for instance?

Um, we have code conventions, you know, Python, we have libraries of conventions, yeah, for things. We do a great job at giving groups of things names, like taxonomies about things. Could there be a time when we have: oh, uh, Cardurion uses the, uh, Jole Norberg search model, and so, oh, I'm familiar with that model.

I've [01:29:00] been trained on that model. Or, well, Kymera uses the SCH Swanson 1-2-3 model. Oh, I'm familiar with that model, or they use a variation of it. Like, it could become that level of... I dunno, maybe I'm just talking outta my ass, but

Mike Crispin: No, no, I, I, I think that component, Nate, that search model is going to be proprietary and ip, so I don't think it'll be shared.

So, so you have to...

Nate McBride: Every company, but every company you go into, you have to learn a new one, is what you're suggesting.

Mike Crispin: I'm saying that you're probably not gonna broadcast a standard unless you're using something standardized. I think it's a competitive advantage. The better AIs, I agree, are

not gonna be disclosing what model they use or what language. Maybe the language model, like today with the ChatGPTs and Geminis, et cetera, maybe that...

Kevin Dushney: extent, especially to the extent you've customized the crap outta some of these things that is ip. Why would you share that? 

Mike Crispin: Yeah, it's gonna be, it's gonna be ip definitely 

Kevin Dushney: everyone can [01:30:00] do it, but you, everyone's gonna have their own secret sauce that, you know, as in terms of sure.

Works well. But there's, but there's, 

Nate McBride: There's baselines today. Like, today I go ahead and build a program using some sort of, uh, coding language. Yep. The IP is what I built, but I'm using a standard library.

Kevin Dushney: standard. 

Nate McBride: So that's what I'm talking about. But I get your point. Like, if you had a library that was so good, would the library supersede the standard?

And then what would the standard be? Like, oh, they're using, again, the whatever system, but then they've gone ahead and built their own proprietary layer on that, at least to be familiar with that system of search. Yep. So you'd be able to come in, and, I know we're getting off into the weeds, but I feel like, again, this is something I wanted to talk about a lot, because this is one of the biggest things that is so overlooked. And search, for all of its greatness, is being thrown away, fucking thrown away, for AI, in my opinion, [01:31:00] evidenced by what we heard today at this vendor demo. Thrown away.

No, no, no, don't need to search anymore. Now they'll just use this AI to find shit for you. 

Kevin Dushney: Yeah. So I mean this, this kind of hits on a topic for me where there's so much expectation and hype around, you know, we'll use copilot as an example where it has access to your entire, you know, Microsoft environment and yet it's not returning things.

I had somebody today say, hey, I was trying to find an email, you know, from two years ago with this attachment, using Copilot, couldn't find it. And he's like, I found it in Outlook search. I'm like,

AI Trance Bot: yeah, 

Kevin Dushney: exactly. Yep. Like that's just frustrating, right? And 

Nate McBride: yep. So was it the prompt? Uh, the question is, so Yeah, it could, it could be the prompt.

You're right. Was it the prompt? But the fact was it, was it 

Kevin Dushney: Outlook search, which is notoriously bad. 

Nate McBride: Yeah. 

Kevin Dushney: Uh, versus, like, here's this brand new, [01:32:00] you know, natural language, I should just be able to ask this question and get it. And no. I mean, to find an email, it should not have to be that sophisticated.

It's not like you're doing deep research. So I, I get it. Yeah. The prompt could be completely, you know, or it could be written better. But if you're, if you're doing something more than searching the, the prompt quality matters more, right? Sure. And say, retrieve this, find me this document, or find me this conversation I had with Soandso, that, that should be easy.

Nate McBride: But to your point, let's suppose I say find, I talked with Kevin three months ago, find that email, but in fact, I talked to you three times. So which email is it gonna return to me? 

Kevin Dushney: Well, you need more in that case. Yes. You need more specificity. This guy was looking for a specific, 

Nate McBride: but, but my, my point is, Kevin, let's suppose I didn't, I I couldn't recall that I talked to you three times.

Yeah. I just knew that I, I knew that I talked to you at least once. So find me that email that I, I I have with [01:33:00] Kevin, it's gonna have to return all three. 

Kevin Dushney: Yeah. Gimme all three. 

Nate McBride: Right. But, but I wouldn't know to say that is my pro is like, my, my prompt would be, I talked with Kevin a couple months ago. Find, find that email for me.

Kevin Dushney: Find me interactions between me and Kevin over the last one. Right. 

Nate McBride: So that would be, I'd have to write, find me everything of, I talked to Kevin during this timeframe, which brings us back to search. 

Kevin Dushney: Was it an email? I don't even know. What mode was it? You don't remember that either, right? Right. Was it an email, was it a Slack message?

Nate McBride: So the question is, when do you search versus when do you use an agent? Then, of course, how to search for the thing that you wanna search for. It's also how to understand the library you're searching against and what its capabilities are. Yep. Um, it's all going to change. And yes, I think strongly that in 2027, perhaps sooner, to Kevin's point, that first week of orientation, it [01:34:00] won't just be, here's how you log in and here's your password.

There will be a comprehensive series of education that goes on to even get you to, like, a baseline working point. Mm-hmm. Plus, you know, attaching your agent to my mothership. All right. Yes. Last thing for tonight on this topic, and then we have a little bit of cleanup: the digital sovereignty imperative.

So the final area I wanna talk about is this wonderfully, grandiosely named digital sovereignty imperative. It's not that big of a deal, but what it really means is that the ability to maintain control over your digital assets, data, and strategic technology decisions is becoming a major focus for governments around the world.

And this goes all the way from the individual to the, uh, state, country, region level. Uh, look at the EU's Gaia-X initiative, China's focus on digital technology development, and the US CHIPS [01:35:00] Act, um, soon to probably be repealed. All reflect growing concerns about technology dependency and its implications.

So this has direct implications for IT leaders and their autonomy, because not only do you have to deal with your changing search framework and the fact that Mike Crispin's gonna walk into your company one day with his own LLM, but there's gonna be local, state, federal, countrywide implications for every decision you make, creating a whole new level of constraints and opportunities around your autonomy.

So we are all used to GDPR, DORA, the EU AI Act, and these other CPAs that are happening across the country. Um, and these all change your data requirements, limit technology options, increase compliance complexity, yada, yada, yada. Okay. Yeah. I think there are now 22 [01:36:00] states with active or pending legislation for consumer privacy acts, which changes everything about your data.

For a company like Box today to suggest that you can just have fucking unlimited unstructured data and we'll find it for you means that you are going in the wrong direction. You need to have structure, you need to have ontology, because when that customer comes along and says, remove all of my shit, you better have it in a place that's called the Kevin Dushney folder.

Yeah, you delete all of Kevin Dushney's shit. You should not go to an AI agent to do that. Or you could, if you had an amazing AI that could prove it and create an attestation agreement. But even still,
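To make the structure-and-ontology point concrete: if records carry a data-subject tag at ingest, a removal request becomes a simple filter rather than an AI scavenger hunt. The metadata fields and paths below are invented for illustration.

```python
# If records are tagged with a data-subject ID at ingest, a removal request is just a filter.
records = [
    {"path": "/hr/offer-letter-2023.pdf", "data_subject": "kevin.dushney"},
    {"path": "/projects/ai-rollout/plan.docx", "data_subject": None},
    {"path": "/hr/review-2024.pdf", "data_subject": "kevin.dushney"},
]

def deletion_manifest(subject: str) -> list[str]:
    """Everything tied to the subject, ready for deletion and attestation."""
    return [r["path"] for r in records if r["data_subject"] == subject]

print(deletion_manifest("kevin.dushney"))
# ['/hr/offer-letter-2023.pdf', '/hr/review-2024.pdf']
```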

Kevin Dushney: It was funny, there was a bit of a paradox today, where, you know, the vendor, this large vendor, was almost encouraging just blatant laziness: just throw it in, the AI will take care of it. And the panel is like, no, the metadata is crucial.

Which I agree with. [01:37:00] Right? Yeah. So 

Nate McBride: a hundred percent, 

Kevin Dushney: yeah, I was, 

Nate McBride: I was shocked by that statement. 

Kevin Dushney: Yes. 

Nate McBride: Oh, you don't need to structure it, just ingest all the shit and we'll take care of it for you. That is terrible reasoning.

Mike Crispin: Yeah. Along with the AI sometimes makes mistakes. Yeah. Hey, importing millions of documents, if it makes seven mistakes, how do I know which seven are wrong?

Kevin Dushney: Exactly. And you're relying on that, and it's like, okay. You know, it's like, I, I literally did a session on this today, and if you're not an SME,

how are you? 

Nate McBride: Right. Well, this is the, my higher, 

Kevin Dushney: But I'm not the subject matter expert, so off I send it. It looks good to me. Then someone says, hey, this is wrong, and they may not find out until downstream.

Nate McBride: Yeah. Well, it's good. I mean, it's got all these big words in it. [01:38:00]

Kevin Dushney: Especially when you're, like, you know... everyone is just so jammed for time. Yeah. It's like, oh man, this looks close enough. Send.

Nate McBride: You know, and, well, every one of these vendors that's, uh, increasing their generative AI fabric on LLMs is also simultaneously increasing the rate at which, uh, generative AI slop can be put back into the LLM. And it's going to be this wonderful giant pile of turds in the next two years, after everyone's done using generative AI to create data that goes right back into their LLM without any QC. I just, I mean, is,

I, I just, I mean, is, 

Kevin Dushney: is this an emergence of a whole new department in the organization? It's like a hallucination adjudication team. 

Nate McBride: Yeah, your RLHF team that spends all their time samples, 

Kevin Dushney: I can submit samples free to fact check them. 

Mike Crispin: Yeah, I'm, I mean, they did say [01:39:00] that AI is gonna create some new jobs, right?

Kevin Dushney: Well, yes. Now I'm kind of wondering what those are,

Mike Crispin: right? Hey, these new jobs will be to check the ai, like you're saying, like the adjudication panel, you know, that's gonna look and make sure all this data is good. I don't think that at all. I mean, I 

Nate McBride: agree. If your company's future revenue depends on the validity of a document, you are not going to allow that to only go through AI.

You are sure as shit going to have human eyes on that entire thing, uh, reviewing it. And the same goes for any RFP, any CDA. Because if I send Kevin a CDA, and he goes into that CDA and he's like, fuck this, you cannot indemnify yourself in Delaware, I'm a Massachusetts company, I'm gonna go ahead and strike this line, and it comes back, and the AI's like, looks good.

Send it for signature, send it, send it set. I was like, [01:40:00] 

Mike Crispin: no, no, I didn't send it. I didn't send it. The AI agent sent it. So, yeah. 

Nate McBride: Yeah, exactly. You go from no humans to four humans in like a split second. So 

Kevin Dushney: your performance review, it's like, well, I didn't do that. The, yeah, no, the agent did that. I have no idea.

Mike Crispin: So what do you say you do here? 

Nate McBride: I'm a people person. So for IT leaders, for IT leaders like us and yourself: if you took nothing else away from anything we've said in any one of these episode nine parts, and if you're concerned with retaining autonomy at this level, the highest level, you have to do a couple things.

One, monitor what's going on in your part of the world. Okay? You should monitor evolving digital sovereignty in all the jurisdictions that are relevant to you. If you have a single [01:41:00] customer or employee in the EU, you had better understand DORA. If you have people that are consuming things that your company sells in California, or 21 other states.

Yeah, every single one of those CPAs, 

Kevin Dushney: that list is growing. 

Nate McBride: Yeah. Yep. And sovereignty-aware architecture: you better design your systems to be able to adapt. If you go with a single vendor that's locked in, and then somebody who's gonna change the game comes in and says, I can't work with that, you've got a tough choice to make.

Not only that, but everyone's going to change, fucking hell, in the next two years. The whole thing, we know, is going to be different. You need data governance alignment. Data governance must be flexible enough to adapt to any regional requirement, even as they change. You need to be able to evaluate vendors on their capabilities, but also on their ability to support your requirements.

So yeah, you can have a vendor that's [01:42:00] awesome, you love them, they're so cool, oh my god, their UI is so amazing. But they don't support data sovereignty in California or the EU. That's a problem. And you can't have them. Or you can have them, but you have to have an alternative to them for those other jurisdictions.

And lastly, strategic alignment. Ensuring your technology strategy aligns with your organization's approach to the geopolitical challenges that might face you. Now, I happen to be fortunate enough to work in a company that doesn't have geopolitical challenges. Today we have a single employee in Scotland, but that's really the end of our geopolitical, uh, fabric.

Kevin Dushney: But what about, what about vendors, suppliers, CROs, CMOs, et cetera. 

Nate McBride: I also work at a company that has vendors and suppliers in countries that have been deemed, um, enemies of the state. Mm-hmm. Unsavory. [01:43:00] Unsavory. And so all of this plays a role now. Yeah. So the key insight here is that for your digital sovereignty, you have to look a little bit beyond yourself and the people around you screaming for this, that, and the other thing around AI or whatever. You have to actually look all the way up to the biggest possible macro level.

And if you can do that, you'll have a significant advantage over those that just treat it as some sort of annoyance or minor constraint.

Kevin Dushney: Yep. Or novelty. Yeah. 

Nate McBride: Or novelty. 

Kevin Dushney: Yeah, 

Nate McBride: I agree. So at the end of the day, uh, and this was a long road to get through, but for these three parts: if you're an IT leader listening to this, or somehow someone told you to listen to it and you're not an IT leader, either way, if you're just like, holy shit, that was such an amazing podcast and this all sounds so important to me, but where the hell do I start?

Uh, fear not. Fear not, I know where you start. I know where you start. I have proof. Kevin's got an answer. Get a bottle of bourbon. No. Okay. So first of all, uh, jumping ahead, but yeah, do an exposure assessment, okay? Baseline. And this is like literally anything you do, ever. Just do a fucking delta.

Here's where we are today. Here's where we want to go. Here's the things we have to do to get there. No different. Do an exposure assessment. Map your exposure. Where are AI, quantum, edge, and other emerging techs, uh, showing up in your business? Which are strategic, which are tactical? Uh, analyze all your autonomy vulnerabilities.

Make a big giant list: here's all my vulnerabilities across all my autonomies. It won't be a small list. Where are my new dependencies cropping up? What are my costs to switch? Where are my knowledge gaps? And then identify new opportunities. Where could emerging tech actually increase [01:45:00] autonomy? And where can you reduce vendor lock-in or build strategic differentiation?

You also want to do a vulnerability analysis, which isn't necessarily an exposure assessment; it's a little bit different. You wanna figure out, from an autonomy perspective, okay, can I update my governance? You can. I'm telling you that right now; that's a sort of rhetorical question. You can update your governance to address AI risk.

Continuous learning oversight. You can transform your architecture: design it for modularity, get outta your lock-in environment, start building for flexibility. Um, cross-vendor integration will be a must, especially when it comes to, um, bringing in these external agents. Prioritize your talent and development.

So build autonomy-critical skills across everyone in your department. Accelerate learning. And then lastly, refresh your vendor strategy. Your vendors are cool, but are they so cool that they'll stand by you when shit changes? Multi-vendor approaches, contractual flexibility, and competitive leverage will be key. [01:46:00]

Um, and then your final roadmap. So after you've done all this, you've mapped out your opportunities. Where are your quick wins? Find, find and act on your low hanging fruit. Get those quick wins done. Um, only the ones that of course focus on increasing autonomy. Uh, what are your strategic initiatives? So invest in longer term shifts.

Architecture, governance, and skills. These are not quick win opportunities. These are long win opportunities. You invest in these, you write out plans and you chip away at them as you go. And of course, they, you're flexible enough to change and adapt. You have fire drills. What if your AI vendor disappeared tomorrow?

What if they fucked up so badly and they leaked all your information that you had to switch? Could you pivot? Run that scenario. And then lastly, trigger-based responses. Define every single if-this-then-that plan. Yep. Any major tech or market shifts. This is the epitome of war gaming. [01:47:00] It's the holy grail of this whole emerging world.

And you should be doing this while you're simultaneously adopting all this new and clever, um, stuff. After you've done all these assessments, you have this wonderful thing in your hand. Evolve your governance; make a plan to do that. Transform your architecture. Develop your talent: look at your team, they're ready to be developed. And then go with the vendor strategy.

Mm-hmm. These are things that you all need to do, and I don't know if you guys wanna add to that, those lists, but like, that's how you get ready for what's coming around the corner tomorrow, A week from now, two years from now.
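A bare-bones sketch of the trigger-based, if-this-then-that idea mentioned above: every playbook lives in one table, and a fire drill is just invoking a trigger. The triggers and steps are made up, not a recommended runbook.

```python
# Hypothetical trigger -> playbook map; a fire drill is looking one up and walking the steps.
PLAYBOOKS = {
    "ai_vendor_shutdown": [
        "freeze new prompts to the affected platform",
        "export prompt libraries and usage logs",
        "cut over to the secondary model provider",
    ],
    "major_data_leak_at_vendor": [
        "revoke API tokens and SSO app access",
        "notify legal and privacy officers",
        "start the regional breach-notification clock",
    ],
}

def run_fire_drill(trigger: str) -> None:
    steps = PLAYBOOKS.get(trigger)
    if steps is None:
        print(f"No playbook for '{trigger}'. That gap is the finding.")
        return
    for i, step in enumerate(steps, start=1):
        print(f"{i}. {step}")

run_fire_drill("ai_vendor_shutdown")
```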

So, yeah, go ahead. 

Kevin Dushney: No, I was gonna agree. [01:48:00] I think, you know, right now it's, it's, it's like the pace of change that is the most troubling part of all this is, you know, 

AI Trance Bot: yeah. 

Kevin Dushney: You anchor on something, and a month later it's like, oh crap, significant adjustment. I mean, you're just, like, reacting and reacting and reacting constantly versus, you know,

Nate McBride: I think, I think all of us have, have made some, some, some me medium term or near term investments in ai.

I've signed up with a, I signed up with a vendor for a one year agreement. Um, Kevin, I think you did too, Mike. I believe you're in the same boat. Yeah. 

AI Trance Bot: Yeah. 

Nate McBride: Um, I don't think, I think a year is too long. I mean, you can't get shorter than that unless you pay month by month. But, um, I think a year from now, whatever I'm doing today will be drastically different.

Kevin Dushney: Yeah. I mean, I, I said that today, I mean, we, we pursued a two tool strategy, which, you know, which is an [01:49:00] investment, but you know, if you juxtapose that with the potential efficiency and time savings, it's, the investment pales in comparison. But I, I told people in, in, in my org, like, who knows what this looks like by the end of the year?

Nate McBride: Yeah. 

Kevin Dushney: To be completely changed what I'm telling you today, and these, you know, the tools we're using today be like, you know, what we're changing in November, you know? Yeah. Because the, the game has changed. The, the leapfrogging, the, you know, specifically with copilot, does it fall so far behind chat GBT even more, more so than it is now that it's just t and the, the ability to access your, your office documents is just not worth it.

So, 

AI Trance Bot: yeah. 

Kevin Dushney: And, and, and does Open or other vendors build, you know, graph based connections into office such that copilot is just obsolete? Who knows? [01:50:00] 

Nate McBride: And we're also a two-vendor company, and it's interesting that you and I are both two-platform companies. I wonder if we may find ourselves in a position

next year where we have to become a five-platform company or a ten-platform company.

Kevin Dushney: I hope not. 'cause it's hard enough to, to introduce two tools. 

Nate McBride: Yeah, no, I understand 

Kevin Dushney: that. And that held us up, you know, and to 

Nate McBride: explain which is for what purpose. Yeah. I mean that's no, 

Kevin Dushney: and, and, and, and it's hard to explain that because what I think is best for what purpose is gonna be different than you or you 

Nate McBride: Sure.

Kevin Dushney: These are optional tools. So you're gonna find that, hey, Copilot's good at this, but ChatGPT is better at that.

Nate McBride: Yeah. 

Kevin Dushney: You know, or Gemini is good at this, but I really like Claude for writing, you know? 

Nate McBride: Yeah. Well, and we're talking about, uh, engines. We're not even talking about the SaaS apps surrounding them, like the D script and pipe.

Yeah, no, we're just talking and all those other ones that are also using the engine. I mean stuff, 

Kevin Dushney: not, not the, the [01:51:00] stuff that's baked into the platforms, like the Right, right. Dlms and boxes and, you know, everything else that's emerging these days. Yeah. 

Nate McBride: Yep. Mike, what do you think? 

Mike Crispin: I think it's early, and I think there's a need for some sort of brokerage or aggregator service that sits in front of all this stuff.

I mean, I know Perplexity and Poe and others, uh, have put pieces of this together. And I think of the emergence of kinda seeing where the phone is going, that the phone is the front end for so many things on the consumer side. Yeah. Being that you'll just type into a box, you know, a little prompt, or ask it with your voice.

I think we're not there yet in the enterprise, but that's where we're going. So you can have as many LLMs in the back end, yeah, as you want, whatever. Whatever's asked, it's gonna query whichever of the 10 or 12 vendors you have as your AI engines in the [01:52:00] backend. There's gonna be a way to aggregate them at the front, and we're not there today.

Yeah. And I think that before that, it's still gonna be an experiment, until you get to that unified UI that everyone can use. Probably voice or, or text. Um, yeah. Or, you know, it could be any number of things. Um, but as the vision, that's where we get to.

That's why I argue that, like, the strategic element, some of these things, it's good to start to build these. But, yeah, you mentioned that you'll be somewhere else a year from now. I completely agree. I think we all will. Um, hopefully we're at a place where some of that heavy lifting is done for us, and someone has put a front end on these things so that we can obfuscate the technical stuff from the end user.

And it's just, yeah. Right. I agree with you. Far enough to know what to use 

Kevin Dushney: is, is there, is there an enterprise abstraction layer, you know, so like [01:53:00] per the last episode of this where vendor lock in and you're like, crap, you've built all these integrations on one platform, and now it's like, crap, I need to port that all over.

Is there an abstraction layer like that, that lets you just say, you know what, here's the model I'm going to use, and it helps learn what's best for what task, and the employee doesn't need to think about that?

AI Trance Bot: Yeah. 

Mike Crispin: It's probably some sort of enterprise agent or assistant that sits in front of all of the AI.

Kevin Dushney: Yeah. And it's making a decision of, like, here's what I'm being asked to do, and I know best now which model to pick. Oh, look at that. Um, you know, that was AI right there. Right. 
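
Kevin's abstraction layer could start as something this thin: a routing policy that maps a task type to whichever model the company currently prefers, so swapping vendors is a config change rather than a re-integration (the "learning what's best" piece would sit on top of it). The task labels and model names below are illustrative assumptions, not real product identifiers:

```python
# Hypothetical enterprise abstraction layer: the employee states a task,
# the broker decides which model handles it. The routing table is pure
# configuration, so changing vendors means editing one dict, not rebuilding integrations.
from dataclasses import dataclass, field


@dataclass
class ModelBroker:
    # Task-to-model policy; labels and model names are illustrative only.
    routes: dict[str, str] = field(default_factory=lambda: {
        "writing": "claude-like-model",
        "spreadsheets": "copilot-like-model",
        "search": "gemini-like-model",
    })
    default_model: str = "general-purpose-model"

    def pick_model(self, task: str) -> str:
        """Return the configured model for a task, falling back to the default."""
        return self.routes.get(task, self.default_model)


broker = ModelBroker()
print(broker.pick_model("writing"))       # claude-like-model
print(broker.pick_model("unknown-task"))  # general-purpose-model
```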

Nate McBride: Okay. Well, that was, that was, first of all, awesome. Thank you both for all the insights over these three parts. Um, we left a lot of things on the table, but in a good way, I think.

AI Trance Bot: Yep. 

Nate McBride: A lot of, definitely, food for thought out [01:54:00] there. I think, hopefully, everyone who's listening to this is taking away that, if you're not ready now, you should already be thinking about the next two years, three years of what you're going to do for all of this, and, and sort of maintain and be ready for that flexible approach.

Next week we're gonna talk about a very particular part of tonight's episode. We're gonna focus in on the sort of regulatory landscape. Mm-hmm. And I'm not talking about just life science, I'm talking about it in general, and the compliance requirements. We're talking about risk-related autonomy challenges. Yeah.

So, the evolution of regulatory compliance in a borderless digital world, uh, managing risk when everything becomes as-a-service, and the balance between cybersecurity, automation, and human judgment. Talking about innovation-driven autonomy considerations, let's talk about vendor and partner relationships a little bit more, but more on the procurement and contracting side.

Then we'll talk about financial and resource [01:55:00] management, so managing autonomy when everything becomes pay-as-you-go. And this last idea, and Kevin, you are welcome back next week to talk about this. This last idea is perhaps the most important of this entire episode next week, which is tokens. Ah, yeah.

So the world, the world is moving towards a token exchange. Yep. And your ability to understand the very thing that we've just spent the last few episodes talking about has a limit. It's finite, you know? Mm-hmm. In Vox, you are given, I think, 6,000 tokens per user. Maybe per day. I forget what it is, but that's not a lot.

It's per day. Per day. So, so we've talked about all this great fucking future shit, you know, like, oh my God, the world's gonna change, et cetera. It doesn't matter, 'cause underneath it all, there's a throttle. Mm-hmm. We're gonna, we're gonna [01:56:00] talk about this one idea of the token. Um, I'll try to keep the technical part of the token stuff very limited so we can just get through that.

But what I want to get to is this, this moment. Um, and yeah, that's next.
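
The throttle underneath it all is easy to picture as a per-user, per-day token budget; the sketch below is hypothetical, and the 6,000 figure simply mirrors the number quoted above:

```python
# Hypothetical per-user daily token budget: every request spends tokens,
# and once today's allowance is gone the request is refused until tomorrow.
from collections import defaultdict
from datetime import date


class TokenBudget:
    def __init__(self, daily_limit: int = 6000) -> None:
        self.daily_limit = daily_limit
        self._spent: dict[tuple[str, date], int] = defaultdict(int)

    def try_spend(self, user: str, tokens: int) -> bool:
        """Record the spend and return True, or return False if it would exceed today's limit."""
        key = (user, date.today())
        if self._spent[key] + tokens > self.daily_limit:
            return False
        self._spent[key] += tokens
        return True


budget = TokenBudget(daily_limit=6000)
print(budget.try_spend("nate", 5500))  # True: within today's allowance
print(budget.try_spend("nate", 1000))  # False: would blow past 6,000 for the day
```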

Kevin Dushney: I think the token is just currency. That might set the stage for the conversation. 

Nate McBride: Yeah. And how long does that last? And when does the token become something that... right now, people get tokens for free.

Mike Crispin: That's right. 

Nate McBride: Now I don't wanna get all Ray Wang on you and Industry 5.0 on you right now.

We can do that next week, but guess what? Tokens won't always be free. And certain types of tokens will start to have different types of values, like $1 bills, $5 bills, and $10 bills. You'll get $1 tokens, $5 tokens, and $10 tokens for different levels of access. You will get [01:57:00] different tiers of access based on your expertise and skill level, how much you use a thing, how much you use a platform, and on and on and on.

We will Black Mirror the shit out of that. Um, we'll also explore how, like, DORA, GDPR, and others impact IT autonomy issues, uh, and compliance, and turn it from a constraint into a digital asset. With that, I wanna thank Kevin Dushney, Mike Crispin, 

AI Trance Bot: of course. 

Nate McBride: Uh, I wanna remind everyone that if you like our show, and why wouldn't you?

It's fucking awesome. Give us all the stars on all the platforms. Uh, buy us a beer, the link is in our show description. Um, visit our merch store, buy our stuff, donate to Wikimedia, donate to the ACLU. Don't be a dick. Especially, don't be a dick to the hardworking IT people in your, in your company. Be cool to them.

They'll get paid back in spades. They will take care of you faster. [01:58:00] Trust me. Be nice to animals. Be nice to old people. Explain AI to old people. It'll pay off karmically. And finally, maintain your autonomy while you still have it. It's harder than ever in this emerging tech landscape, but more important than ever to keep an eye on it.

Any final thoughts from you guys? 

Kevin Dushney: I, the only other plug would be: buy Mike a ceiling. 

Nate McBride: Yes. We're, we're currently taking collections for the Mike's Basement Ceiling Concern,

Kevin Dushney: Health and wellbeing. Uh, given that, that sag that's, uh, emerging there, and, uh, 

Nate McBride: You know, yes, the internet speculation abounds about what Mike's keeping up in that ceiling.

Um, and why it isn't, in fact, leaking. Is it a dead body? Is it multiple dead bodies? Is it, uh, gold bullion? Is it, uh, leftover soup? We don't know.

Mike Crispin: But it was strictly, strictly [01:59:00] put in by design to add character to the basement.

Kevin Dushney: Add character. I think it's bales of ramen as part of his doomsday prepping. Mike's going to be...

Mike Crispin: It's not McDonald's and Kentucky Fried Chicken wrappers and stuff up there.

Nate McBride: Mike's gonna be our on-the-spot reporter next week while he is in New Orleans at Jazz Fest. So Mike, we expect some on-the-spot reporting on AI in the... yes, okay... hear about how, how, hell yeah, how is AI playing a role in Jazz Fest? Okay. That is your next week on that.

Kevin Dushney: AI calculated exactly how many hurricanes I can consume without being incapacitated.

Nate McBride: Well, Mike actually already used some AI for Jazz Fest, which we could talk about next week. But I want to hear, Mike, your entire AI journey while you're at Jazz Fest this year. Okay. 

Mike Crispin: I, I will, I'll do my best. Uh, there's a lot of, I'm sure AI that's going into the, actually, I can't think of much, but we'll find [02:00:00] something.

Nate McBride: Okay. All right. Well, all right. Feel free to bring back an on-the-spot report. There'll be 

AI Trance Bot: tokens. 

Nate McBride: Yeah. Get some tokens, some New Orleans tokens, uh, and then bring them back. And then you can, uh, illuminate your search. Special New Orleans tokens. Special New Orleans tokens. All right. They're called beads. Beads, yes. All right, gents.

It's been a pleasure. All right, guys. See each other next week. All right. Good. Mike, have fun. Thank you. Thank you both.

Mike Crispin: Good to see you again. 

Kevin Dushney: Goodnight, guys. Alright. Cheers. Bye. 

AI Trance Bot: Binary whisper flashing screens that glow so bright in the, we take flight within our side,[02:01:00] 

the

through the cyber paths. We glide in the circuits we fight. No restraints, no need to hide in the system. We reside.[02:02:00] 

The code in the

We Control

whisper in the night flashing Bright in [02:03:00] the.
