
The Calculus of IT
An exploration into the intricacies of creating, leading, and surviving IT in a corporation. Every week, Mike and I discuss new ways of thinking about the problems that impact IT Leaders. Additionally, we will explore today's technological advances and keep it in a fun, easy-listening format while having a few cocktails with friends. Stay current on all Calculus of IT happenings by visiting our website: www.thecoit.us. To watch the podcast recordings, visit our YouTube page at https://www.youtube.com/@thecalculusofit.
Calculus of IT - Season 2 Episode 9 - Part 1 - Emerging Technologies and Their Impact on Autonomy
Last night, Mike, Kevin, and I pivoted the podcast from discussions of what is happening in autonomy TODAY and started taking a look at the impact of autonomy on the future of IT Leadership. How will future IT Leaders grapple with the murk of a vendor and technology landscape that is effectively erasing its borders? This episode was Part 1 of our two-part mini-series on emerging technologies and their impact on IT Leadership and autonomy.
In this episode, we tackled:
The Autonomy Paradox of AI Integration – How AI can both empower and trap you in new dependencies.
Vendor Relationship Evolution – From traditional lock-in to AI model lock-in, what strategies can IT leaders use to maintain leverage in an AI-driven world?
Risk Management Transformation – Why traditional risk governance models are no longer enough, and how IT leaders can manage the new risks posed by AI.
As always, we didn’t hold back. From AI-powered decision-making to the real challenges of balancing speed and control, this episode is a wealth of practical insights for IT leaders navigating the chaos of today’s emerging tech.
And guess what? This is just Part 1. In Part 2 next week, we’ll put AI aside for a bit and explore:
Quantum computing and its implications for encryption and IT security.
Edge computing and its role in reshaping IT architecture.
The critical importance of building expertise in a world where technology cycles are accelerating faster than ever.
Listen to Part 1 now to get ahead of the curve—and don’t forget to subscribe so you’re ready for Part 2 next week!
And while you are waiting for next week's episode, play this fun game at home... Take ANY article/story/Twitter-LinkedIn-Bluesky post about the amazingly transformative benefits of AI and then replace "AI" with "Basic Process Improvements". You will find that the articles are still completely true. But anyway...
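For the programmatically inclined, the game above is a one-line text substitution. Here's a tongue-in-cheek Python sketch (the function name and sample headline are ours, not from the show):

```python
import re

def play_the_game(text: str) -> str:
    # Swap every standalone mention of "AI" for the podcast's suggested phrase.
    # The \b word boundaries keep words like "SAID" or "AIR" untouched.
    return re.sub(r"\bAI\b", "Basic Process Improvements", text)

headline = "AI will transform how your team ships software"
print(play_the_game(headline))
# → Basic Process Improvements will transform how your team ships software
```

Per the show's thesis, the headline still reads as true after the swap.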
The Calculus of IT website - https://www.thecoit.us
"The New IT Leader's Survival Guide" Book - https://www.longwalk.consulting/library
"The Calculus of IT" Book - https://www.longwalk.consulting/library
The COIT Merchandise Store - https://thecoit.myspreadshop.com
Donate to Wikimedia - https://donate.wikimedia.org/wiki/Ways_to_Give
Buy us a Beer!! - https://www.buymeacoffee.com/thecalculusofit
Youtube - @thecalculusofit
Slack - Invite Link
Email - nate@thecoit.us
Email - mike@thecoit.us
Nate McBride: Hello? Hello? Hello. [00:01:00] How's it going? Good man. How you doing?
Mike Crispin: Pretty good. Pretty good.
Hey, I was listening to the last episode and it sounded like for some reason, for the first like 15 minutes I had just had Novocaine or something. Did you notice that? Yes.
Nate McBride: You did. Yes. But I wasn't gonna hurt your, hurt your feelings.
Mike Crispin: I was like.
I was like, what is, what happened? I don't know. Maybe it was the microphone or my tongue was swollen or something. I don't know.
Nate McBride: But maybe, were you, were you chewing tobacco?
Mike Crispin: No. No. I, I wasn't, I don't know what happened. I, um, I didn't even, maybe it was some sort of like brain thing going on or something. I don't know.
Were you, were you in a trance, Mike? Oh yeah. I was in a trance. That's, that actually might have been what it was [00:02:00] from all your, your intro. I think I was, I was hypnotized, mesmerized.
It was, uh, I listened to a great Paul van Dyk set, uh, a couple nights ago. That was great. From Cream, another Cream one like that, 2003, uh, an Abi closeout.
Nate McBride: Yeah. Paul van Dyk, Cream, '99 to 2003. Like Tasha Space, all those shows.
Mike Crispin: Yeah.
Nate McBride: Fantastic.
Mike Crispin: But it was three hours and 45 minutes. It was freaking fantastic.
Nate McBride: You gotta go back and listen to the old, the old Carl Cox, like Carl Cox, uh, the Carl Cox record, 2001. Fuck. That shit, man, is so good. It's just, uh,
Mike Crispin: any one of. Remember we saw him at Access a number of times. Yes. He would be in Boston a lot.
Nate McBride: He had kind of Austin [00:03:00]
Mike Crispin: and there's still some, I think Paul Van Dyke has a new record out and he's gonna be around.
Probably won't, he'll probably just be in New York or somewhere, but, uh, well,
Nate McBride: I mean, Oakenfold retired a couple years back. Yeah. Yep. He's still producing, but he retired from DJing.
Kevin Dushney: Where'd you guys go? Uh, Copper
Nate McBride: House in, um, Waltham. Nice. I love, I love that bar because, I don't know, it's been around for a decade maybe or so.
And they still haven't cleaned the bar top. No. You put your arms down and you lose skin when you pull 'em off. It's my kind of, my kind of place. I want my arms to stick to the bar. That's the gnarliest. Okay.
Kevin Dushney: Alright, here we go. It's the, uh, exfoliating. It's good for the skin.
Nate McBride: Yes. While you're drinking exfoliate too.
Welcome back to the, [00:04:00] uh, British Musicians' Conspiracy Podcast. Uh, tonight we'll be asking who and when and how is Ed Sullivan involved in this whole thing? So, um, it's a jam-packed episode. We've brought on our expert on all things Beatles, Kevin Dushney, um, oh God. We'll be trying to get something out of Mike that doesn't sound completely like he wasn't already involved tonight.
Um, he continues to deny everything. I mean, it's just a convenient string of, you know, denials. Mike completely denies it a hundred percent. Okay.
Mike Crispin: I'm, I'm, I'm new here. You're new here.
Nate McBride: Mike's new here.
Mike Crispin: This iteration of me is, uh, new. This digital twin that's being presented tonight.
Nate McBride: This is Mike's digital twin.
Uh, actually, this, I, I, I lied. This is the Calculus of IT podcast. Sorry if I threw anybody for a loop there. And yeah. [00:05:00] Welcome to episode nine of season two. And this is, by the way, the future home of the AI sad salad, 'cause we're not doing sad salads in today's time. We're doing AI sad salads in future time.
Wow. Just go, go, go over to your favorite little Jedi machine and just type in "AI sad salad." And there you are. You've got yourself a sad salad. Hmm. We are also going to be AI AF this week, uh, times 10. Very AF on the AI tonight. It's been a little while since we've been so AI AF, but a lot of AF. With me, as always, is the indomitable Mike Crispin, uh, looking once again to completely trash his, um, reputation is Kevin Dushney, and I'm Nate McBride, and we are coming to you actually live from the future.
We're broadcasting to you from three days from now. And you don't even know how we did it. [00:06:00] It's about gravity, man. Gravity now, dude. I just went into ChatGPT and I had it write a script for me.
Mike Crispin: That's all you gotta do. As
Nate McBride: soon as I hit, as soon as I hit enter, it was, it was Friday. How about that?
Mike Crispin: Amazing.
That's how much time it saved you. It actually put you into the past.
Nate McBride: Uh, yes. All of us into the past. Well, future, past. True. Um, well, I mean, I think it goes without saying, but if you missed last week, last week's episode, what the fuck is wrong with you? Um,
we talked about every time,
Kevin Dushney: Hey, I, I knew enough to, to walk in and see Nate playing Orna on the iPhone. I'm like, is that Orna? Yeah, yeah. I do. I'm in,
Mike Crispin: man, I'm in. I listen, I was, I
Nate McBride: I, I was fighting those guys. There's, there's a couple dudes that have set up shop on the, uh, corner of 128 there [00:07:00] and, uh, uh, Winter Street.
Mike Crispin: Yep.
Nate McBride: And oh man, they're hard to beat, but gotta take my shot at the title.
Mike Crispin: What, what level are you at right now?
Nate McBride: I'm level 242 Ascension level 22.
Mike Crispin: Geez. Think I'm level, I'm level 40 something 59. Oh, it's not bad. You're moving. I'm moving. I'm getting there, man. Good. Is this something I
Kevin Dushney: wanna get sucked
Nate McBride: into or? Yeah, it's awesome.
Oh yes. Oh, absolutely. Great. The
Mike Crispin: goblin Lord. I got a goblin Lord up on the screen right now.
Nate McBride: Yeah, it's goblin lords, man. They're hard to beat. So, so yeah, if you want to, if you want to jump on Orna, anybody, and join, uh, Mike and I's, um, kingdom, we're still accepting players. Uh, you can't be a hack though. You gotta be actually pretty decent with the thumbs, if you know what I mean.
So, last, last week, [00:08:00] besides the
Kevin Dushney: world of Warcraft skills
Nate McBride: Yeah, exactly. Last week, besides, uh, Orna discussions, we talked about building resilience to independence, um, and effective change management, which is critically important in today's vendor-dominated landscape, which is kind of, sort of, repetitive and redundant.
The vendor-dominated landscape doesn't even exist, to a degree. It's just all one vendor at this point: AI, the AI vendor. We talked about the resilient orchestrator, so the IT leader who never panics when vendors announce major changes. We covered, um, technical, uh, knowledge and vendor independence. Uh, seriously, go listen to the episode.
Stop what you're doing, just pause. You can come back to this one. It'll be there, and your future self will thank you. And this is coming from a future Nate, who will tell you that your future self... how am I having knowledge of the future? You're actually gonna come to me in three [00:09:00] days and tell me how much you love this.
I just heard you tell me today, on Friday. So this week we're gonna dive, we're gonna dive into something, well, all these episodes are equally critical, but more forward-looking than that. Finally, we've made it through all the background and explaining and talking about how all the things build and work and stuff.
And now we're gonna look to the future. And it's a future that's not so great in a lot of cases, where AI is embedded into everything. Um, I was watching the end of the Masters, I think I told you this, Kevin.
Kevin Dushney: Yep.
Nate McBride: And I was just waiting for Rory to implode, like, just screaming at the TV. Missed that putt. And he did.
And then he won, finally. But, um, I kept seeing those, those fucking IBM commercials and I was like, you gotta be kidding me. These commercials make no point. But you know what they're doing: there're CFOs out there going, holy shit, we gotta, we gotta, we gotta use AI. [00:10:00] Um, so in a world where AI is being embedded into everything, quantum computing threatens our encryption,
and edge computing is reshaping our architecture decisions, how do IT leaders like you maintain their autonomy while leveraging these game-changing technologies? Before we get into that, though, there were two things I wanted to point out. One was, did you see the 404 Media article about the old people calling service?
Kevin Dushney: Love it. Yes, I
Nate McBride: did see that. Okay. This was fucking legit. So, just for the audience, if you didn't see this article, there's a new AI startup called InTouch. And what it does is so bad. This is, this is literally, this is where AI is going. Okay. So if you were wondering how, like, WALL-E would come to the world, here you go.
So the service uses an AI-generated voice to call your grandparents to talk about how their day is going, their hobbies, [00:11:00] how they're feeling. And then an AI-generated summary is sent to the child and includes a visual indicator of their state of mind, such as bad mood or neutral mood.
So the, the, the 404 Media people add their own sort of editorialization. They say, obviously, the idea of having an AI call your lonely relative 'cause you can't or don't feel like it is dystopian, insulting, and especially non-human, even more so than other AI-based creations. And I'd argue with that statement, but okay.
The creator, though, says it can provide a way to keep in touch with relatives and make sure they're safe. Oh my God. So InTouch's website says: busy life? You can't call your parents every day, but we can, 30 bucks a month. No way. Way. 30 bucks a month. What are we doing wrong? I don't know. It, it, it will ask you what topics you want to suggest to get the conversation going.
And then, so this guy said he, he made a fake account. He said, my imaginary grandparent [00:12:00] named Patrick likes video games like Mario, but hates Sonic. And he loves to ride his motorcycle, but he can't do that anymore. So then it made this call,
uh, you, you have to read this whole article. Uh, they have the, they have the calls and the responses. It's absolutely fascinating. So go to the 404 Media website, everybody, find the article, "I Tested the AI That Calls Your Elderly Parents If You Can't Be Bothered," and read this. It is absolutely a gem.
Totally worth reading. Oh my God, it's so funny.
Kevin Dushney: 30 bucks a month. Wow. And of course
Nate McBride: you guys heard that, uh, the Orange Emperor is investigating Chris Krebs.
Mike Crispin: Oh, Krebs, yeah. Security guy, right?
Nate McBride: Yeah. Former CISA director. Yeah. Yep, I saw that too. The executive order came out this week about, um, investigating him for crimes against humanity.
And then, that was, [00:13:00] I mean, you can read all about it, just Google anything, you'll find it. But really, that wasn't even the one in the news. Uh, what was the news, I would say?
Oh, this was the cool one. So you have to actually go, ironically, to the Krebs on Security website. But it was the, um, the SMS phish group, this smishing group, um, the Smishing Triad. And on the Krebs on Security website, they have the, they have pictures of the walls of iPhones and Androids used for supporting their cybercrime phishing campaigns, the smishing campaigns.
It's, it's amazing to see this and what, what's gone into this development. Um, also, Microsoft has a zero-day. Big deal. Alright, where were we?
Mike Crispin: Is this all the smishing All the smishing stuff?
Nate McBride: Yeah. So yeah, there was a big campaign that went out like last month. Yeah. For, uh, for Mass state tolls and stuff.
Yeah, that was just one of the campaigns. So there's no jobs update this [00:14:00] week. So if you're looking for a job, you're shit outta luck. There aren't any, or you can just go research them yourself. But we'll be back next week. Um, and you better apply soon, because some companies we're gonna talk about tonight are using AI to screen resumes now.
But even more than that, they're actually asking: could AI just do the job you're hiring for? Um, and Shopify, in that case, is not going to hire you if AI can do your job. But more on that in a bit. Uh, we have a Slack board, join it. We have a, uh, Substack website, come and join it, which is where you can find all our episodes as well.
We broadcast on Apple Podcasts and Spotify and YouTube and wherever else you listen to shows; we're on all of them. We also have links to buy us a beer. We have links to buy merch in our descriptions, and you can also donate to, uh, Wikimedia, the ACLU, or, um, Life Science Cares or the SPCA or anything else that does good for humanity.[00:15:00]
Just donate, if you got a couple extra bucks. So like I said last week, the Human Fund. The Human Fund. Exactly. We tackled building resilience, we introduced technical knowledge and vendor independence, and we talked about change management, uh, but not really the change management we all know and love; the other side of the change management coin.
Um, tonight we're shifting our focus to the future and we're gonna talk about AI for sure. And that might be actually the entire episode tonight, and then we'll have to do it part two, but we'll see how it goes. But so far the season's been about finding the sweet spot. So balancing risk, innovation and productivity, or preserving autonomy.
That's the key message. We started this in episode one. We've been building on it and building on it. We're still there. We're still talking about it. It's still working. So I want to ask both you guys a question to start this off, get the conversation going. So if you had an open FTE, which is very rare in our industry for most companies.
But if you had an open FTE, full-time employee, but you felt the job [00:16:00] could be done by AI, would you hire the head today, or would you actively focus on moving the work to AI? And by AI, I mean mostly generative AI.
Mike Crispin: I would probably still hire the person at this point in time, and I think, as part of that, make sure that they're able to leverage AI tools and services and have kind of that mindset, not to be afraid of using AI. I would say that is still sort of in the wheelhouse, but I don't think any, any job would be fully replaced by AI.
And if it is, it's just someone making that AI work better and giving that person an opportunity somewhere else within the org. Agree. That's my today point of view. You know, things change so fast, you just never know what's gonna happen. But
Kevin Dushney: now a slant on that [00:17:00] could be, could you hire someone that's a little bit more junior and level them up with AI? Or, yeah, if you already had a team, do you need that extra person?
Or could using AI amongst your existing team avoid that hire? But in general, if it was just straight-up hire versus AI, I don't think we're there yet. Okay. I would hire the person. So let me, great
Nate McBride: answers. Let me,
Mike Crispin: uh, sorry to jump back in, but, That's okay. I think it also depends on, like, if you've got this cohesive AI strategy in place, like if you've got sort of point A to point B mapped out in your rules and whatnot. It depends on what the organization needs, too.
Right. But sure. You, you, you may have certain scenarios where, if your company is more in tune with some of these AI services, maybe you've introduced them in the last year [00:18:00] or year and a half, that could change who you would hire based on the maturity in the organization for AI. Yep. Because some, some, some people, some, uh, some leadership in the organization may be, may, we're getting close to that point where we, the three of us, may be asked, can AI just do that?
And I mean, we're probably not there yet, but to have that strategy in place, not so much from a personnel perspective, but we've talked about sort of the tool sets and how it gets leveraged and the risk and reward components of it. That, um, okay. Yeah, sure. Jump back in there.
Nate McBride: We're gonna come right back to that point, um, in just a second.
So that was a good point. And that's kind of perfect. But I want to, before we get into the next part, a follow-up question, the same exact question, but replace FTE with software. So if you could, if you had the budget to buy a platform, but you could also build that platform [00:19:00] using AI, with an obviously hefty dose of automation, what would you do?
Right now?
Mike Crispin: I would still probably buy the traditional software package, and if I had time, I would build something on the side in my own time. At least me personally, messing around with a lot of these things, I don't think they're ready for prime time yet, or I just don't have the confidence that I'm able to build them in the respect that they'll work.
I think we're a few months away from that at this point and being able to make that decision. Okay. Make a solid decision for me. But Kevin,
Kevin Dushney: I think so. I mean, also, as you guys are both well aware, everyone is throwing AI into their platforms with varying degrees of competency. Mostly variable, yeah.
Nate McBride: Collaborative ai.
Kevin Dushney: Yeah, I think it just needs to, you know, mature. Uh, it'll get there. But, like, even our, [00:20:00] you know, contracts management system vendors are like, oh yeah, we, you know, we, we have AI now. Or board portal. Not sure I want AI in my board portal. Right. But they're pretty much there regardless. So, so we,
Mike Crispin: we've already seen it fail, like, or have issues with, like, CLMs, for example, or, yeah,
that type of stuff, where a vendor goes too fast with the technology and it sets you back a year. So, sure, there's that component of it. What I'm really interested in is the interactive automation.
Kevin Dushney: Yeah.
Mike Crispin: More so where AI can move the mouse on the screen, and you may be able to take the same old, I agree, inefficient process with AI and not change it, but let the AI go through it manually and still have the same checklist and audit trail and SOP without having to change it.
Just let AI follow the SOP.
Kevin Dushney: Yeah. More an augmentation than a replacement, you know, just given there's still the propensity for bias and [00:21:00] hallucination. Sure. Depending on the system, of course. But I just don't think you'd get buy-in, or it'd be tough to recommend it as well. So you can just
Mike Crispin: say you're doing it and it's something else moving the mouse.
Right. Of course.
Kevin Dushney: So generated background, right?
Nate McBride: Part of the, part of the, the, the tie-in here is, okay, we're thinking about future autonomy and future control and, and again, still, um, some sense of autonomy as this world begins to evolve around us in this way. But I'm gonna say something and then we're gonna come back and revisit this, which is: to get organizational gains, I think, like an organizational gain in the broad context,
okay, any organizational gain requires time spent in AI use, and you have to do that time spent yourself. Yeah. [00:22:00] There's, there's, there's no way, I think personally, in my opinion, in this case, that you can simply outsource how it is you're going to use AI inside of a company. Um, yeah. And, and knowledge. I'm just gonna, I'm just gonna put that out there, and then we're gonna come back to this, because here's what, here's what companies are starting to do. And I say companies, but anecdotally I only have two actual real-world cases.
Uh, I'm sure there's more; they're not making headlines, and people are starting to think about this as a future state. But two examples of companies that are moving towards it. If I was to classify both of your answers, they are essentially zero decisions, which isn't a bad thing in this case.
In this case, the zero decision is to stick with what you know, the status quo. The good decision: it's safe, it's stable, low [00:23:00] risk, uh, you know, still high innovation, et cetera. But these two companies are making the one choice, and the companies are Shopify, who we hate, and LinkedIn. So the, the CEO of Shopify,
and this has made its way around the internet by now, has posted the baseline expectation memo. Uh, and there are some questions that aren't really addressed here, and they go back to the point I just made. One: what is management's vision of what the future looks like at Shopify? So what does everyone there do a few years from now, all day?
Uh, is it just nobody's there? Or, like, two people? Two: what is the plan for turning self-directed learning into organizational innovation? So back to the point that nobody just does AI; in order to get organizational gains, you have to spend time. Three: how are incentives aligned so people want to share rather than hiding what they know,
and/or poisoning the well to, [00:24:00] uh, to damage AI on purpose, to make it less effective? And then lastly, how do employees get better at using it? And the last one's, I think, a softball for everybody to answer. But before we answer these, I just wanna kind of go over the memo. Um, if you'll indulge me a second.
So the memo, uh, this is Tobi Lütke, by the way, CEO of Shopify. Um, basically he wrote this manifesto, effectively, and I'm not gonna read the entire thing, but what he's talking about is: we often talk about bringing down the complexity curve to allow more people to choose, um, being merchants and entrepreneurs as a career, so on and so forth.
Our task here at Shopify is to make our software unquestionably the best canvas on which to develop the best businesses of the future, so on and so forth. Maybe you're already there and find this memo puzzling. In that case, you already use AI as a thought partner, deep researcher, critic, tutor, or pair programmer.
I use it all the time, but even then, I feel I'm only scratching the surface. So, to the point that it [00:25:00] takes time to learn: he's in there, he is acknowledging that. And I've been pretty clear about my enthusiasm for it. Um, you've heard me talking about it weekly, on and on and on. So what he's basically saying is, uh, what we have learned so far is that AI is
a skill that needs to be carefully learned by using it a lot. Okay, great. It's just too unlike everything else. The call to tinker with it was the right one, but it was too much of a suggestion. This is what I want to change here today. We also learned that, as opposed to most tools, AI acts as a multiplier.
Okay? I mean, all things to a degree I agree with. Um, this sounds daring, but given the nature of the tools, this doesn't even sound terribly ambitious to me anymore. It's also exactly the kind of environment that our top performers tell us they want: running together, surrounded by people who also are on their own journey of personal growth and working on worthwhile, meaningful, and hard problems is precisely the environment Shopify was created to provide.
So he's [00:26:00] got a, uh, six-point manifesto. Number one: using AI effectively is now a fundamental expectation of everyone at Shopify. That's called the mandate. Uh, number two: AI must be part of your GSD prototype phase. Uh, GSD is getting shit done, by the way. In this case, the prototype phase of any GSD project should be dominated by AI exploration.
Okay? Number three: we will add AI usage questions to our performance and peer review questionnaires. That's a little low-hanging-fruit one. But then here's number four: learning is self-directed, but share what you have learned. Okay, so this is where I start to get a little bit lost, and it takes me back to the statement I made earlier about organizational performance and boosting organizational gains.
But we'll come back to that. So then it goes on. Number five: before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI. Interesting. Yeah. Further, [00:27:00] what would this area look like if autonomous AI agents were already part of the team? And then lastly: everyone means everyone.
This applies to all of us, including me and the executive team. And then he goes on and on to this pithy closing. So, immediate thoughts. I'll give you mine first, and then we can all sort of trash it. First of all, how would you measure that this is better? I mean, it sounds technologically transformative, but almost for the sake of just being technologically transformative.
Question number four, is that the pivotal question? I feel like everyone somehow agrees that there's something here worthwhile to gain some efficiencies, though not always. I don't know how many articles, hundreds of articles, have come across my email or whatever, but none have shown yet anybody that has figured out how to address the best use case,
'cause there simply isn't one, in my opinion. Further, when an individual does find a great use case, it's individual; it does not mean it's going to transfer to the enterprise. [00:28:00] Probably not even to their office mate. Then lastly, self-directed learning, to his last point, or to his fifth point, becomes innovation
when people are valued as co-creators of the company's future, not just content consumers or reviewers. People are starting to build their own knowledge, what we used to formally brand as shadow IT, to be honest, as they test, fail, share, iterate, repeat in their AI platforms. We used to hate this shit.
Spent millions and millions of dollars shutting it down. Now we're encouraging, now we're encouraging it, we're mandating it. Okay, so I'll pause there. Reactions, thoughts?
Kevin Dushney: uh, mixed. I mean, you, you already said this along the way. I think there are some good pieces of that, but the, the mandate piece, um, you and I have talked about this, Nate, is a tough pill to swallow, because I see it as more of an optional tool, and everyone's gonna take this up at their own pace, right?[00:29:00]
So I think that's issue number one. The other is, like, justifying headcount. How are you gonna do this? And that's so subjective. Like, who's, who's, how are you making that decision, with what data? This is all so new. That just seems very subjective, that you would say, well, as part of this headcount justification, you know, I've, I've evaluated somehow if AI could, you know, as if they're on the same playing field, take the place of that employee.
And, and maybe it's just trying to be provocative, right? And that's not gonna happen in practice. But, you know, if that's really true, like, how would you execute that? So yeah, it's, it's just rhetoric to say, you know, you're gonna use this
Mike Crispin: if he's demanding, if he's, he's mandating that everyone learns how to use AI and they're going to share how they're using it to get things done, get shit done, so to speak,
and that's the culture of the company that they bring, it may be [00:30:00] that it's a little bit of a different environment over the course of time when you're trying to, uh, justify needing a headcount, when now you've got a whole organization that's, yeah, using AI to do different things and probably more likely to be able to disprove that a new worker is needed.
Mm-hmm. If they're all working, mm-hmm, if this whole culture takes shape within their company, which I think is his goal. I think it's, uh,
I, I, I, I think in time it will. Right now people are using AI on their own, like you said, like it's kind of individual. Uh, yep. It's still kind of, hey, you didn't know I used AI on that and it made me look smart, uh, type stuff. Mm-hmm. Yeah. That's, I mean, right now, if you don't write a job description...
You know, if you were to take a day or three days to write a job [00:31:00] description for someone, I mean, if you don't use AI, someone would say, what are you spending three days doing? Um, and as time goes on, more and more of these tasks, certain things, may be able to be done. Sure. So I get, I get where he is going. I think it's early, too early, like we said, with our own kind of projects and how we would approach it.
Maybe he's trying to get ahead of it, and, like you said, Kevin, there's sort of this rhetorical component to it. They want to get... it's, it's brought a lot of media attention to Shopify. Yep. As an innovator. We're talking about their business model. We're talking about it right now.
It's worked. Um,
Nate McBride: but they also suck as a company, but Okay. Keep going.
Kevin Dushney: Yeah. Regardless, still, you know, no press is bad press. Right.
Mike Crispin: I, I think this will happen more, um, invisibly in companies, and he's just trying to get ahead of it. Either it happens in a vacuum, in pockets of the company [00:32:00] where you don't have support from management that this is how the company needs to continue to learn and get things done.
Mm-hmm. Or you have a rogue element, many components of the company going this direction, in which case you run a lot more risk, I guess, to some extent. I imagine whatever apparatus Shopify has built to support this across the company has been tested and understood. And I think
Kevin Dushney: He did, he references it in the, in the piece that they have tools in house.
So essentially, you know, he's built the
Mike Crispin: sandboxes and maybe they made
Kevin Dushney: the investment, and he's like, hey, the usage rate's low, so I'm gonna, I'm gonna light this fire and get people going. Yeah, yeah.
Nate McBride: No, it's definitely a call to arms. Uh, I'm gonna come back to a point in a minute that I think will help us sort of, uh, tackle this a little bit broader.
I wanna mention the other company, LinkedIn. I only took a snippet of their statement, but, um, effectively at LinkedIn, company leaders are now [00:33:00] encouraging, and encouraging is the word, not mandating, but encouraging, engineers to use AI to speed up design work, and designers to use AI to generate code.
Yep. Um, and this is from their chief product officer, Tomer Cohen, who said building products will be less about technical know-how and more about intangible skills like taste, imagination, and storytelling. The most important trait for a builder is judgment: a taste-making ability to know if something is good for the market or not.
So think about the developer, and I'm not a developer, but think about a developer that's spent, you know, however many years learning the language and becoming amazing at it. Now having to effectively say, well, the AI's gonna write all the code. Mm-hmm. And I'm just gonna figure out the taste part. It's gonna make for a very subjective, I think, process.
'cause taste is inherently subjective. Absolutely.
Mike Crispin: I mean, look what, [00:34:00] uh, Canva just did last week. I mean, they have built basically an app builder for everybody to, yeah, to create apps and websites that are pretty darn good. And, well, I mean, going back to AI, it's almost like what we talked about with ChatGPT in the earlier days: it's the prompting.
Mm-hmm. You know, it's mostly about your imagination to ask the best question, the ability to have all the details to craft instructions. That's what's more important than, than the actual building part, it seems, as we go forward. Mm-hmm. Even movies, just think of the best movie, and perhaps, you know, this will get good enough to build a, yeah.
A great story out of a book you've written, that's more consumable for all ages, or for people who are into gore, you know, whatever it could be. It just depends on what taste you want to give it, how you blend it over [00:35:00] time.
Nate McBride: Well, here's the question then, Mike. Uh, well, and Kevin, both of you. So if everybody, let's go back to the job description example you gave.
If everyone's using AI to generate job descriptions, and AI generates the same job descriptions, do we eventually move to what we talked about episodes ago, the great homogenization, where effectively there is no more job description because every job description is identical? There's no more variability anymore.
There's no more variability in anything because everything is identical. The uniqueness factor is removed for efficiency and speed; effectively, you're at a global zero decision.
Kevin Dushney: Yeah. I think it's ironic, that's if all the job descriptions are the same, 'cause you're no longer factoring in... what?
Oh, not, not
Nate McBride: not by company, Kevin. More like me, you, and Mike all using generative AI to create a [00:36:00] job description. Yeah. For, like, senior director of serial, uh, it's identical.
Mike Crispin: It won't be
Nate McBride: because you
Kevin Dushney: wouldn't, they look, wouldn't they look similar anyway if we wrote them on our own in terms of Exactly.
But forget the title. Do we get
Nate McBride: closer by, by not writing it ourselves and not adding any unique flavor? Yeah. Do we get closer and closer? We don't necessarily move to a full zero point, but do we get closer and closer to where the parallel lines, yeah, are really on top of each other? Yes.
Right. The lack of, lack of uniqueness. Mm-hmm.
Kevin Dushney: I don't know.
Nate McBride: That's a good question. I mean, you can argue about it, and we'll talk about this in a little bit, but I put another example on here, which is: if everyone's asking the same questions of the same generative AI engines, yeah, they're all likely gonna get the same answers.
And so who wins the battle? Who gets the answer and calls it first?
Kevin Dushney: I think the better, the better prompter wins, right? Because if you're gonna [00:37:00] differentiate, like, if we're all vying for the same candidate, writing the same JD, how are you gonna differentiate and draw that candidate to you?
Nate McBride: Exactly. But, but wouldn't you also, from a candidate's perspective, be able to type in senior director of serial and get a hundred senior directors of serial that are all identical, and now you're able to filter by other phenotypical things like geography, salary, et cetera?
The job descriptions are all the same. Yeah. I mean, ultimately, you're able to,
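The homogenization worry in this exchange can be made concrete with a toy measurement. Assuming you have the generated texts in hand, a simple Jaccard word-overlap score shows how close two AI-written job descriptions are; all the sample text below is invented for illustration:

```python
# Toy illustration of the "great homogenization": score how similar
# two generated job descriptions are. All sample text is made up.

def jaccard_similarity(a: str, b: str) -> float:
    """Word-set overlap between two texts: 0.0 (disjoint) to 1.0 (identical)."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

# Two near-identical "AI-generated" postings vs. one with unique flavor added.
jd_one = "senior director of it leads strategy vendors and security for the company"
jd_two = "senior director of it leads strategy vendors and security for the business"
jd_custom = "senior director of it who loves broken processes cocktails and autonomy debates"

print(round(jaccard_similarity(jd_one, jd_two), 2))    # high overlap
print(round(jaccard_similarity(jd_one, jd_custom), 2)) # lower overlap
```

The "one decision" Nate describes later, stopping to add your own sauce, is exactly what drives the second score down.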
Mike Crispin: I, I dunno, I think you're gonna get creative. You're gonna be able to put more things into a job description. Yeah. That salary could be descriptive
Kevin Dushney: other than the regular stuff.
Mike Crispin: You could pull data from different sources to be, if, if I'm trying to bring someone in against two other people and I'm creating the job description in, um, in ai and I've surveyed that, that job is open and I really think the job description is what's gonna pull someone in, then I'll work extra hard on it.
But I think if there are other elements that we can come attach to the job description [00:38:00] or in the job description, uh, by using different data points or context of your company, you would include that in the prompt and in the, in the iterative process of building the job description to make it your own.
Nate McBride: I'm only arguing from Toby's perspective, which is: you shouldn't be spending your time doing that. Yeah. You should just be having AI do the work for you and then spitting out the job description to go ahead and hire somebody. You're talking about, you are, but if
Mike Crispin: the job description, you know, what you're saying is that it needs to be unique or not.
And I, I would say it can be if you want it to be, if you don't need it to be. And a job description isn't a huge value in recruiting someone, which debatably may or may not be true.
Nate McBride: Um, well, I'll stipulate and frame it a different way. Sure. What I'm suggesting is that if everyone's using AI to create the same job descriptions, heading towards a state of uniformity or near uniformity, that would be a zero decision.
What you're suggesting is that you would stop for a moment, pause, and then make it unique, which would be, in fact, a one decision. You're taking the autonomous step, mm-hmm, to say, [00:39:00] actually, no, I'm not gonna put out this tropish trash that everyone else is doing. I'm actually gonna add in my own little sauce.
That's what you're saying? Yes.
Mike Crispin: My, my own taste.
Nate McBride: Okay.
Mike Crispin: Yeah.
Nate McBride: Okay. So then, well, I'm gonna ask a question. Don't answer it yet; I want you to think about it and we're gonna come back to it. Kevin just kind of said my thing, which was whoever prompts first. And here's a question that we're gonna have to answer soon in, in this podcast, which is, uh, if we're all asking the same question and we're all getting an answer,
and then one of us is going to go ahead and take that answer forward to do more serious, potentially patent-based work, how do we prove that we're the one who asked the question when everyone else got the same exact answer on the test? Don't answer it yet. Think about that. We're gonna go back to that.
Secondly, uh, on that same question, I'm also gonna ask you: what if everybody, what if everyone hired somebody or [00:40:00] people, or a team, or even an AI, to go ahead and write every single possible preemptive query that could ever be asked, ever, for their company? Put those into a database and then start asking them.
And not only asking them, but asking them on a routine basis, and then comparing the results of every answer from every single time they're asked to determine what's changing in variability. Think about this one too, 'cause I was trying to map this out today, thinking about this exact possible scenario happening and how simple it would be to create, and my, my ears and my eyes and my nose started bleeding all at the same time.
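That thought experiment, a database of preemptive queries asked on a schedule, with answers compared over time, can be sketched in a few lines. The model call is omitted and every answer below is simulated, so treat this as a hypothetical shape of such a system, not a working one:

```python
import difflib

class PromptRegistry:
    """Hypothetical sketch: store canned queries, log every answer,
    and measure how much the answers drift between scheduled runs."""

    def __init__(self):
        self.history = {}  # prompt -> list of (run_date, answer) tuples

    def record(self, prompt: str, answer: str, run_date: str) -> None:
        self.history.setdefault(prompt, []).append((run_date, answer))

    def drift(self, prompt: str) -> float:
        """0.0 means the last two answers were identical; 1.0, totally different."""
        runs = self.history.get(prompt, [])
        if len(runs) < 2:
            return 0.0
        prev, last = runs[-2][1], runs[-1][1]
        return 1.0 - difflib.SequenceMatcher(None, prev, last).ratio()

# Simulated answers to the same preemptive query on two scheduled runs.
reg = PromptRegistry()
query = "best ERP for a 200-person biotech?"
reg.record(query, "Vendor A, because of X.", "2025-01-01")
reg.record(query, "Vendor B, because of Y.", "2025-04-01")
print(f"drift: {reg.drift(query):.2f}")  # nonzero: the answer changed between runs
```

The interesting part is exactly what Nate raises: run this across every query you can think of and the drift column becomes a map of where the model's answers are unstable.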
So we'll come back to those, but I want you to think about 'em, 'cause I do have a point to come back to. So, to the topic at hand. By the way, that was a good intro and it got everyone's brains moving, 'cause we're gonna need that. Some of this stuff gets pretty deep. So we're talking about emerging technologies and their impact on autonomy.
Okay? We've [00:41:00] established that, and this builds on everything we've been discussing this season. We've also talked about that. But here's the thing about emerging technologies that no one talks about enough, except us, of course, on this amazing kick-ass podcast. Emerging technologies, by default, have a what?
A paradoxical relationship with autonomy. That's right, a new paradox. The same technologies that can liberate you from vendor lock-in and give you unprecedented control can also create new dependencies and constraints if implemented without strategic foresight. Okay, well, they will give you those things, but with strategic foresight you can minimize them.
And what's fascinating is that these emerging technologies, i.e. AI, aren't just iterative improvements on what came before. They're fundamentally changing the rules of the game. AI doesn't just automate existing process. And Kevin and I had a wonderful aside tonight about how pissed off I am that nobody's talking about improving process anymore.
But [00:42:00] AI doesn't just automate existing process. It can reinvent process, and not always for good. And, like we'll talk about with quantum computing later, it doesn't just make existing encryption a little bit better. It can potentially break it completely and force us to rethink our entire security model, which is to say we might as well just give everyone the passwords at this point.
So that's where we'll start tonight. We're gonna look at AI first, spend some time on it, and then we'll jump into the rest of the future world, this future, um, dystopia. It may take us two episodes to get through, and that's okay. So this paradox, the autonomy paradox of AI integration, uh, this is basically the central tension that we're all facing right now.
The three of us, and every other IT leader. We're getting pressure from all sources, the media, executives, people around us, uh, around this AI thing. It's hard to completely classify it [00:43:00] as emerging, 'cause it's been around for 25, 30 years on any technological scale we can conceive of, anyway.
But I think in a lot of ways people are just starting to get their heads around it now because it's become the zeitgeist. Unfortunately, it's becoming the zero answer to problems that have always existed and are not solvable by a zero solution. And it's replacing zero-based processes that need to be fixed before you can go ahead and gild the lily with AI.
So I am gonna go on a brief rant, I'm sorry, but if you take every single article that's out there today, every article that says generative AI will do this amazing thing, you can simply take generative AI out of that sentence, replace it with process improvement, and the entire article will still be true.
So why aren't we asking this question? Why aren't we asking ourselves how we can improve process before we use AI? We've thrown out any [00:44:00] concern about improving our processes. We all just want AI. So why is AI becoming the default answer? I'll throw it to you two for that.
Mike Crispin: because improving processes is really hard
Nate McBride: and boring
Mike Crispin: and people don't do it well.
And AI sounds like a magic bullet.
Kevin Dushney: There's no hype cycle on process improvement. Nate, also true
Nate McBride: truth, truth, truth.
Mike Crispin: They don't want an operational excellence program. They don't want cross-functional groups spending time on operations when they could be closer to the product, or getting new data sets, or, you know, discovering new drugs.
Yep. So we can have AI do that for us. Right, right, Mike? Right, Nate? Right, Kevin? We don't, we don't wanna do the business process management.
Nate McBride: Yeah. Fuck process, man. It takes a hundred steps to do this thing? Ah, it's hundreds, fine. We got AI now.
Mike Crispin: It's hard. It's hard to build a new business process.
Kevin Dushney: It is hard.[00:45:00]
You're gonna plot this curve of the decline in, in good process, it just makes a mess, and you're gonna try to keep offsetting it with AI improvements. Well,
Mike Crispin: I wasn't kidding when I said I want AI just to move the mouse. Click here, click here, click here, click here. Yeah. It's the worst. Well, how else are we gonna get anything done in the world?
But hey, just move that thing for me when I come in in the morning. If
Nate McBride: I have to keep touching my mouse and touching my keyboard, how can I, how can I get to level 250 if I gotta keep touching shit?
Mike Crispin: All you need is WinBatch. Focus on my game. Just get a copy of WinBatch and we, we can do better than AI. Just get WinBatch, you know, use the Windows hotkeys, tap enter five times.
Nate McBride: Well, I was all fired up about it, but yes, you both make good points. I mean, ultimately, process improvement sucks. It sucks ass and it's hard as hell to do. It's beautiful and gorgeous and wonderful to behold when it's [00:46:00] done. But everyone seems to not wanna do the work. And this AI, uh, here's the secret, everybody.
It is not the magic bullet for your broken-ass process. No. You need to fix your process. True. So, um, you know, Shopify feels that AI is going to itself change the process. It will not. Um, I mean, yeah. So, get some
Mike Crispin: Lean Six Sigma Black belts
Nate McBride: in Lean Six Sigma black belts. I am one, by the way, and I don't know fuck all about how AI and Lean Six Sigma work together.
Oh, that's
Kevin Dushney: just a, a custom GPT or a gem. Yeah.
Nate McBride: We don't need, that's a gem. That's right. We do the
Mike Crispin: six Sigma gem.
Nate McBride: Six sigma G, the lean,
Mike Crispin: the lean gem.
Nate McBride: I'm telling you, just by virtue of the fact that you said that, there's a company setting up shop right now, uh, to build that app. And the guy's name is Mr.
Lean. [00:47:00] It's Leany. So, on the one hand, AI promises unprecedented autonomy. Unprecedented autonomy. If you watch the IBM commercials and you read the headlines and you look at the sound bites, unbelievable. You're gonna save a gajillion dollars if you put AI in. First of all, you're gonna work so fast.
You can be on your yacht all the time. You'll just be buying Dogecoin and, uh, stablecoins and all kinds of shit. You don't have to work anymore. You just gotta put AI in, everybody. I'm just telling you the secret right now. But here's the, here's the little secret. I'm gonna whisper it a little bit.
It's all completely specious. I'll stipulate: AI has capabilities to automate routine decisions. Mm-hmm. Okay, so do automation and process improvement, but AI has that. It can reduce dependency on specialized staff. Okay, no problem. I can go in and ask Gemini to help [00:48:00] me do something. That means I don't need to go out and call somebody and give them money.
Okay. It even allows you to build custom solutions that previously would've required expensive vendors. Mm-hmm. Like, uh, I don't know, the RSMs of the world. Yep. AI-powered DevOps can self-heal infrastructure issues before they even become problems. AI code assistants can help smaller teams accomplish more without adding headcount.
All great. But there is no guarantee that if you do any of these things, it will work, if you apply it to things that are already shit. If you're starting net new, you can use these capabilities to potentially accomplish these things. But if you are gilding the lily, putting lipstick on the pig, use any other metaphor, mm-hmm.
It's not gonna work out. Do you agree? Disagree, guys?
Mike Crispin: I, I agree. One reason I agree [00:49:00] especially is, if it's a bad process and it's 26 steps, it's gonna be a lot of tokens. It's gonna cost you a lot. It's the same thing: efficiency work versus cost. If you've got a crappy AI model running a crappy process, it's not gonna return any benefit, except that you don't need a human. It's still gonna cost as much.
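Mike's token-cost point is easy to put back-of-envelope numbers on. The prices, token counts, and run counts below are all invented for illustration, not real vendor pricing; the only takeaway is that cost scales linearly with the steps you never removed:

```python
# Back-of-envelope: the cost of an AI-run process scales with its step count.
# Every number here is invented for illustration only.

TOKENS_PER_STEP = 2_000      # assumed prompt + response tokens per process step
PRICE_PER_1K_TOKENS = 0.01   # hypothetical blended price, in dollars
RUNS_PER_MONTH = 500

def monthly_cost(steps: int) -> float:
    """Dollar cost per month of running an AI over a process with this many steps."""
    tokens = steps * TOKENS_PER_STEP * RUNS_PER_MONTH
    return tokens / 1_000 * PRICE_PER_1K_TOKENS

bloated = monthly_cost(26)  # the crappy 26-step process, automated as-is
lean = monthly_cost(5)      # the same work after process redesign
print(f"bloated: ${bloated:,.2f}/mo, lean: ${lean:,.2f}/mo")
```

Same work, same model, five-ish times the bill, which is Kevin's ERP point in miniature: redesign the process first, then automate it.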
Kevin Dushney: Agreed. Let's say you didn't have an ERP yet. Isn't one of the first things you walk through why you don't want to build bad process into the ERP, and do process redesign and improvement before automating it? Yep. It's the same principle, isn't it? Yeah, it is.
Nate McBride: It goes back to that old question, guys. We all went through this, which was, um, what's this platform going to do for you?
Are you willing to basically inherit the process of the system and work exactly the way it does? Yeah. Or are you going to take your shitty process and [00:50:00] customize the crap out of it to make it work like you do? Zeros and ones. Yep. Um, so imagine a company that uses AI to build a customer service platform, like, uh, one that's supposed to sell shirts online and then kicks customers out for no reason.
And this would've cost millions to purchase from a vendor. I'm not saying names, I'm just giving you a speculative question. This company maintained, and I know what you're
Kevin Dushney: talking about, sounds somewhat esoteric.
Nate McBride: This company maintained full control of data, the algorithms and implementation. That's increased autonomy now.
Also, just imagine how fricking hard it would be to build that.
Kevin Dushney: AI has decided to remove this from your cart.
Nate McBride: AI has canceled your account for no reason that we're gonna tell you about. And then there are more meta questions, uh, such as, um, in the process of building the thing, [00:51:00] what would you learn about yourself along the way? And this is applicable to anything in the world that you ever make.
If I go ahead and make a pie, what do I learn about making pies? Well, the next time I make a pie, I'll be better, theoretically, and so on and so forth, right? Until I become the king of pie making. Well, in the process of doing such a build, you would learn so much about it. Would you need to repeat that process again?
And also, in the time it takes you to build it, and this is my favorite part, the technology will have outpaced you. So what you have spent a year to build is already going to be so archaic by the time you actually go into production. How do you accommodate for that? But on the other hand, AI can create new, often less visible dependencies, sort of shadow dependencies.
So when you rely on the LLMs from the OpenAIs and the Geminis and the Anthropics of the world, you're [00:52:00] outsourcing your decision making to a black box you don't control. Mm-hmm. When your business processes become dependent on the AI's predictions, you're creating a new point of potential failure. I mean, well, multiple points, actually.
And so, can you guys see a time when companies rely... like, for instance, the security example. Um, you know, now you're seeing your favorite security vendors, I won't mention any names, saying, we've got AI in our platform now, and you can make decisions faster, et cetera, et cetera.
But when you come to rely too heavily, or entirely, on this AI for security monitoring, um, do you lose a grip on understanding your security posture? Are you basically just hoping at this point? Because that's decreased autonomy.
Mike Crispin: You're putting a lot of, um, putting a lot of trust in it, that's for sure.
Kevin Dushney: Well, and who steps in if it doesn't get it right and things go sideways? Like, who's rescuing that situation, [00:53:00] right? AI says you're compromised, and then a human's like, what do we do?
Mike Crispin: I'd just say it'll be us. We'll have to answer. We picked the vendor.
Nate McBride: I mean, one of the greatest examples of this gone wrong is that we almost had a thermonuclear war between Russia and the US in the eighties.
Because of this. We decided we were just gonna let the WOPR run and make decisions. And then what did it do? It almost killed everybody. Think tic-tac-toe. The key question for IT leaders is this, I think: does this specific AI implementation that I'm about to do, this thing that I have been talking up a storm about in my company, is it actually going to increase or decrease my decision-making power as an IT leader?
Does it give me more options or fewer options going down the road? And will those answers still be the [00:54:00] same in 3, 6, 12 months or longer down the road? So if I can say, oh my God, of course I'm still gonna have all the power by putting this in, I am the power lord of this AI implementation. At the moment, the second I deploy it, maybe.
But is that still true down the road? I mean, you guys can see this, right? Yeah. The potential for that to change, for it to become so evolved that you've lost your capability to say, actually, we're gonna change to a different platform now. Everyone's like, absolutely not. Can't. No fucking way. We're done.
Kevin Dushney: We're in vendor lock-in. AI lock-in.
Nate McBride: Yep. You started off as king of the castle and now you're just washing the floors. So I don't think the answer's ever gonna be clear-cut, but I think that's because the autonomy impacts of AI happen at every level.
Kevin Dushney: Yeah. [00:55:00] It has me thinking, just, you know, brainstorming on this: people are asking the question, will this take your job?
And maybe the better question is, will it take your autonomy?
Nate McBride: Yeah.
Kevin Dushney: Yeah. It's both, depending on the role, right? I mean, small tangent, but look at the quality of image generation now, how it evolved in the last month versus what it was before. If you're a creative doing, you know, labels for products, movie posters.
Nate McBride: Yeah.
Kevin Dushney: I mean, what are you gonna do? Someone with average skills can generate that stuff now, versus you, you being a creative. Now I could do it as a non-creative.
Nate McBride: Yeah. There's gonna be a point in time when the word art becomes a cliche. And [00:56:00] we, we talked about this a little while ago, but the standardization of all things, the ubiquity, everything's the same.
Now, if the three of us decide to use Imagen or whatever to create images, they'll all be different images
Kevin Dushney: or Midjourney. You guys have played with that yet?
Nate McBride: Yeah. Midjourney or DALL-E or whatever, it doesn't matter. The fact that we're all just creating tons of useless images the same way is, in and of itself, non-unique.
Just the premise of how we did it. So yeah, we now have, how many people are on the planet, 7 billion? Yeah. 7 billion people that can all create shitty images. Great.
Mike Crispin: The whole thing is shitty images, and job descriptions, I think.
Kevin Dushney: But think about... it's funny, 'cause somebody asked this as a question around all this AI use: think about all the nonsense it's being used for, whether it's deepfakes, [00:57:00] you know, uh, Instagram reels, crappy images. The GPU power and all the cooling and energy that goes into that, for just nonsense.
Nate McBride: Yeah.
Kevin Dushney: From an environmental standpoint. Yeah. Well, the AI wonk, I don't fake use, but still, you know, there's a,
Nate McBride: The Andrew Ngs of the world and all the AI wonks are all saying, oh my God, we're saving so much power now by making the models faster and better. Doesn't matter, 'cause more faster, better models just make room for more faster, better models.
Kevin Dushney: Yeah. I asked that same question. Ethan Mollick, I think, posted that on LinkedIn, and I said, you know, yeah, they're getting more efficient, but are people just using it to do more non-productive things, therefore completely offsetting the efficiency?
Nate McBride: I follow Ethan Mollick. I wonder if he works, 'cause all he does is post on LinkedIn every day.
Kevin Dushney: He's so prolific, it's ridiculous. And he's testing, like, every single new model that comes out. He's
Nate McBride: [00:58:00] a good guy to follow on LinkedIn.
Kevin Dushney: Yeah,
Mike Crispin: I think some of the real innovative things that are happening in, in AI are not being shared, by design. I don't think they're... How do you mean, Mike?
I mean, for example, if you've got a business that's running on AI and you've got some secret benefit as a competitive advantage, you're not gonna be on LinkedIn or anywhere else, or at a conference, you know, talking about all these great things your business is doing with AI.
No. Unless you're getting a nice plug from Google or IBM, or getting some discount. I, I think it's competitive advantage, just the way that when we all started with ChatGPT and probably used it for things, it was like, oh wow. It wasn't, oh, I just typed that into ChatGPT and got two of my slides for that presentation.
Businesses aren't gonna share the way that they're using it to, to be competitive
Kevin Dushney: or you'll get the watered down stuff, but not the Yeah. Truly [00:59:00] transformational.
Mike Crispin: The billions of images that are created are, are wasteful, but they're really proofs of concept in a lot of ways, thrown out to everyone to try and use, proving these systems can scale up at the speed they can scale at.
Mm-hmm. And then, you know, for all this other stuff, whether it be warfare or science or anything else. That's all happening in the background. And I think it's being used for a lot of things that we probably can't imagine right now.
Kevin Dushney: Or simulations, for example.
Mike Crispin: Scary stuff. A lot of scary stuff, and probably a lot of very good stuff.
Yeah, you
Nate McBride: can, you can imagine all the terrible shit that's happening with AI right now that's not being talked about. Yeah. Oh, I I,
Mike Crispin: I, I think it's even, I think it's even worse than terrible.
Kevin Dushney: I mean, if you had a model with zero ethics, can you imagine? Yeah. Just, just no guardrails. Go,
Mike Crispin: go, go spend 20 grand on a few GPUs in your basement.
You can, you can change the world for the worse.
Nate McBride: If I had 20 grand, I wouldn't put GPUs in my basement by the way. I would put a pool table. [01:00:00]
Mike Crispin: I'm saying if you wanted, if you wanted to truly, uh, create havoc
Kevin Dushney: Pool table. No, I'd do the pool table, but all the legs are Mac minis stacked up.
Nate McBride: Yeah. Yes. And then I would create it, but then I would invite all my friends over and we would have a big party.
And that would be disruptive. My wife would be pissed. Probably a lot of people would get pissed about that. But anyway. So, AI. It might increase tactical autonomy by automating routine decisions, okay, but decrease strategic autonomy. Sorry, you will decrease your autonomy if you become dependent on a vendor's AI platform.
Or it might increase your autonomy relative to your competitors while simultaneously decreasing your autonomy relative to your AI providers. And it might do either of these in a way that swings the potential based on a billion other factors. So you need [01:01:00] a framework for evaluating the autonomy of your decisions, uh, so that when you go to ask that question on day zero, three months, 12 months, two years out, you have a way to assess it empirically.
And there are a lot of questions you could ask; a couple come to mind. One: who owns and controls the data going into and coming out of the system? Basic ownership of a data point, right? If you have, for instance, a CLMS or an ERP, you should be able to ask and answer these questions. Who owns the data in this thing?
You are effectively leasing the platform. You do not own it, but you need to make sure you still own the data. Two: can you understand and explain how the AI reaches its conclusions? Explainability. I hammer this in every single class. If you cannot explain how you connected the query to the output, you should not be using AI, or at least not for that topic.
Three: reversibility. Can you roll back? [01:02:00] And I don't see any fricking way to do this unless you're using strict, integrative AI; there's no way to roll back. Or can you even change direction? And lastly: does the AI implementation that you're going to do, to make the company amazing and worship you, build or erode internal capabilities?
So, yeah, I just built this new AI platform, it's kick-ass, but we have to fire 10 people. Hmm. How's that gonna go over for skill development?
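The four questions Nate runs through, data ownership, explainability, reversibility, and capability impact, lend themselves to a blunt scoring checklist. A minimal sketch, with invented field names and an arbitrary pass threshold, nothing here is a standard instrument:

```python
from dataclasses import dataclass

@dataclass
class AutonomyCheck:
    """One yes/no answer per question in the framework sketched above."""
    owns_data: bool           # who owns the data going in and coming out?
    explainable: bool         # can you explain how the AI reaches conclusions?
    reversible: bool          # can you roll back or change direction?
    builds_capability: bool   # does it build, not erode, internal skills?

    def score(self) -> int:
        # Count the "yes" answers (True sums as 1).
        return sum([self.owns_data, self.explainable,
                    self.reversible, self.builds_capability])

    def verdict(self) -> str:
        # Arbitrary threshold: fewer than three yeses suggests the
        # implementation decreases your decision-making power.
        return "increases autonomy" if self.score() >= 3 else "decreases autonomy"

proposal = AutonomyCheck(owns_data=True, explainable=False,
                         reversible=False, builds_capability=True)
print(proposal.score(), proposal.verdict())
```

The point of writing it down at all is the empirical reassessment Nate describes: run the same checklist at day zero, three months, and twelve months, and watch whether the score drops.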
Mike Crispin: Yeah. Yeah. And people, I get people feeling comfortable in their role and wanting to work there. They could be next.
Nate McBride: Mm-hmm. That's right. And then that goes back to the point about sharing. Well, if Kevin's gonna go and build his kick-ass AI platform, but he really needs my information about what I do, you know, stuff,
I'm not telling Kevin the right answers. I'm gonna give him a whole bunch of bullshit. Yeah. So that when he tries to launch this thing, it fails. Yep. That's right. Great point. Well, so this brings us to the next topic of this whole future [01:03:00] evolution. So, okay. IT leaders need some sort of autonomy framework for how they're gonna deploy AI.
That's a future need you're going to have. But what about, yeah, that last bullet I mentioned: okay, once you've deployed it, have you considered what the impact is for people who are there that need to be upskilled? Yeah. Or people who are there that are gonna lose their jobs?
Mike Crispin: Well, can we take one step back to the autonomy piece real quick? Yeah, yeah. Just that. Similar to these requirements, any system that an IT organization is thought to be able to support has almost the same things, right? Can you roll back? You mentioned kind of your exit strategy.
Can you undo what you've put in place? Yes. You used the query example. Can I explain how it works? Can I [01:04:00] describe why this is valuable, and how it can be fixed, or how it can be made to work better? Can we teach people how to use it, and can we rip it out and replace it?
And all of those — those four or five things. Once we don't have those answers, and we've given them to a machine to do, we've lost all autonomy. And that's where we're going: when we can't answer those four or five questions, you've lost. You've now entrusted your operation to, uh, an autonomous system.
And pretty much, that's it. Like, if we never learn how to drive, we've lost the ability to make decisions on the road. Now you could argue if that's good or bad, right? But once I can't explain to you how I drive — I don't know any of the turn signals, I don't know what a red [01:05:00] light is —
you've lost any sort of freedom. And when you put a system in, or you have an AI approach — I think in some respects we may not be able to explain how a GPT works in a lot of ways. They just go and they buy it and they use it, right? Yeah. So it's like
Nate McBride: the Modernas of the world.
Mike Crispin: You drill down into these things as time goes on. It's how much you concede — that zero-to-one
we talk about with autonomy all season. Once you've given in to it, you've traded all this autonomy for convenience. Right. And for speed, let's say.
Nate McBride: The thing you just said — "once you've given in to it." I mean, we can pause the rest of the podcast for a moment, because that's maybe one of the most key points ever.
Once you've given in to it — what does that even mean? How is that defined? Because I'm with you; I would use the same sort of colloquialism, "once you've given in to it." But is there a way to track
Mike Crispin: that? I think it's once you're honest with yourself that "I don't have a clue how that works, but it works."
Nate McBride: Yeah.
Mike Crispin: That's when you've lost control. Mm-hmm. And it may look great, because you met all your goals for the year and you're doing things faster and better than ever before on paper. It's measurable, but you have no idea how it's happening. Is it important that you don't know? Well, it is, because nobody else is going to know either.
And you have
Nate McBride: that — that was absent from Tobias's manifesto. Yep. That you understand how it's happening. He just
Mike Crispin: doesn't care. Yeah.
Nate McBride: Just
Kevin Dushney: go do it.
Nate McBride: Just go
Mike Crispin: do it. And there may very well be leadership, or people, that believe: look, it's working, and someone will be able to figure it out. But right now we're doing great, and this other company's doing it, and that other company's doing it, [01:07:00] and it's this person's job to know how it works.
And they don't really know, 'cause they just put it in. But that's what I mean: I think over time it happens more in pockets within companies, as they — I hate to say it — kinda give in to some of these automations and tools. They really don't even know how it's working. I will say — maybe hopefully, luckily — the exception is in our industry, and the financial industries, and others: there is a pretty good spotlight on us knowing how things work from a business perspective.
Whether it's FDA compliance, or fiduciary responsibility as a publicly traded company — governance that helps keep that from happening. Hopefully. But regulations are changing with AI even. And I think what you lose is the ability to explain. I mean, who can say,
"Yeah, I get it. It's cool. I know how that works. It's just working out so well." You know, someone who's not being genuine [01:08:00] or truthful as a leader says, oh yeah, I get it, I know how it works — but really they're dropping it into a prompt somewhere, into some automated engine. Mm-hmm. And three years later their job's gone, they've left, and the company doesn't know how it's running.
Yeah.
Kevin Dushney: And I think that crucial knowledge is trapped in a prompt somewhere
Mike Crispin: That's right. And it's a circle — it just loops itself over time. It's a dystopian view, but I think that's why it's important that so many people are in there learning these tools: how they work, what their caveats are, what their risks are, what they do well, what they don't do well. But it's moving so fast that, I mean, I would argue that's near impossible.
I think we eventually, yeah, get overrun by convenience. When something's easy, we jump all in. Genuinely, a lot of people just jump all in; they don't even know how it works. Right. So I wanna make a new picture for something. [01:09:00] Take Canva, for example. Yeah, I keep using that example.
I wanna build a great presentation. I don't need to know how to do that — I just type it in.
Kevin Dushney: Now do you use Canva, Mike?
Mike Crispin: I do. Yeah. And now, with what they just announced — their AI studio, similar to what Google is doing and others — there was a little video where they got a bunch of people that were just small business owners.
Mm-hmm. They said, come sit down at this table and write a story about an app you want to build. And it built great prototypes. And this is Canva. This isn't —
Kevin Dushney: Like, my wife started using Canva to write Instagram posts for her business. And that's what it was good for.
Mike Crispin: It looks great.
Yeah. It's good. And it's good enough, right? And I think that's — but are you making the design decisions? Are you able to make something truly unique that's yours? Well, it's good enough; I put my [01:10:00] stamp on it, my taste. I liked it earlier when you said "taste." I feel like with a lot of those tools, it's just: do I have a unique enough idea, or a unique enough approach?
I don't need to know how it does it or how it works. Yeah. But I —
Nate McBride: I don't, I don't. Okay, I'm just gonna take the devil's advocate approach here, because I don't want you spending all your day thinking about whether it should be red or green. Like, your background is not in UI, your background is not in this or that.
If your background is in style and design, then great: red or green, what should it be? And don't spend all day thinking about it. Yeah. But when you're talking about this idea of using somehow this sixth sense that they're supposed to develop overnight about taste — it's farfetched to me.
Mike Crispin: Oh yeah.
Yeah. Maybe taste — I mean, I think "taste" is used basically as a way to say creativity, or uniqueness. Yeah.
Nate McBride: It's when you generate [01:11:00] something — if you generate something from zero, and you have no — you're just gonna press a button, and whatever comes out, you're gonna use.
Yep. That's one element of taste. If I, in my mind, have a vision — okay, I want to build a website and I want to do these things — and then I'm able to have an AI build it the way I dreamed about it, yeah, that I think is a more correlative form of taste than me just saying, hey, gimme a picture of a smurf, and then it fucking makes a picture.
Mike Crispin: Yeah. I think on the creative side — not so much the business and productivity side, but the creative side — we were talking about this, not at Bio-IT World this year, but at the previous Bio-IT World. I wish John can
Nate McBride: remember.
Mike Crispin: I, I think it's, it's, um, it's gonna be curation. That's, that's what the, the human element of all this is gonna be.
So if you take all these pictures that are excellent, you know, great pictures that are created by — first of all, I think at some point people aren't gonna know the difference. You can say a human made it or not; you're really not gonna be able to know, and so forth. But it's how you bring all of that AI-generated content into
[01:12:00] a story, or into a set of music, or into a bunch of moving pictures, and how humans will assimilate that. And then AI will probably be able to do that too at some point. But it's sort of: where does it go from there? And I think — we always use the DJ analogy, right?
We've talked about this before. Mm-hmm. You know, DJs fill 80,000-, hundred-thousand-people stadiums. They don't write the music. Some of them do — some have created the music — but the majority of them just take these. Now Mike does an intro with just an AI tool. They pull it together and create this whole new experience.
And I think that might be where art and some of the creative elements are gonna go.
Nate McBride: Yeah. Okay. Okay. Well, the IT leader —
Mike Crispin: I know we go way off [01:13:00] topic, but I'm just saying, IT leaders tomorrow —
Nate McBride: — have to deal with the autonomy decision of art. Unless of course they're working in a company that deals specifically in art.
The point, though, is you can apply this idea, Mike, back to the same principle of sameness. On a principle level: if I can go out and buy a stereo system that has AI built into the amplifier, so it's automatically adjusting based on a whole bunch of different factors of how I listen to music,
I eventually begin to lose this capability. You know — again, that's the commercial side; on the consumer side, we already know people gave up autonomy fucking years ago. Right. It's gone, absolutely. But I think there will be a time when we either feel nostalgic for, or feel like it was always better, when we didn't have an AI intervention for everything that we wanted to create.
There was this, we will
Kevin Dushney: So, like back to vinyl, right?
Nate McBride: Yes. Back to vinyl. Back to vinyl — you're gonna call it that. Back to vinyl. Tube amps. [01:14:00] Just give me a fricking seventies van with a big unicorn on the side, some tasty waves, and a nice buzz. You know, Mike, you said something and I — go ahead.
Mike Crispin: I was gonna say, I think absolutely there's a huge element — it might be nostalgia — that just brings us back to where we need to be.
Mm-hmm. And, uh, as more of this stuff, you know, if it's truly dystopian, it'll be
Nate McBride: too late when it happens.
Mike Crispin: Yeah.
Nate McBride: The AI won't let it. You said something earlier, Mike. I wrote it down and I wanted to ask this question. And we're not gonna get to the rest of the podcast — maybe that's okay. We'll just chip away at this.
It's a chipper. But: should explainability be a key requirement? So if you go back to Tobias's manifesto, and what LinkedIn's saying, and what we've talked about so far — mm-hmm — in terms of the IT leader's decision on autonomy, and my sort of IT-leader [01:15:00] framework for making a decision, should I also have this one?
I add one more bullet that says: explainability must be a key requirement. So for anything that you're gonna be building, everyone should be able to explain the query versus the outcome. No one should have a lack of explainability, or it's a failure.
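Nate's proposed bullet — no explainability, no deployment — could be enforced as a hard gate rather than a guideline. A minimal sketch in Python; the class, field names, and gate logic are illustrative assumptions, not a real policy engine:

```python
from dataclasses import dataclass

# Sketch: explainability as a hard requirement in the autonomy
# framework. All names here are illustrative assumptions.
@dataclass
class AIDeployment:
    name: str
    builder: str           # the SME accountable for the system
    explanation: str = ""  # plain-language "query versus outcome" account
    rollback_plan: str = ""

    def passes_gate(self) -> bool:
        """Fail unless someone can both explain it and undo it."""
        return bool(self.explanation.strip()) and bool(self.rollback_plan.strip())

bot = AIDeployment(name="ticket-triage-ai", builder="kevin")
print(bot.passes_gate())  # → False: no explanation or rollback plan yet

bot.explanation = "Classifies tickets with a fine-tuned model; see runbook."
bot.rollback_plan = "Disable the webhook; tickets fall back to the manual queue."
print(bot.passes_gate())  # → True
```

This matches Mike's reply below in spirit: the consumer doesn't have to pass the gate, but the builder does, or the thing doesn't ship.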
Mike Crispin: I think the builder should be able to explain it.
The user — the consumer — may not need to explain it. But the builder is the SME. Yeah,
Nate McBride: I mean, the SME. And I would —
Mike Crispin: — argue that there are a lot of builders who, in the AI context, won't even know how it works. They'll just have it spit something out for them; they'll put some data in, something good will come out the other end, and they'll be like, this is great.
Nate McBride: This works. It's like when you eat four Chalupas at Taco Bell and something good comes out the other end.
Mike Crispin: Just like that.
Nate McBride: Just like that.
Kevin Dushney: Awful. [01:16:00]
Mike Crispin: But at the same time — is it important? I would say yes, from a data integrity perspective, because you're gonna have to explain how you get from point A to point B, and I think that's true in many industries.
So yes, I think it's very important. Now, if you're creating pictures for a birthday card that's gonna go to the CEO, then, you know, you don't have to tell 'em how you made
Nate McBride: the picture. Well, that's semantic context. Like, if I'm gonna write: hey AI, Mike's cat just passed away,
write me a condolences letter from Mike — I'm not gonna say, hey Mike, here's this letter for you, by the way, generative AI wrote it. That's the semantic context effect. We tell people all the time at the company — we have a slide for it in our training. Is that different than the "don't tell people that you use AI" for very specific things like condolence letters?
Don't ever say that. "This was not —"
Mike Crispin: "— written with Grammarly. This was not written with [01:17:00] Grammarly."
Nate McBride: Notice I left in all the spelling mistakes on purpose; that proves it was me. It's kinda like when you — that's gonna become
Kevin Dushney: part of your prompt: introduce random spelling errors, and
Nate McBride: grammatically incorrect sentences.
Mike Crispin: It's like when you bake a cake at home and give it to someone, versus getting a Twinkie, putting a candle in it, and saying: look how hard I worked. Because, I mean, work is a big part of —
Nate McBride: Okay, we have to work on your similes a little bit, but I get your point.
Mike Crispin: Yeah.
Nate McBride: Okay. Like
Mike Crispin: Work in progress, work time.
And this — that's a great case
Nate McBride: for Gen AI. By the way, next time you're gonna say that, just go into Gen AI and gimme a like-versus-like comparison, so I can make my metaphor clear on the podcast.
Mike Crispin: Cake, Twinkies and candles, and cake baking. Yes.
Nate McBride: All right. So that — back to [01:18:00] the point. We gotta bring it all the way back here.
Mike Crispin: Bring
Nate McBride: it in, baby. So: the IT leaders and the autonomy framework. And the last point was, does AI implementation build or erode internal capabilities? Well, if you're working for Shopify, you supposedly already have all the internal capabilities you need. If you don't — say you work for my biotech company, or any number of other companies — you probably have people that don't have the skill sets yet.
But yet you're gonna go ahead and deploy this kick-ass thing, and you're just gonna expect — like at Moderna, for instance — that everyone in the company becomes GPT-fluent overnight. So the advent of this AI zeitgeist is fundamentally changing what it means to be skilled in IT. The critics in the journalist ethos would have you believe that we're seeing a shift from technical execution to AI orchestration.
They'd have you believe this. They write it; it's in the headlines. It's just very far from true. I mean, anecdotally [01:19:00] speaking — oh my God, this big massive company over here is saying they use AI and they've got all these great gains. That's anecdotal. No one's actually got any metrics on this shit, let's be honest.
There's no real data to support any of it — at least that I have found. I've found lots of surveys, I've found lots of AI polls, but I haven't found any metrics yet. So all these things — they just want you to believe the hype, to keep the hype going. It's a big hype cycle. However, anecdotally, I would agree there's a glimmer of truth here.
In the past, IT value — at least in my past — was largely tied to the ability to implement and manage technical systems. Period, point blank, full stop, right? Can you put this in? How long will it take? What will it do when it's done? And can you please manage it? Or instead of "please," they would just say: just fucking manage it.
Sometimes someone would say please. But today, value is coming, [01:20:00] in some ways, from the ability to effectively direct and leverage AI capabilities. It's less about "can you build this?" and more about "can you effectively instruct an AI to build this?" Now, again,
Kevin Dushney: — purely anecdotal. Mike's simile, by the way —
Nate McBride: What's that?
Kevin Dushney: I fixed Mike's simile. It's in the chat.
Nate McBride: Oh, you fixed Mike's simile. Okay, hold on. We're gonna pause and go back to this.
"A homemade cake is like a heartfelt conversation with an old friend, while a Twinkie is like a hasty text message. One offers depth, warmth, and personal attention that lingers in the memory, while the other provides quick convenience but leaves little lasting impression." Okay,
Mike Crispin: so Mike, what the hell do we need humans for?
Nailed it, man. That's beautiful. That's beautiful. Holy shit.
Nate McBride: Next week on the Calculus of IT podcast, there won't be any humans here; we're just gonna press play and you can listen. That's good, Kevin. I like that. Well done. [01:21:00] So back to the point. Effectively, the three of us all came through the age of build versus buy.
That was what I was gonna get at. Yeah. Build versus buy. Okay. So the rate of growth of any of this could have implications for IT autonomy. On the one hand, AI could democratize technical capabilities, allowing smaller teams — like my team of two, Mike's team of three, Kevin's team of four —
we're up to six now, actually, six — to accomplish more without specialized skill sets. Yeah, so this increases autonomy by reducing dependency on hard-to-find talent. Awesome. So we have an autonomy increase here. It's something I'm exploring myself these days. I'm training myself constantly, and just trying to stay current with what's out there for me to get trained on is very, very hard to do.
I'm probably behind at this point, if I'm honest with myself, but I stay with it as much as I can. So imagine trying to keep a [01:22:00] small operational IT team current, all at the same time. If I try to take my training load and apply it to the only other person in my department, that would be a quarter of IT
lost to training on AI each week. And if you're behind —
Kevin Dushney: That does not bode well for the rest of us.
Nate McBride: Me? Yeah. Oh, I'm behind, man. I got a Descript email today talking about their new, um —
Kevin Dushney: Kicking you off the platform for overusage.
Don't be like —
Nate McBride: DraftKings: don't come back, we're not telling you why you're gone, you're cut off, you didn't do anything wrong, we're kicking you off. They sent out an email today, and every single time I get the Descript email each week, I'm like, oh shit, I can't read this, 'cause if I read it, it's like five new things they've developed.
And I'm like, oh my God. And then of course Zapier and Make and everyone else send me emails like, hey, we have 45 new AI things you can do this week. And you're like, ah, stop. So anyway, [01:23:00] imagine trying to keep these teams current, right? So companies with small teams can use AI for system development. Sure.
It's a good skill to have in your function — everything from updating legacy systems to advanced workflows — even if they're not strong in those functions. Assuming, of course, they know how to actually use those capabilities: they have explainability. So if I'm going to tell a generative AI, listen, go ahead and fix the firewall and make it do these things, or gimme the scripts to do that, and it's like, here you go —
here's a hundred lines of code — if I can't figure out what the hell I'm about to execute, that could be bad. It will be bad. So just think — and I put this question in here — how long did it take IT departments to transition to the cloud? I mean, by all accounts, some did it pretty quickly.
Some took 10 years. Some are probably still doing it. Yeah. So now [01:24:00] imagine you have to tell your whole department: you have to become proficient on AI.
Kevin Dushney: That same department? It was different, right? Like, cloud was a force multiplier, a huge change for people that had learned how to manage servers. Dude, the parallels are crazy. The server huggers — you still find them, how many years later?
Nate McBride: We had years and years. I mean, every Gartner I went to in the early
teens was about digitizing and saving money by going to the cloud, speeding up your process, and going to the edge, and IoT, and up your butt, and blah, blah, blah. Container this, Docker that — everything was about saving money in the cloud. Yeah. Guess what we all found out? It's fucking expensive to go to the cloud.
Yeah. It's very subtle.
Kevin Dushney: Not capital. That doesn't mean it's not expensive.
Nate McBride: Yeah, yeah. Uh, and everyone's like, [01:25:00] oh, speaking of nostalgia, maybe I should put a physical server in. They're awfully cheap right now. But anyway, but if you're not careful, AI can create a dangerous skills gap. Yeah. If your team becomes dependent on AI to perform core functions without understanding the underlying principles, eeg, how they work, you're essentially trading one dependency on specialized staff for another on AI providers.
That was completely redundant statement, but important to just reiterate. So I think it's easy to see how our future teams, um, can still adopt AI to help us, but those that simply no longer troubleshoot things and only use AI systems to do that are completely screwed. Um. When the ai, when the AI service is out, you're basically dead.
It's, it, it's like when the cloud, when your wan pipe is down, or when your box [01:26:00] environment goes down or Google's down or slack's down, you're screwed. Well, you just basically, it's the same principle when the AI service goes down. I have no idea what I, what to do.
I don't know what this button means; it's blinking red. So, we talked about force multipliers — and we've said this a thousand times on the show — but the key to maintaining autonomy is to use AI as a force multiplier, not a replacement for fundamental understanding. From a lot of things I'm reading from the more intelligent, pragmatic AI practitioners, including my wonderful colleagues here —
this is what people are starting to think: use it as a tool, not the tool. AI should help your team work at a higher level, not serve as a crutch that atrophies their core skills. And go back to the cloud: how many people were like, well, I'm a fricking Windows Server admin, now what am I gonna do? Well, you can learn AWS. That's an idea.
Mike Crispin: [01:27:00] What you just said — "learn AWS." I think one of the neat things about getting your team on AI is, if they depend on it, use it, and, like you said, don't understand how it works, the key is that they learn from it. They use it and they learn something that they can apply.
And maybe it's just taking a slightly different approach when, you know, you find someone just going to AI to get the answers for everything. Mm-hmm. You want to put it in the context of —
Nate McBride: Well, it's explainability again.
Mike Crispin: You want them to learn and retain some of what they're seeing, as opposed to just — I mean, it's a Google search, right? It's the new Google search, so people will go and just Google it. But I think the difference now is that it's giving people step-by-step.
I was in it today trying to look at something, and it didn't quite line up with what I needed. So I added a little caveat, a little [01:28:00] different spin on it, and let it know what my problem was, and it spit something else back out.
And now I've learned something new about how a certain API works, for example. But if you just use it and throw it away and forget what it taught you — which, I think, in the heat of the moment people probably do — you've gotta use it as a learning tool instead of just as a problem solver.
Kevin Dushney: Yeah. I mean, if you don't know how to code and you're using AI to be a coder — I've found this; like you guys saw, creating that PowerShell at the bar. It's awesome. Well, yes, it's awesome, but it was also giving me hallucinations for calls that didn't exist.
Right. And multiple people on my team have run into this, where somebody proposed a feature on a board somewhere and the AI scooped it up and made it so. Like, [01:29:00] yeah, no, this actually doesn't exist. But here it is in your code, and it keeps retrying: this doesn't exist — oh, here it is, here's a better version.
And I've tried, like, four different models. Yeah, same thing. But now you don't know how to fix it, 'cause you don't know how to code, 'cause you're really reliant on the AI. That's right — I'm not a PowerShell expert.
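Kevin's hallucinated-cmdlet problem suggests one cheap safeguard: before running generated PowerShell, cross-check the Verb-Noun tokens it calls against commands you know exist. A rough Python sketch; in practice you'd build the known set from real `Get-Command` output on your machine rather than this illustrative hardcoded list:

```python
import re

# Known cmdlets would come from `Get-Command` on a real box; this
# hardcoded set is purely illustrative.
KNOWN_CMDLETS = {"Get-Process", "Get-ChildItem", "Stop-Service", "Set-Location"}

# PowerShell cmdlets follow a Verb-Noun naming convention.
CMDLET_RE = re.compile(r"\b[A-Z][a-z]+-[A-Za-z]+\b")

def suspect_cmdlets(script: str) -> set:
    """Return Verb-Noun tokens in the script that aren't known cmdlets."""
    return set(CMDLET_RE.findall(script)) - KNOWN_CMDLETS

generated = """
Get-Process | Invoke-MagicOptimizer -Level 11
Get-ChildItem C:\\logs | Stop-Service
"""
print(suspect_cmdlets(generated))  # → {'Invoke-MagicOptimizer'}
```

It won't catch wrong parameters or bad logic, but it turns "run it and see" into "the model invented `Invoke-MagicOptimizer`, stop" before anything executes.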
Nate McBride: Yeah, nor do I want to be. PowerShell's a great example. See, the thing is, all three of us know how to launch PowerShell, work in PowerShell, create batch files.
We can do a lot of the PowerShell work we need to do on a day-by-day basis. We understand its context, its place, its usage in the world. Yes, that doesn't make us pros, but it makes us savvy enough to know how and when to use it. So when I'm going to look for a particular script, mm-hmm, and get assistance with a script, I can use generative AI to help me write that script.
Yes — to solve the problem. Or when Windows throws up a 0x00000999, [01:30:00] you-know-FU error — I don't know what the fuck that thing is, I've never seen it before — well, time to hit not only Gemini, but also then get a solution. Yep.
Kevin Dushney: That's a bad dock firmware.
Nate McBride: Yes, it was that — what was it called? It was the Stuxnet virus. Stuxnet. Sorry, I can't believe I'm saying this. Well, I can, 'cause we're on the podcast — I can say whatever I want. But you have to find ways to ensure your department holds onto the current skill sets while also learning the new ones. Yeah.
It's like fucking IT Leadership 101: learn the new ones, augment what you have with everything that you can possibly put in your hands, and don't throw out what you know in exchange for the miracle drug. That means investing in AI-resistant skills — the foundational knowledge that lets your team effectively direct, evaluate, and, if necessary, override AI systems.
And if you've never read The [01:31:00] Checklist Manifesto by Atul Gawande, you need to read this book, 'cause in it he talks about exactly this: pilots don't fly — well, they put the plane in autopilot — but they have a book. Astronauts have a book. Nurses, doctors — they know what to do when the thing doesn't work.
But most things work on their own. Yep. Same exact principle. Yes, go ahead and do the fucking AI, but also know what to do if it doesn't work, and how to keep the business running perfectly. It's like backup for your AI plan. Yeah. It's almost like AI disaster recovery, now that I think about it.
I'm gonna write that in here. Yeah. AI DR. Brain DR. I'm gonna give you credit for that one too. So, these basics are things the three of us would hire for anyway: systems thinking, critical analysis, architectural principles, deep understanding of business context. In other words, [01:32:00] we would
not hire somebody who is like, no, I know how to use AI, I'm good. No, no, no. You've got to already have been through hell — been through the process — to understand how what you're about to do impacts everybody else. You have to have the context and the awareness. Plus, it's good to also know AI, yeah, to add on. So let's see here. I think we can get through one more section, and then we're gonna — sure —
maybe pause and go to a part two next week. How's that sound, Ruby? Okay. So, my favorite topic: data sovereignty. I love talking about data. We could have a whole podcast about data. We should — to coincide with our podcast about music. We'll have multiple podcasts. Anyway: in an AI-driven world, data isn't the asset; it's the foundation of the asset, or the value creation. And this sounds kind of backwards, but I liken it to
And this sounds kind of backwards, but I liken to [01:33:00] this, to this equation. Um, if Mike and I are both working for competitive companies. And Mike's like, oh, I'm so fucking good with ai. And I'm like, oh my God, I'm so good with ai. And then we both go into AI and we're like, give us a strategy for our company.
And it's like, here's your strategy, but Mike and I get the exact same strategy, and then we're like, fuck you. I got it first. No, I got it first. And then we're trying to do it. We sue each other and it goes on in the court for years and years. Um, that's not data sovereignty. That's just we, we didn't really get a chance to capture the sovereignty of what it is we were creating.
In other words, we weren't creating the asset. We were trying to create value, and we just forgot about the asset. The old saying of data is the new oil. It doesn't quite capture, it's more like data is the new land. So whoever controls the data, controls the territory. I remember I asked you before about, um, preemptive queries, uh, and the question about [01:34:00] like, what does a query mean?
Well, let me just, before we get into this last part, throw this out to both of you: when will the time come when companies protect their queries? And I mean — is it already happening?
Mike Crispin: I think it's already happening for some. Sure. Yeah.
Kevin Dushney: It's part of IP already.
Mike Crispin: Yeah.
Kevin Dushney: Yeah.
Mike Crispin: Like when — remember the prompts you were writing, Nate, for the virtual CIO?
Nate McBride: Yep.
Mike Crispin: Let's say, you know, some magic came out of that. That could be a proprietary prompt. That's the data that you were loading in, and that was unique to your background and whatnot.
Questions are gonna be more valuable than answers, 'cause everyone can get the answer. It's what you put in the front that brings the good thing out the back [01:35:00] end. And I think that's gonna be — that's another bad analogy, or metaphor, whatever the word is; I have to look that up too.
But ultimately, I think the question is gonna be what you protect, and the answer is what you tell everyone about.
Nate McBride: Well, you wouldn't. But if you're walking into the executive team meeting tomorrow and you ask a big, fancy question, and everyone who's knowledgeable talks about it, and you get an answer — where, other than meeting minutes potentially,
AI Trance Bot: mm-hmm.
Nate McBride: or the Zoom transcript, is it captured that you asked that question? Right. Mm-hmm. By the same principle, if I write a kick-ass query — like a super prompt, the super prompt of all super prompts, so fricking amazing that every time I write it I get, yep, a billion dollars — I'm going: sounds good, lock that in the vault.
I don't want anyone else to, to get that prompt that I wrote. Right? [01:36:00] Yep. That's a data. That's data value creation. That's more than just an asset. And so I think this is gonna create a, a tension for IT leaders. 'cause it's no different than any other data. Yeah. But it now adds a new wrinkle to it. So on the one hand, AI requires massive data sets to be effective.
Okay, great. If you're gonna run and build your own LLM, if you're gonna build integrated AI, you need a big, giant data set. And the more structural control you have over the data you put in that set, obviously the better the AI performs. In theory you have less hallucination, like a NotebookLM you've created in sort of an explicit LLM environment.
No problem. But, and this is me being Mr. Obvious, once you share your data in any way whatsoever, you're ceding some control over it, even with strong contractual protections. The reality is that the data used to train AI models creates value that the owner may not fully control. And it's gonna get very blurry when I go ahead and use an off-the-shelf agent to build a [01:37:00] model using my proprietary data to create an outcome that generates revenue.
Um, now I can state with a certain degree of reasonableness that no data went out to that vendor, but still, they had a big hand in the outcomes of what I did. So, my AI engines are not siphoning my data; that's just one of the biggest AI myths, that that would happen. But can you, can you imagine.
Can you imagine a scenario where, again, Mike and I are both querying the exact same question and get nearly the same result? And we were using, you know, external data sets. Mm-hmm. Or we both, well, we both mined the internet for 300 of the same documents and then did a NotebookLM query that was explicit.
And, you know, in this particular scenario, we still got the same [01:38:00] answer. Like, who owns the answer? We went out and both publicly mined data to get the same answer. Now there's an AI element to this that didn't exist before. Um, and I think it's like money laundering.
Yeah. To a degree. It's,
Mike Crispin: And it comes down to cleaning the data from the hundreds of sources.
Nate McBride: Hundreds of sources, right. And then claiming that it's yours. You're getting the value at the end, right? But even with an explicit LLM, you really didn't have any control along the way.
Even if I go and build my own integrated LLM in a very, very private little corner, and I go ahead and use some off-the-shelf agent to do it, there's still an element of, what belongs to me? What's my [01:39:00] thing in all of this? So it's a data-derived IP problem. If an AI model is trained on your proprietary data and then used to generate insights or content, who owns those outputs?
I've been, I've been to a couple,
Mike Crispin: Go ahead. I was gonna say, I think part of it is, you know, when you sign up or you're leveraging resources from one of these companies, it's in your agreement as to who owns the data when it comes out.
Nate McBride: Yeah. But are they even capable of saying that?
So I've been to a couple of the Hanson Wade conferences, and I went to the Ropes and Gray conference last year on this point, and even they don't have a clear understanding of these legal boundaries. Um, and again,
Mike Crispin: I think in some instances, at least in a business context, it's when your company provides all the data. Or, if you are bringing data in from a [01:40:00] source or from a third party, then you are responsible, as the data owner, for sourcing that appropriately.
Just like if you use a data set with Veeva, you need to pay the data owner, you know, to some extent, because that data's being used within the system. I think similarly in AI, you may be creating the data that goes into these AI systems, in which case you are trying to protect your own proprietary information.
But if you are taking other data sets, um, you are legally obligated to
Nate McBride: disclose that, I believe. Um, well, probably. I mean, it sounds plausible; I don't have anything to back that up as truth. Yeah. But, mm-hmm. Um, I mean,
Mike Crispin: But, like, Kevin, in a data mart scenario, you've probably been through that on the commercial side, and Nate, you've probably been through that, right?
Where you've got
Kevin Dushney: Yeah, you have to buy it. I mean, just wrangling those third-party, tri-party agreements. Oh
Nate McBride: yeah.
Kevin Dushney: Right.
Nate McBride: Yeah. I [01:41:00] remember, I remember buying all that data, but yeah, that was data that one company did own and was selling to you. That's right. But the three of us could have all gone and bought the same dataset.
Yeah,
Kevin Dushney: that's right. That's right. Exact same data set. But
Nate McBride: But those contractual agreements were that we were buying a data set. You know, here it was, a model data set, raw data: you now go do with it whatever you want, right? Mm-hmm. Yep. Um, so the legal implications were more along the lines of, well, how good is my commercial analytics team at working on this data versus yours?
Mine might be better. Sure. Maybe I get a faster insight. In this case, again, what I'm trying to get at is, if you and I asked the same questions on the same service, hey, we're gonna get the same answers. And these are not data leaks, though they could probably be ascribed to that: oh my God, how did they get our information?
Um, but we used the same queries and the same engines, and the AIs are essentially cross-pollinating [01:42:00] insights across clients in a way that is pretty much impossible to trace. So,
Kevin Dushney: but what if we all hired the same consultant?
Nate McBride: Well, what we all need to do is hire the same lawyer. There you go. I mean, I'm not gonna get into it, this isn't a tort reform podcast, but ultimately this is one of those things where you can see how difficult it's gonna become for an IT leader to say, no, no, that's our data.
Yeah. I think that's my data. Uh, or is that our data? So
Kevin Dushney: is it aesthetic or Right, yeah.
Nate McBride: Yeah. So, was it made in Canva? So, running AI models in your own environment, rather than sending data to external services, mm-hmm, is one way to alleviate the stress and give you back some autonomy.
But of course, then you have the building of the AI models in your environment. The running of them is the easy part; the building of them, [01:43:00] not so much. Mm-hmm. And there's training models across distributed data sets without centralizing the data itself. Also, uh, trivial in the execution, very difficult in the creation.
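The distributed-training idea here is roughly what federated learning does. A minimal sketch of federated averaging, assuming a toy one-parameter model and synthetic per-site data (all names and numbers are hypothetical, not any vendor's API):

```python
import random

# Sketch of federated averaging: each site fits a one-parameter model
# (y = w * x) on data that never leaves the site; only the fitted
# weights travel to a coordinator, which averages them.

def local_update(w, data, lr=0.3, epochs=50):
    """One site's gradient-descent pass; the raw (x, y) pairs stay on-site."""
    for _ in range(epochs):
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(sites, rounds=5):
    """Average the locally trained weights; raw data is never pooled."""
    w = 0.0
    for _ in range(rounds):
        w = sum(local_update(w, site) for site in sites) / len(sites)
    return w

random.seed(1)
true_w = 3.0

def make_site(n=40):
    """One 'company's' private data set."""
    site = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        site.append((x, true_w * x))
    return site

sites = [make_site() for _ in range(3)]
w = federated_average(sites)
print(round(w, 3))  # converges close to true_w
```

Each round, only the weights cross the site boundary; the raw records never leave. That is the appeal, and also why the building (coordinating rounds, stragglers, model poisoning) is the hard part.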
There's adding noise, uh, for obfuscation of key data. I remember when I took that Wharton class, uh, two years ago, maybe three years ago, on crypto. There was this idea that adding random noise to transactions would help prevent extraction of certain details, in a way that kept currency exchange decentralized.
Same principle, right? Um, you're still able to keep the data statistically useful, but you're adding just enough noise to prevent extraction of the details. You're encoding it, basically. Mm-hmm. Then there are explicit contracts about how your data can and cannot be used. So, data transfer agreements 101: who's got 'em, how do you use them, [01:44:00] and do you include language in your DTAs about, uh, AI portability?
And then lastly, the most obvious, which is only sharing what's absolutely necessary to get the job done. Of course, you know who came to the party and disrupted this entire fucking list: incumbent partners and their collaborative AI layers. All your little platforms, your Offices, your Airtables, your Slacks, your Boxes, your Egnytes, they all have collaborative AI, and you did not get the opportunity to approve a terms of service for their integration of AI.
You can turn it off in the admin console, a little tiny switch to turn it off. Mm-hmm. Guess what? Still running. All you're turning off is people's ability to use it in the UI.
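The add-noise idea above is essentially differential privacy: add random noise calibrated to a query's sensitivity so the aggregate stays useful while no single record can be extracted. A minimal sketch, with entirely hypothetical data and an assumed privacy budget (epsilon):

```python
import random
import statistics

# Sketch of the "add noise" idea: Laplace noise calibrated to the
# query's sensitivity keeps an aggregate statistically useful while
# masking any single underlying record.

def laplace_noise(sensitivity, epsilon):
    """Sample Laplace noise with scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    magnitude = random.expovariate(1.0 / scale)
    return magnitude if random.random() < 0.5 else -magnitude

def private_mean(values, lo, hi, epsilon=1.0):
    """Release a noisy mean; clipping to [lo, hi] bounds the sensitivity."""
    clipped = [min(max(v, lo), hi) for v in values]
    sensitivity = (hi - lo) / len(clipped)
    return statistics.mean(clipped) + laplace_noise(sensitivity, epsilon)

random.seed(7)
salaries = [random.uniform(60_000, 180_000) for _ in range(500)]
true_mean = statistics.mean(salaries)
noisy_mean = private_mean(salaries, lo=60_000, hi=180_000)
# The released figure tracks the real mean but never equals it exactly,
# so no individual salary can be reverse-engineered from it.
print(round(true_mean), round(noisy_mean))
```

Tightening epsilon adds more noise and stronger masking; loosening it does the reverse. Real deployments also track a cumulative privacy budget across repeated queries.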
You didn't get a chance, though. I remember getting a contract from Box saying, uh oh, you have to sign [01:45:00] this new agreement for the use of AI. Just rolled right in. Nope. Just pushed it. Just pushed it. And, and
Kevin Dushney: that's, that's the problem. It's just showing up
Nate McBride: And you, yeah. Microsoft's pushing Recall out now in the next big update to everybody, and no one got a chance to sign up.
What you did do, though, with Windows, is that little tiny TOS that you skimmed through and didn't read way back when you installed Windows 11; it gives them the right to do this to you. So all the things I just said that IT leaders who are concerned about data sovereignty should do? Oh, they all go out the fucking window
when it comes to collaborative AI. You have no control. Yeah. So you can either not use systems that have collaborative AI and go back to record players, or you can figure out a way to deal with it. And the goal isn't to hoard all your data; that would ultimately be missing out on AI's benefits. The goal is to make conscious, [01:46:00] strategic decisions about what data you share, with whom, and under what conditions, so that you maintain control of the crown jewels.
And I'll pause right there. Thoughts?
I just dumped a lot on you.
Kevin Dushney: Yeah. I'm trying to think how to answer.
Nate McBride: Um, well, just think of it this way, Kevin. I mean, ultimately, data sovereignty, right? Today, if people ask a prompt, it's not gonna be the prompt that saves your company or makes your company a billion dollars, so you're not super concerned about it.
Right? Sure. And if someone gets an output from a prompt, uh, maybe it'll be integrated into some sort of document or PowerPoint, whatever, but no one's transforming the recipe. So maybe say it's not a big problem. But fast forward three years from now, and perhaps your general counsel's in there coming up with this amazing prompt or something.
That's probably a bad example, GCs won't do that, but somebody else is coming up with some sort of super-great [01:47:00] prompt or something. The scientists, yeah. Or they want to put all the secret sauce into a private LLM, but then use an off-the-shelf AI agent to work with it. Mm-hmm. Um, like, all those assurances, all those controls, are they, are they possible?
I mean, I think they're idealized, of course.
Kevin Dushney: I don't think we know enough to answer yet.
Mike Crispin: Yeah, yeah. I agree, too early.
Kevin Dushney: Yeah. I mean, one thing that's pretty tangible right now, though, that we've discussed is, uh, you know, guidance from the US Patent Office around proving there was sufficient human intervention or, uh, contribution, is what I mean, to the, yes,
Nate McBride: that's the line.
Kevin Dushney: Yeah. Contribution, and how, [01:48:00] including, like, the prompting: how did you get there? You showed the failed prompts and refinements to get to the output that led to the discovery. That blew my mind, having that burden of proof. And it's also subjective: like, who's assessing how much?
Nate McBride: It's also non-repeatable, right?
Kevin Dushney: Yeah, exactly. You're gonna get a different answer. And we know that if you ask the same question or use a different model, you're gonna get a different answer.
Mike Crispin: Yeah.
Kevin Dushney: That seems more tangible now. The other question, I think, the jury's still out on. Um, don't know yet.
From my perspective.
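Kevin's burden-of-proof point suggests one practical control: keep an append-only, tamper-evident log of prompt iterations so the chain of human refinements behind an output can be shown later. A minimal hash-chained sketch (the class and field names are hypothetical, not any real product's API):

```python
import hashlib
import json
import time

# Sketch of a tamper-evident prompt log: each entry embeds the digest
# of the previous entry, so the sequence of human refinements that led
# to an output can be re-verified later.

class PromptLog:
    def __init__(self):
        self.entries = []

    def record(self, prompt, note=""):
        """Append one prompt iteration, chained to the previous entry."""
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        body = {"prompt": prompt, "note": note,
                "ts": time.time(), "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "digest": digest})

    def verify(self):
        """Recompute the chain; editing any entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("prompt", "note", "ts", "prev")}
            if e["prev"] != prev or e["digest"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["digest"]
        return True

log = PromptLog()
log.record("Summarize the assay results.", "first attempt, too vague")
log.record("Summarize the assay results table by cell line.", "refined")
print(len(log.entries), log.verify())
```

Editing any earlier entry breaks every later digest, so the log can be re-verified at filing time; a real system would also sign entries and capture the model and settings used.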
Nate McBride: Yeah. Yeah, for sure. No, I mean, the way we end up with this is, you have to think about it a lot as an IT leader. Your data sovereignty up until this point in time has been about unstructured data, [01:49:00] email systems and databases, data that exists in platforms.
It's been a relatively, um, straightforward scope to control the sovereignty of. And now you have something that's literally out of your control. Um, there's no programmatic one-two-three-step process that everyone's going to use in your company for writing prompts. Everyone's gonna do it differently, to their own taste and style.
So we have a new layer of data sovereignty we are all gonna have to deal with. Mike, any last thoughts on that?
Mike Crispin: Well, I think there will be more and more platforms, if there aren't already, that, well, back up for a sec. I do think there will be, [01:50:00] with very precious IP and data at stake, this move to bring data back into a controlled environment, into local systems, when things are very, very, uh, confidential, with their own LLM models that they may purchase.
Yeah. That stay internal. Um, just so the data residency is intact, especially with those very impactful systems where there's a risk of data getting out, or if they need to really pull the model back to see how it is reasoning at a code level. I think that's where open source models will get more traction.
Um, we'll get through this. The cloud platforms, I feel, are starting to hear the noise here too, and having to put another level of scrutiny around how AI is managed in their clouds, whether it's Amazon or Google or Microsoft. [01:51:00] Yeah. Um, and then there's the data governance component, right? So we talked about that with Box a while back: if you're able to point to what data you're pulling in, and that it's the right data, then perhaps the sovereignty of the data is easier to explain, because you can explain where the data you put in has come from and that it is good data to begin with.
Yeah. Um, and those are things that I think continue to emerge and that we continue to learn about. But I do wonder if there will be this, um, massive grab for hardware if there's a lot of concern about security and about ownership of data and data sovereignty. It's like, hey, I can prove it if I have it all here in my basement: it's mine, I'm controlling it. You know, it's not gonna, we don't
Nate McBride: have to worry about it, Mike. We have Microsoft Recall coming out. We can just use Recall and go back to our perception of [01:52:00] what happened. Yeah. Problem solved, easy. And then,
Mike Crispin: And then there's this sort of anonymized, you know, we hear more and more about these aggregated, masked, and anonymized data sets, all these things that have been around for a while but are becoming even more prominent in the AI, yeah,
the AI context. So we'll see.
Nate McBride: Industry 5.0, baby, right here. So, where we're headed: who gets to see what, and decided by whom? That's a good question. Sounds like a great opener for part two. It is. Kevin, I do hope you'll come back next week, 'cause we're, yeah, I would love to, we're just getting cooking now.
Yeah. Got the grease on the pan ready to go. Lard's popping. Um, we love that.
Mike Crispin: Love it.
Nate McBride: Some pop large. Hey,
Mike Crispin: that's not, that's not a rainstorm or a thunder thunderstorm. That's just [01:53:00] someone recording a frying pan.
Nate McBride: Well done. Um, all right, so next week Mike's gonna have some kick-ass metaphors. Mike, I want three metaphors from you next week, to be delivered throughout the podcast. A minimum of three.
Kevin Dushney: All easy. Okay.
Nate McBride: Okay, good. And they should be really well-thought-out, relevant metaphors for the occasion,
Mike Crispin: unlike the other one that was not very well thought out.
Nate McBride: No, no. Like, I could see where you were going, so I got it. Right. It's very, like,
Mike Crispin: bouncing around in my, my mind.
Nate McBride: It was just when the Twinkie came out. Yeah. That's when I was
Kevin Dushney: like,
Nate McBride: that's when I was like, all right. I think you should have flipped them around, and maybe that would've made it better.
Mike Crispin: But I could bake you a cake and you'd be so happy that I did all that work for you. Right? I don't know. She gave me a Twinkie; maybe there's a romantic element to that. But if I just, yeah, people were like, he bought you a Twinkie with a candle in it.
Kevin Dushney: The good [01:54:00] news is blonde had the perfect answer.
Nate McBride: So, yeah. Well, what if our first date, Mike, was you and me sharing a Twinkie?
Then the Twinkie would have more meaning. A frozen Twinkie. A frozen Twinkie. It would have more meaning than the cake. So
Kevin Dushney: How do you know they freeze?
Nate McBride: It's like a totem. Hmm. We don't know yet. So it's the homework
Kevin Dushney: for the weekend,
Nate McBride: for the next podcast. Yeah. Remember, remember, Mike, prompts need to be unambiguous, so just work on your metaphors.
Next week, we're gonna go through the rest of this episode, and there's a lot of great stuff to come, so stay tuned. Um, and then the week after that, we're gonna get into the regulatory landscape, which is gonna be problematic on multiple levels. I wanna remind everybody that if you like our show, give us all the stars on all the things.
Um, donate to Wikimedia. Donate to the ACLU. Donate to Life Science Cares. Donate to the [01:55:00] SPCA. Donate to that little girl on the corner of the street who's selling lemonade her mom made, and she's probably gonna take all her money, but donate anyway. Uh, donate to any cause that's worthwhile. Don't be a dick. Especially, don't be a dick to the hardworking IT folks in your company.
Be cool to them and you'll get paid back in spades. I'm telling you, this is the truth. Hashtag truth. Be nice to animals. Be nice to old people. And finally, no matter what, maintain your autonomy.
Kevin Dushney: Uh, it's harder than ever. And I don't think the topic at the top of the podcast was being very nice to old people.
Nate McBride: Well, exploited, yes. I'm protesting that website, 'cause that is not being nice to old people. No, it's not. And so I'm saying to all of you: call your mom and dad, okay? Don't use this service for $30 a [01:56:00] month to call your mom and dad. And if you are, ask yourself what kind of human you are.
Look in the mirror and say, what kind of human am I?
Mike Crispin: So, so when does, when will someone use that service to have their mom put $27,800 into a Bitcoin ATM, or go get a bunch of gift cards? It's, it's already happened.
Nate McBride: It's already happened. Hey, Grandma. Hey, hey, Grandma. Look, it's me. Hey, look, I'm thinking about starting a, um, a pet ranch for rescued rabbits. I need $27,000.
AI Trance Bot: Oh, I was seeing, is there a
Kevin Dushney: Western, Western Union? Are you
Nate McBride: Western Union?
Kevin Dushney: Yeah.
Nate McBride: Um, so yes, maintain your autonomy, everybody. It's harder than ever in this emerging tech landscape, but more important than ever, too. Mm. Uh, thank you, Kevin, for being on this episode. It was great to have [01:57:00] you back. Always a pleasure to be here. Thanks, guys. Thank you, Mike, for your wonderful mixed metaphors, and I'll see you both next week.
All right. Alright. Thanks, guys.
Mike Crispin: Cold nostalgia, baby. Cold nostalgia.
Nate McBride: Yes. Get those record players out. Next week we're gonna spin some wax.
Kevin Dushney: Just
Nate McBride: like a frozen Twinkie. Okay, later.
AI Trance Bot: [outro music] Binary whispers glow...[01:58:00]
Through the cyber paths we glide, in the circuits we confide, no restraints, no need to hide...[01:59:00]
The zeros we control. Binary whispers in the night, flashing screens so [02:00:00] bright.