The Calculus of IT

Calculus of IT - Episode 36 - 10/23/24 - "The Individual Loss of Autonomy and its Effect on IT" - Part 1

Nathan McBride & Michael Crispin Season 1 Episode 36

In Episode 36, Mike and I revisit a topic from Episodes 10, 12, and 13. In those episodes, we discussed how technological advances are erasing individual autonomy to the point that most people have only the barest shreds of autonomy left. We are revisiting this topic because we want to explore how this loss of individual autonomy is bleeding over into IT strategy, leadership, and decision-making.  This is part 1 of a two-part series, and in Part 1, we explored three of six primary ideas:
1) Setting the tone and context again for the individual autonomy crisis
2) Discussing how that is now very closely mirroring a lot of what is happening in corporate IT
3) And due to this mirroring, what are the net effects on leadership and strategic decision-making?

It's a big lift, and we did our best to get into it.  Next week, we will be exploring the other three pieces of this idea:
4) The Sovereignty paradox
5) How are we becoming conditioned towards strategic responses?
6) What does the future look like for this, and how might we slow it down?

Support the show

The Calculus of IT website - https://www.thecoit.us
"The New IT Leader's Survival Guide" Book - https://www.longwalk.consulting/library
"The Calculus of IT" Book - https://www.longwalk.consulting/library
The COIT Merchandise Store - https://thecoit.myspreadshop.com
Buy us a Beer!! - https://www.buymeacoffee.com/thecalculusofit
Youtube - @thecalculusofit
Slack - Invite Link
Email - nate@thecoit.us
Email - mike@thecoit.us


Mike Crispin 00:01 
We got a bacon candle going tonight. Let's do some dinner. 

Speaker 1 00:04 
What's that? 

Mike Crispin 00:05 
I got my bacon candle. 

Speaker 1 00:09 
Bacon candle for the win. I love it. I was at the SIM Boston Summit today. How was that? I didn't really, I didn't have any takeaways, really. I think if you just took a whole bunch of random headlines and threw them up in the air, there's a good chance it was covered today. 

Speaker 1 00:31 
But for the most part, I felt like on the AI side, the real stories were all glossed over. 

Mike Crispin 00:38 
Or they may not have picked them up yet. I mean, it's usually, I think it was a 

Speaker 1 00:43 
matter of, you know, staying away from any kind of alarmism and just keeping it at the sort of high level. I mean, most of what I took away were the sidebar discussions. Obviously it's a big ribbing, networking kind of event, and whether I was talking to somebody or, you know, eavesdropping, I heard a lot of people mutter phrases that started with "in my experience," or things like "well, as I've come to find," or "in my testing." This kind of theme emerged, and always, the next thing out of the person's mouth was either their satisfaction or dissatisfaction with the AI product they were using. So basically, the bottom line is it's clearly very experiential at this point in time. And you've heard the phrase a thousand times, I'm sure, because it's so overused: "we're building the plane as we fly it," in terms of companies getting off the ground. Well, the thing I've heard, and I've heard it three times in the last 24 hours in three different scenarios, is the one that says, "well, you don't know how your car works, but you still know how to drive it." That's becoming the new way it's explained, right? "I don't need to know how it works underneath, I just need to know that I can drive it." Which is just a falsified way to cover your ass. I actually heard it last night from the CEO of a machine learning company, who tried to make that sort of the takeaway message of this panel I watched. So I'm like, okay, we have to stop saying that, because it's not the same model. You don't need to know how a car works to drive a car, and you don't need to know how a computer works to operate a computer, but generative AI doesn't fall into that same category. You do need to know how it works. 
The metaphor sucks already, and I don't want to make it worse, but you can't assume the airbag's gonna deploy if you run into a wall that way. All right, maybe that's a little too deep a metaphor, but point being, that seems to be the takeaway: it's becoming very experiential, and ten different people in a room can all give you ten different viewpoints of how they use generative AI, and no one's wrong. 

Mike Crispin 03:27 
It's a brand, it's a brand new tool right now, right? Well, not brand new, but it seems to continue to evolve. I mean, look at the NotebookLM stuff. It's amazing what people are going to do with that. 

Mike Crispin 03:36 
And that's pretty basic. I mean, now that people realize, oh my goodness, I can actually take, you know, 40 or 50 PDFs and drop them in there and have a conversation with them, or, you know, generate a podcast like that. 

Speaker 1 03:55 
Here's the part I think people are missing. So today, you can drop a giant bundle of PDFs into NotebookLM and start asking questions. This is only gonna last a very, very brief moment. 

Speaker 1 04:09 
Say in three months, it will be thousands of documents. And then a couple months after that, or a short time after that, it will be: just go ahead and point NotebookLM at a directory. You know, let's just cut the shit, point it at a directory. And then the last stage will be: you know what, we're just gonna go ahead and apply it to your entire domain for you. 

Speaker 1 04:29 
I mean, people that aren't keeping pace with that are going to be the same ones out at these conferences a year from now saying, you know what my experience has been? It's been that it actually doesn't work that great, or I'm terrified by it, or every time I ask it a question, it's wrong. 

Speaker 1 04:47 
Having no idea that perhaps the reason they're getting wrong answers is because their generative AI agent sits atop a mound of shit. So that was one good thing to take away from last night's panel: everyone acknowledges, at least, that if you try to apply generative AI atop an LLM that you've just filled with unstructured data, you're just gonna get crap unless it's completely, totally, 100% curated. Just don't do that, 

Speaker 1 05:21 
it's bad behavior. So anyway, that's where I was today. It was good to see some people I haven't seen in a while, catch up with some folks, and get some free socks, because I was running out of conference socks. 

Speaker 1 05:38 
So, always a positive. Good place to get the socks. Yeah, you know what it was like? I heard the words "Gartner Lite" a couple of times, because Symposium's going on right now, that whole giant scam. 

Speaker 1 05:53 
And so I actually felt SIM was a little bit more genuine, but people were like, no, I can't go to Gartner, but this is Gartner Lite, it's just as good without any of the shitty karaoke and blackout drinking. 

Mike Crispin 06:09 
It's like, come on, it's not shitty karaoke, it's amazing karaoke. 

Speaker 1 06:13 
Honestly, by 1 a.m. it's the most amazing karaoke. 

Mike Crispin 06:21 
I have a little bit of FOMO, not so much for the content, but for all the shenanigans we used to pull there. 

Speaker 1 06:32 
I know. I think, honestly, we should go back someday and just sort of take over, just as a matter of course. But ultimately, you read the headlines that are coming out of it, you know, that some of these online e-zines are covering. 

Speaker 1 06:49 
Apparently... and I'll read one to you right now, actually. This was in CIO magazine. Online, I know. And it's: "CIOs under pressure to deliver AI outcomes faster than ever." Oh, holy shit. 

Speaker 1 07:08 
I mean, I'm like, I don't even need to read the rest of this article. Okay, I'm hooked. I'm going to go ahead and just start putting it out there, who cares how it works. And you read the article, and like most of these articles, there's really very little substance to it, but there's a couple sentences, and I'll point one out. Of course, someone did a survey. 

Speaker 1 07:34 
And so, among the survey results there's a whole bunch of data points that are relevant, but there's one that says the same survey found that 84% of CIOs believe AI will be as significant to their business as the internet. 

Speaker 1 07:50 
Yet implementing proper security measures, which you don't necessarily need with generative AI any more than you already would have for your data quality and content, is often cited as one of the hurdles slowing adoption. 

Speaker 1 08:04 
And that's just... so we had a CISO on this panel last night, by the way, who was, of course, you know, the counterpoint to everybody else, and just, you know, spreading some of that FUD about generative AI that's not true. Like, you know, "oh my god, generative AI is going to leak all your secrets" kind of FUD. 

Speaker 1 08:24 
And so anyway, there's something else it says: there's another major worry rising as CIOs seek to move deliberately, rather than hastily, towards their AI objectives, and that is, and this is in quotes, bolded, underlined: shadow AI. 

Speaker 1 08:41 
Ooh, it's out to get you. 

Mike Crispin 08:49 
Watch out, it's going to take your job. 

Speaker 1 08:53 
I thought there was, I thought there was a shadow AI behind me. Holy shit. 

Mike Crispin 08:58 
Take it all, take it all away. 

Speaker 1 09:00 
Some CIOs are being careful that they don't see the proliferation of what I like to call shadow AI to the point that AI will be unmanageable in the enterprise and will harm the business. Ah, shit, it's over. 

Speaker 1 09:16 
Salesforce's CIO, by the way, this is who they're quoting, advises other IT leaders to ensure they're aligning with the right partners. And we were actually talking about this yesterday: like, okay, go ahead and write a functional requirements spec for generative AI, and then the next day it's invalid, because it's already outdated, so we'll just rewrite it again, 

Speaker 1 09:39 
and go ahead and compare, like, two engines against each other, and then just wait a week. So that's not going to work. He also goes on to say, this is one of the quotes, "sticking one's head in the sand is no longer an option." 

Speaker 1 09:58 
This is a technology that is not going to come and go like many others we've seen, not citing any in particular. It's not a technology that will just fly by and not have meaningful impact in the way we do work. 

Speaker 1 10:10 
Well, that's every technology ever. But anyway, the headline was what really grabbed me because that's, you know, the CIO is under pressure to deliver AI outcomes faster than ever. Dude, we got to get on this shit. 

Speaker 1 10:24 
Also, by the way, I don't know if you saw the other headline, and this one's also kind of funny. Ernst & Young has spent, in the last year, 1.4 billion, with a B, dollars, American dollars, USD, to build a Gen AI platform called EY.ai. 

Speaker 1 10:47 
So, I don't know how you pronounce that. E-Y-A-I? Yippee-yai? Oh, I get it. A-I, get it? As in EY.ai. So clever. The company has its own large language model, which has been released to nearly 400,000 employees. 

Speaker 1 11:14 
But this is a great article, and this is in Computerworld. Because this is an article that has absolutely no substance, but it has all these wonderful statements. Like, listen to this one: "Over the last year, we've harnessed AI to radically reshape the way we operate, both internally and in service to our clients." 

Speaker 1 11:33 
Teams are now able to focus more on high value activities that truly drive innovation and business growth, while AI assists with complex data analysis and operational tasks. And so, I was looking through this article, and I'm like, okay, so I can't wait to read the part where they talk about how slow they were before and how fast they are now, but it's not in this article. 

Mike Crispin 11:54 
It is not. 

Speaker 1 11:55 
It's not, I know, that's a shock. And then they say, so the question: how has EY.ai changed over the past year? And the answer is: "The platform's integration has been refined to ensure it aligns with our core strategy, especially around making AI fit for purpose within the organization." 

Speaker 1 12:19 
"We've put a lot of effort into understanding the nuances of how AI can best serve each business function and industry. We're constantly evolving its ethical framework, with humans always at the heart of the decision-making process." 

Speaker 1 12:31 
Okay, so it's this really, really long article, but at no point does it ever frame anything in the context of how it improved the company. Only that the company has a 96% adoption rate. So I read that and I was like, okay, I don't know what the takeaway was, other than EY spent 1.4 billion dollars to build an LLM on the unstructured data in their company. 

Speaker 1 13:00 
But, you know, good for EY. Good to see somebody burning some cash out there. Oh, by the way, welcome to the Calculus of IT podcast. 

Mike Crispin 13:10 
Code in the matrix, it's how we play algorithms Crafting the future our way Silicon Visions from dusk till dawn We're the way 

Speaker 1 13:25 
Wizards of Tech, where progress is born 

Mike Crispin 13:29 
With every line we're defining the time Creating the world where there are virtual times Sank time! 

Speaker 1 13:42 
is for winners sad salads profound 

Mike Crispin 13:51 
Ponies and bubblegum 

Speaker 1 13:58 
Tech Accord who's bored the calculus of IT With Nate and Mike drown in the nether 

Mike Crispin 14:10 
Or just escape to Neverland 

Speaker 1 14:44 
Mike and I were just jawing there, gibber-jabbering. Episode 36. Yes. Whoo. You probably got here because you overheard somebody at a really fancy party talking about us, or perhaps you googled, like, "what is the most kick-ass thing on the internet." 

Speaker 1 15:07 
And you came here. Google put us as number one. Either way, welcome to the shit. We're going to rock the fuck out of your world with some mind-blowing IT leadership reality tonight, dropping some AI FUD bombs for you, just to keep things interesting. 

Speaker 1 15:21 
And as always, besides being AIAF, the home of the sad salad, the nexus of the Neverland, we are the only real, reliable source of information on the entire internet. Everything else is shit. 

Speaker 1 15:35 
You can find all our episodes on your favorite podcast platform; search for The Calculus of IT. We're the only podcast called The Calculus of IT. If anybody else calls themselves The Calculus of IT, they're wrong. 

Speaker 1 15:45 
They're fake. We're the original, OG Calculus of IT people. You can also find us at theCOIT.us, which is our website. You can say it out loud and just read it front to back, if it helps you memorize it. 

Speaker 1 16:01 
We made some backend technological changes a few weeks ago, and it's actually working out quite well now that we're on Substack, even though we're going to keep some of our podcast work on Buzzsprout. 

Speaker 1 16:10 
Ding, ding for both. If you want to continue the conversations on our show, you can now come and join us on our Slack board. We've moved off of Discord, even though we still love Discord. Discord, this thingy right here. 

Speaker 1 16:25 
Discord that. But it turns out most of the people that are technological leaders don't know how to use Discord. They're afraid of it. Or they've blocked it out of their own company. 

Mike Crispin 16:38 
I don't want anyone seeing what they're playing on Xbox, you know, that's right out. We don't want that. 

Speaker 1 16:45 
Oh my God, I'm going to tell you a story. No real names, I'm not going to name any names, but there's two people that I'm very close to and good friends with, one reporting to the other in terms of, like, IT hierarchy in a company. 

Speaker 1 16:58 
And, um, one was supposedly out sick. Uh, but they were both on Discord, and the one that was the boss saw that the one who was supposedly too ill to do work from his computer was playing Fortnite, via Discord. 

Mike Crispin 17:17 
So now we know the real reason. That's a good reason. 

Speaker 1 17:22 
Yeah. So that's why nobody was willing to use Discord. So we moved to Slack, and on Slack you can still hide your video game activities, uh, from your boss, and pretend to engage in an IT conversation. 

Speaker 1 17:32 
So come join us on Slack, the invites and the links are in our show notes. And, um, I also want to mention that if you like our show or hate our show, it doesn't really matter. We don't care. We just want you to give us all the stars on the platform that you listen to. 

Speaker 1 17:46 
You click the little thingy, go to the most maximum amount of stars. And then you can even write in the comments, like it sucks, but I'm giving them five stars because they told me to, and that's cool. 

Speaker 1 17:54 
Um, if you buy us a beer, we'll buy you a beer back. If you give us five stars and you meet us at a place, we'll also buy you a beer. And we're going to have a table at Boston's Bio IT World 2025, where we're going to be asking everybody who comes by to tell us something that's new that no one's ever heard before. 

Speaker 1 18:12 
And if you can do that, we'll buy you the drink of your choice from the, from the bar. 

Mike Crispin 18:17 
Does it have to be true, or can it just be anything? It can be anything in the world 

Speaker 1 18:21 
that that's never been said before or a new and novel idea that no one's ever had before. 

Mike Crispin 18:26 
So you're using the English language, put together in a row, like it's... 

Speaker 1 18:31 
completely unique. Yeah, yeah. Completely unique. Something that no one's ever... And now that you just said that, Mike, you just ruined someone's chances of saying that, so don't spoil it. 

Mike Crispin 18:39 
get a free beer. I actually get a free beer out of that. 

Speaker 1 18:42 
You get any drink you want, Mike. You just point to the bar and say I want that and I'll buy it for you. 

Mike Crispin 18:47 
Go for it like tall Jack and Coke right now. 

Speaker 1 18:50 
Okay, well, that's... you have to wait till April. I'll just put it on your list for you, and you can just keep backing them up. 

Mike Crispin 18:57 
So, I remember. 

Speaker 1 19:00 
Yep. Um, you can buy us a beer in our show notes, there's a link for that, because we like beer. I'm having a beer tonight. It's called, well, it's a big German name, so I don't know, but basically there's a word for beer on here. 

Speaker 1 19:12 
So we'll call it beer. And then you can also buy our gear. It's taking off. It's coming off the shelves. Um, show your neighbors that even though, um, you don't necessarily agree with their politics, you're at least AIAF, and that supersedes all other issues. 

Speaker 1 19:31 
Lastly, if you don't want to donate and you don't even want to listen to this podcast, but someone's making you in the car, um, donate to Wikipedia. They need money, and they're fighting fights everywhere to try and keep their content accurate. 

Mike Crispin 19:49 
Can we talk about that a little bit? I'm curious. I haven't read as much. There's just a lot of people just putting stuff up that's bad. 

Speaker 1 19:56 
There's a heavy campaign of disinformation on Wikipedia right now, and, you know, nobody gets paid, they're volunteer members. But Wikipedia also still continues to grow, and if they're spending all their time fighting fires, they're not spending time curating and improving the content. So it doesn't take much. I donate $21 a year, and that's been my annual amount since I don't even know when, but that's all they ask for: 21 bucks. You spend 21 bucks at, like, Dunkin' Donuts. Not you, but the average person. Just 21 bucks, and then you have this resource, or your kids have this resource, or your kids' friends have this resource. And it is a wonderful resource. It's not as cool as generative AI, it's not as cool as the terrible troll shit that happens on Reddit, because it's actually, in most cases, I'd say the majority of cases, realistic information. And it'll be one of those last sort of bastions of realistic information on the internet after all the smoke clears a few years from now. Probably the only place you'll actually be able to get real information once Industry 5.0 takes a firm hold. That's for another day. So anyway, welcome to the Calculus of IT podcast. This is my co-host Mike Crispin, I'm Nate McBride, and we've been doing this for almost a year. We're coming up on our one-year anniversary. 

Mike Crispin 21:19 
I am psyched. We got to do something special in one year. Got to eat frozen Twinkies. Yes, that's right. That's right. That's right. 

Speaker 1 21:32 
We're a week away from Halloween, actually a little bit more than that, but about ten days from now. Are you going to go with Phil Collins again this year? 

Mike Crispin 21:41 
Um I'm gonna go as Michael Crispin Jr. 

Speaker 1 21:49 
Oh, yeah? What does that entail for a costume? 

Mike Crispin 21:53 
Um, a button -up shirt and a tie. 

Speaker 1 21:58 
Are you gonna get a suit that's too small for you? 

Mike Crispin 22:02 
I'd get one that's too big, because Michael Crispin Jr. would wear one of his dad's old suits, so it'd be bigger. 

Speaker 1 22:14 
So like kind of like David Byrne, Talking Heads. 

Mike Crispin 22:17 
Yeah, doing that dance where he moves his... 

Speaker 1 22:19 
Yeah, bring it in the house, you know, circa early 80s kind of thing. 

Mike Crispin 22:25 
I got that dance down. I pulled out a hip muscle though. 

Speaker 1 22:31 
Be careful. We're getting older, not younger, Mike. Every injury after 35 is permanent. 

Mike Crispin 22:35 
Completely over-trained the hips. Completely over-trained them on that dance. I hear you. Hurtin' for certain. 

Speaker 1 22:42 
I'm gonna be a druid again this year. I'm just gonna walk around in my druid costume where no one can see me, I'm just gonna own it all day. Wow. Yeah. Smoke heavy amounts of hashish through my mask. So, I was talking about this conference last night. I was at the Halloran conference last week. Not a conference... yeah, it's kind of a conference. 

Speaker 1 23:09 
I was speaking on a fireside chat thingy about AI, mostly as related to clinical and regulatory, but more so in terms of data governance. Then I had a lot of great sidebar conversations after that, with people not realizing that they are, in a lot of cases, either way behind, or on the verge of being so far behind that it's almost too late, like in terms of unstructured data. 

Speaker 1 23:42 
And one thing that came out, and this has actually been something I've been hearing quite a bit about from a lot of colleagues, is this question, and I'll put it in terms of one platform, but it applies to a bunch: well, if I have stuff in Notion, is that unstructured data or structured data? 

Speaker 1 24:02 
And I always ask the question back: well, where's the source? Is the source only in Notion? Or is the source, like, in a Google Doc or on Box or Egnyte, and then you're referencing it in Notion? Well, as it turns out, more often than not, there's no source document. 

Speaker 1 24:21 
The Notion entry is the source. And so I ask people that are in these situations, like, so what's your organization plan? And they show me, like, a little navigation structure on the left, and I'm saying, that's it? 

Speaker 1 24:36 
I mean, that's really all you have. And, you know, intuitively it seems to work for most folks, but I think it's kind of a dangerous hole to go down, to over-rely on a cloud platform to be your single source of storage for, you know, what sort of falls in that shadow area between unstructured and structured data. 

Speaker 1 24:55 
But yeah, that question is starting to come up a lot as it relates to Smartsheet and Notion. Like, what is this data called? And I'm like, well, technically, structured data falls into a logical database format. 

Speaker 1 25:11 
It's got metadata applied to it. It sort of has field and column and row functionality that sort of supplies the definition, you know: by virtue of being in this row and this column and intersecting at this cell, it equals X in terms of the type of data it is. 

Speaker 1 25:27 
You get into Notion, and unless you're very, very rigid about your Notion structure or your Smartsheet structure, it's just a bunch of stuff sitting in a field. Like, it has to be very, very clear. 
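
To make the distinction being described here concrete: in structured data, the schema itself tells you what each value means, while the same facts typed into a free-text Notion-style field have to be parsed back out. A minimal sketch in Python; the vendor records and the page text are hypothetical examples, not from the episode:

```python
import re

# Structured data: the schema (column names) gives each value meaning.
# Hypothetical example records, purely for illustration.
rows = [
    {"vendor": "Acme Bio", "contract_end": "2025-06-30", "annual_cost": 42000},
    {"vendor": "LabSoft",  "contract_end": "2024-12-31", "annual_cost": 18500},
]

# Querying is unambiguous: a value under "annual_cost" is, by definition, a cost.
total_cost = sum(r["annual_cost"] for r in rows)

# The "unstructured" version: the same facts typed into a free-text page field.
page_text = "Acme Bio renews 6/30/25 at ~$42k/yr; LabSoft is $18.5k, ends end of Dec."

# Recovering the numbers now requires parsing that breaks as soon as the
# wording changes -- there is no schema to lean on.
dollars = re.findall(r"\$[\d.]+k", page_text)

print(total_cost)   # 60500
print(dollars)      # ['$42k', '$18.5k']
```

The point of the sketch: the dict rows answer questions by lookup, while the free-text field only answers them through brittle string matching.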

Mike Crispin 25:41 
Yeah. 

Speaker 1 25:42 
But I worry a lot about these companies or these people that are very, very gung -ho on these platforms, which I love, but that are using it only as a single source of data. 

Mike Crispin 25:51 
Yeah, it's risky. If you don't have an intended use for these tools and you're just building and building and building, I just get concerned about that, because a lot of times it's only in the builder's mind, you know, in terms of how it's being implemented, how it's being used, and then how the data is set up. 

Mike Crispin 26:12 
And that's why, you know, 

Speaker 1 26:14 
I think it's very temporary, right? Like, if I come into a company all gung-ho on Notion, and I just populate and populate and populate it for, say, two years, and then I leave... Yeah. Or there's a change in temperature in the way we work on data in that department. 

Speaker 1 26:29 
What do we do? We're up shit's creek, and, you know, you kind of have to go back to square one. And, you know, we see this play out. I mean, this has played out over the last 20 years: you kind of go all in on one platform, and then it changes so much you have to go to another one, and you've got to get that data out. The only thing I can say, the one answer that's always tried and true, is that your source data should always be in an unstructured data format. That is to say, it should be in a file. Yeah, 

Speaker 1 27:00 
a file that's portable. Then you can do whatever you want with it. Maybe you put the final version or something as an attachment. But man, it'd be terrifying to me if my source data only lived inside of a cloud SaaS platform that was sort of relatively laissez-faire on a lot of fronts in terms of organization, or rather, didn't enforce organization as strongly as something that's more of a hierarchical file storage system. Anyway, that was a lot of discussion that came out of that, which was interesting. And people that are, sort of, you know, small biotechs that don't have a lot of cash are sort of saying, well, you know, we can actually circumvent the storage and security problem if we go all in with this platform. And I'm just trying to tell them, you know, just keep one eye on your source data. Please don't think that a Notion or an Airtable or a Smartsheet is the answer. It's an answer, but not sort of the source answer. So anyway, 

Speaker 1 28:05 
just before we get into tonight's topic, there's one other thing I wanted to mention, which is that the 7th annual State of AI Report came out last week. I shared it on the Slack board. You know, it's extraordinarily long, but it's so well written. I've read the last four. 

Speaker 1 28:23 
I haven't read all seven; I only got into reading this report four years ago, and I have the last four years of the PDFs. But this one in particular, obviously based on the timing and what's happening in the world... 

Speaker 1 28:36 
It's pretty striking in terms of some of the outcomes in this report, especially the forward-looking things that they do. What's cool about this report every year is that they take a look at how their predictions from last year shaped out, and what their predictions are for next year. Sure. And so I'm not gonna read all 200-and-whatever pages of this, 

Speaker 1 29:05 
but there are some key takeaways, and I'll just mention some of the ones I found. You know, we already know that there's been so much dynamic change on the engines, I think, between the five major engines. 

Speaker 1 29:21 
There have been almost 28 major updates since the start of the year at my last count. It could be more than that, but that's an insane number for just five engines. And, you know, just think back to January: there was no Sonnet. 

Speaker 1 29:36 
There was no Haiku. There was no Opus. There was no GPT-4o, there was no GPT-4 Turbo. Gemini wasn't out yet. I mean, all of these still hadn't been done, and just think, we're only in October, right? So, that's the major engine side. Then there's the open-source engine side, like the Llama side. But then it brings up the question, and this deck covers it: what does open-source actually mean anymore in terms of AI? Like, OpenAI themselves were supposed to be a non-profit and sort of an open-source platform. They clearly are not. But what does open-source even mean anymore, 

Speaker 1 30:25 
especially with, you know, how Llama is sort of closing the ranks? Then there's the use of synthetic data for training. So there's this idea that you can just generate very large fake data sets on your own to train models. But by virtue of doing that, they've now proven, you're increasing the risk of heavy bias when you finally launch the model. But everyone wants to train their models so fast to get the next release out, they're willing to say screw it, just make a giant mountain of fake data and train on that. Yes. 

Mike Crispin 31:03 
These things, you know, in terms of especially some of the language models that are going out to the internet... I mean, how can you prevent, you know, the bias and ethical situations when there's so much stuff out there that is bad or good or indifferent? I just don't see how that's a problem they can solve, unless you're training on a finite set of data all the time. 

Mike Crispin 31:32 
Everyone is different, different parts of the world are different. Everybody has a lot of different viewpoints: good or bad, different. Yeah, for sure. It's gonna be hard to... I guess I'm trying to say I don't think that problem is ever going to be solved, because I don't know that anyone can define it. 

Speaker 1 31:49 
It's true, it's true. And that's what makes it clear. There's a discussion on RAG, Retrieval-Augmented Generation, which is a big discussion point in this deck, and which is also, on its own parallel thread, gaining such speed. It's gonna allow the LLMs from both the major and the minor players to utilize external knowledge better, ultimately making them more powerful. 
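
For listeners who want a concrete picture, RAG in its simplest form is just: find the documents most relevant to the question, then prepend them to the prompt so the model answers from external knowledge. A toy sketch; the corpus, the word-overlap scoring, and the prompt template here are illustrative stand-ins (real systems use vector embeddings and a vector store, not word counting):

```python
# Toy retrieval-augmented generation: retrieve relevant text, then build an
# augmented prompt that would be sent to an LLM. Hypothetical mini-corpus.
corpus = {
    "vacation-policy": "Employees accrue 1.5 vacation days per month.",
    "vpn-setup": "Install the VPN client, then sign in with your SSO account.",
    "expense-rules": "Expenses over $50 require a receipt and manager approval.",
}

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    ranked = sorted(corpus.values(), key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved context, then the question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("How many vacation days do employees accrue?"))
```

Swapping the word-overlap `score` for embedding similarity, and the dict for a vector database, gives you the production shape of the same pipeline.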

Speaker 1 32:14 
But again, there's also this giant effect on the hardware market. NVIDIA basically has the entire monopoly. There's no way to stop them; there's no anti-monopoly procedure that can stop this process. 

Speaker 1 32:25 
And now we have the CEO of NVIDIA saying that every nation state should create its own private LLM to protect the sovereignty of its nation, which is an insane statement to make on so many levels considering the political climate. 

Speaker 1 32:42 
But now you also have, on the flip side, SLMs becoming the next wave. SLMs have started to become popular ever since the summer, but now we see quantization being used, and there's this new distillation effect that you can read about.

Speaker 1 32:59 
It's pretty frickin' awesome. What you're seeing is more releases of smaller models. So not like Haiku, but Haiku-lites, these sort of super fast, super slim SLM models that are focused on very specific areas of information.
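Quantization, one of the two shrinking tricks mentioned above, is simple to show in miniature. Real quantization works per-channel on tensors with calibration data; this sketch just maps a flat list of float weights to int8 with a single scale and checks the round-trip error. The weight values are arbitrary illustration.

```python
def quantize_int8(weights):
    # One scale for the whole list: map the largest magnitude to 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.003, 0.91, -0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # integers in the int8 range
print(max_err)  # small reconstruction error
```

Storing an int8 instead of a float32 per weight is a 4x memory cut for a small, usually tolerable accuracy cost, which is why it pairs so naturally with distillation for these super-slim SLMs.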

Speaker 1 33:19 
China has basically caught up to the US in terms of all of its engines. The Baidu engine, of course, being so ridiculously powerful. 

Mike Crispin 33:30 
And we have to do all these things because they're just going to blow us away.

Speaker 1 33:35 
Right. And then, of course, we have what's happening with video and audio. I mean, you and I were talking about Synthesia today. And while Synthesia itself is very clever and nifty, it's what you could use it for that's actually the other part. 

Speaker 1 33:53 
So then there were some alarming things I made note of. One is model collapse. Again, we go back to that synthetic data idea: they have found that you can actually overpopulate a model with too much data to the point where it becomes unusable.
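The collapse effect can be caricatured in a few lines: each "generation" is trained only on synthetic samples drawn from the previous generation's fitted distribution, and the distribution degenerates. This is a deliberately simplified toy, not the actual mechanism in LLM training; in particular the 0.95 shrink factor is an artificial stand-in for the estimation bias that drives the real effect.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # the "real" data distribution we start from
for gen in range(10):
    # Each generation sees only synthetic data from the previous model.
    samples = [random.gauss(mu, sigma) for _ in range(200)]
    mu = statistics.fmean(samples)
    # Artificial underestimation bias, standing in for the real-world
    # tendency of fitted models to lose the tails of the data.
    sigma = statistics.stdev(samples) * 0.95

print(round(sigma, 3))  # well below the original 1.0
```

The variance shrinks generation over generation: the model forgets the tails of the original data, which is the flavor of degradation the deck's model-collapse slides describe.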

Speaker 1 34:10 
There's no human in the loop or robust evaluation for RAG systems. So when you start scraping at such a high volume, you have to rely on AI to sort it for you, and you remove that human effect, and you're basically just putting garbage into your data.

Speaker 1 34:30 
Then there was the potential for misuse, and I think there were two slides on this: the misuse of AI in genome editing. And you know, we've already made such advances in omics modeling.

Speaker 1 34:45 
I remember when I worked at Ohana, we were working on sperm microbiology, and we had figured out lots of different ways to manipulate sperm to sort of do what we wanted. And that was, you know, 2020 information. 

Speaker 1 34:57 
Now, of course, applying that same idea for genome editing at scale. I mean, you can do all this in silico testing now on genome editing, and it's crazy what you can accomplish in just a few hours, which took us, you know, years. 

Speaker 1 35:14 
There's, I think, one slide on the increasing use of AI for automation and the possibility, which is something people have always railed against, of the loss of jobs. And of course, you always hear that follow-up statement: no, no, no, we're going to make jobs better.

Speaker 1 35:28 
We're going to make it so that people can focus on more strategic tasks. All those lines are now bullshit. I'm calling all those lines out, because it's actually true: you can replace people with AI.

Speaker 1 35:42 
You can do it very, very well, especially at the sort of manual worker level. And I think eventually the wool will be pulled away from this myth that it won't replace people, or that it won't replace people who are talented.

Speaker 1 35:58 
It will replace people, tons of people. 

Mike Crispin 36:02 
Well, it's definitely that AI anxiety, right? That's what we hear a lot about now.

Speaker 1 36:07 
Yeah, we just need to kind of admit that it's going to happen and let people upskill themselves. Because people think, oh, I'm gonna be okay, because people are saying, oh, I'll just get smarter as a result.

Speaker 1 36:18 
It's a myth now. I mean, you're gonna get fucking replaced if you think that you can just sit around and not become really good at this. There was discussion around the use of AI for manipulating elections and other sensitive areas.

Speaker 1 36:32 
Of course, we know it's being done with AI. I mean, Musk just sent out this campaign between Pennsylvania and Michigan, a huge AI-generated campaign, basically about Harris's position on Muslims.

Speaker 1 36:51 
In Pennsylvania it was anti, in Michigan it was pro. Basically the same message; they changed a few words, all AI-generated. It's some crazy shit. Then there's the escalating arms race in research, with academia basically becoming unable to keep up.

Speaker 1 37:10 
So there are basically two parallel arms of research: there's everything that's happening at the funded places, and then there's academia, which has always been sort of the balancing barometer. And now academia simply can't keep up.

Speaker 1 37:22 
They're not getting the funds, they don't have the funds, nor do they have the staff, because the staff are all being recruited into corporations, much like what's happening with life sciences. So it's going to get pretty gnarly.

Speaker 1 37:33 
And then the last piece, and I think there were two slides on this point, is that the development of AI could potentially automate scientific research itself, which raises existential questions.

Speaker 1 37:49 
Caesar is talking about this too, about the role of human scientists in the future and potential unintended consequences. There's one particular point in this whole report I want to mention, about power.

Speaker 1 38:00 
So there's a slide about, I mean, we all know that Microsoft bought Three Mile Island this summer, in order to get ready to support the power needed for Copilot, among other things.

Speaker 1 38:14 
So there's some research that's come out of Andrew Ng's group about this. Amazon, Google, and Microsoft have all recently announced substantial investments in nuclear power projects. Nuclear power provides about 18% of the electricity in the United States.

Speaker 1 38:39 
So Amazon is taking part in a number of nuclear projects; it led a $500 million investment in X-energy. Most people don't even know what this is. I've asked several people that are very, very smart whether they know about X-energy, and not a single one has heard of it.

Speaker 1 38:53 
X-energy's reactors use advanced fuel, and so on and so forth. And Amazon announced a partnership with a utility consortium, Energy Northwest, to deploy a 320 megawatt X-energy reactor in Washington, which may expand to 960 megawatts.

Speaker 1 39:14 
You need to do some Googling on your own to figure out just how much 960 megawatts is and how many cities it can power. It's more than one. Google partnered with Kairos Power to develop small modular nuclear reactors.
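To save you the Googling, the 960 megawatt figure can be put in perspective with one line of arithmetic. The ~1.2 kW average US household draw (roughly 10,700 kWh per year) is an outside ballpark assumption, not a figure from the episode or the deck.

```python
# Back-of-the-envelope: how many homes could a 960 MW reactor supply,
# assuming an average US household draw of ~1.2 kW (outside estimate)?
reactor_mw = 960
avg_home_kw = 1.2
homes = reactor_mw * 1000 / avg_home_kw  # MW -> kW, divided by per-home draw
print(f"{homes:,.0f} homes")
```

That works out to roughly 800,000 homes' worth of average demand, comfortably more than one city.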

Speaker 1 39:29 
Kairos expects the new plants to begin operation in 2030, with more planned by 2035, each providing up to 500 megawatts of electricity. They broke ground in Tennessee. And in September, Microsoft signed a 20-year purchase agreement to restart Three Mile Island, as we know.

Speaker 1 39:47 
So the reason this is all happening, of course, is that all these data centers training models, much like what was happening with blockchain and crypto for so many years, consume almost infinite amounts of electricity.

Speaker 1 40:02 
And all three companies, Microsoft, Nvidia, and OpenAI, have continuously urged the White House to deliver an energy New Deal that would allocate hundreds of billions of dollars to subsidize new power plants for these three companies.

Speaker 1 40:18 
So these companies are asking the government to give them hundreds of billions of dollars in subsidies to make more power to feed AI, which we're not necessarily 100% sure we need. You're talking about Oakwell?

Speaker 1 40:30 
Yeah, that's part of it. That's all in this crew, isn't it? Yes. Yeah. So economists estimate, and this is in the deck, that data centers processing AI, among other workloads, will consume more than 1,000 terawatt-hours of electricity by 2026, which is more than double the amount they consumed in 2022.

Speaker 1 40:54 
So in four years, we will have doubled the amount of electricity consumed. Andrew Ng's spin on this, because he's always very, very pro-AI, is that nuclear power will give them bountiful carbon-free energy for decades to come.
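A quick sanity check on the deck's claim. The ~460 TWh figure for data centers in 2022 and the ~29,000 TWh figure for total world generation are outside ballparks (the former is roughly the IEA's estimate), not numbers from the episode, so treat the result as order-of-magnitude only.

```python
# Scale check: 1,000 TWh of data-center consumption by 2026 vs ~460 TWh
# in 2022, against ~29,000 TWh/year of world generation (outside estimates).
dc_2026_twh = 1000
dc_2022_twh = 460
world_twh = 29000

growth = dc_2026_twh / dc_2022_twh        # growth factor, 2022 -> 2026
share = dc_2026_twh / world_twh * 100     # % of world generation in 2026
print(round(growth, 2))  # more than double
print(round(share, 1))   # a few percent of world generation
```

So the "more than double" claim checks out under these assumptions, and it would put data centers at a few percent of everything the planet generates.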

Speaker 1 41:07 
Never mind the fact that you'll have a nuclear power plant down the street from your school. And then, of course, it's not captured in the deck, but it was talked about with the possibility of what would happen if Microsoft and OpenAI ever split. 

Speaker 1 41:22 
We're now seeing more data on that: Microsoft has built a 10,000-GPU system on Azure for OpenAI, but OpenAI is saying, our valuation is now $157 billion; we don't really need Microsoft anymore.

Speaker 1 41:46 
And with Nvidia sort of stirring the pot, I think the next year we'll see, and this is some of the speculation that others have made, is that Microsoft and OpenAI will split. Microsoft will have to rebuild Copilot to a degree, and OpenAI will join up with Nvidia. 

Mike Crispin 42:08 
They're already working and have been for a while on building their own Microsoft. 

Speaker 1 42:14 
Yeah, I mean, if Altman and OpenAI joined forces with Nvidia, then the topic we have tonight will become very, very relevant.

Mike Crispin 42:27 
Yeah, well, so are Meta and X and Altman. They're all working with Nvidia, so they're already there. And I don't know if it's a big conflict for Microsoft. I actually think Microsoft wants to go it on their own.

Mike Crispin 42:44 
They got the splash and the attention that they needed. And they want to own more of the pie here. They don't want OpenAI to have a piece. So they got what they wanted.

Speaker 1 42:59 
We're going to move on. For companies, and I won't mention any names, but, for instance, a certain company that created a novel COVID vaccine and made press releases about its partnerships with OpenAI and Microsoft, they'll maybe have to do some backtracking after this whole thing comes to fruition. Because in truth, if you're a life sciences company getting in bed with OpenAI today,

Speaker 1 43:22 
you'll actually be getting in bed with Nvidia tomorrow, it sounds like. So, that makes sense. Thanks.

Mike Crispin 43:27 
You're on Nvidia no matter who you use, I guess is what I'm trying to say. If you're not using Nvidia right now, what are you using?

Speaker 1 43:37 
Dude, I'm using my 2019 Mac right now. Well, you are. 

Mike Crispin 43:40 
I'm just, you know, Google and others, they're not just doing tensor chips; they're probably using some Nvidia too. I think Nvidia is a great big single-point-of-failure risk if something happens.

Speaker 1 43:54 
As mentioned multiple times in that deck. 

Mike Crispin 43:57 
And that's maybe one of the biggest things. When you talked about SLMs, what about Apple's paper that came out? It's pressing heavily, in some ways, I guess, saying that LLMs are overblown. They're not true intelligence.

Mike Crispin 44:14 
They're just great phrase matchers, and the probability goes down once you just change some of the words around in the question. So they're pressing it, being a little more cynical about it and putting some good information out there that shows that maybe it isn't really magic we're talking about.

Mike Crispin 44:39 
Wait, it's not magic? Yeah, right. In fairness to Apple, I guess, or unfairness, it'd be great for them if AI didn't move so fast, because they're pretty far behind. So there's a good reason for them to get that paper out there and calm everyone down.

Speaker 1 45:00 
Let's just fast-forward to November 2025, and we'll have forgotten this all happened, because we'll have been so brainwashed by whatever things we've been fed for 12 months that we'll have forgotten all about this anyway.

Mike Crispin 45:16 
I just need my assistant to tell me what to do.

Speaker 1 45:19 
By the way, I just noticed that my beer says consumption of alcoholic beverages impairs your ability to operate heavy machinery. Wow, that's a good observation. I think this is a pretty lightweight microphone.

Mike Crispin 45:34 
Be careful. You got the sound shield there, the windshield on it. So you're good. 

Speaker 1 45:40 
I didn't get that. Okay. Are you ready to get into this? Because this is a big one, and maybe we won't get it all tonight and we'll have to just continue next week.

Speaker 1 45:51 
But actually, before we get into this, I have one more point. Last week, during our week off, which I think we both needed to catch up on some things, I published a piece I've been working on for a little over a month about the creator rights dilemma.

Speaker 1 46:09 
And you and I talked about this a long time ago. I've actually talked about it a bunch of times: why does everybody get creator rights in a company? I got some great feedback on that. I think most people were in agreement, although some people said it just can't be done, which I agree with.

Speaker 1 46:25 
But as I made clear in the article, and I talked to people about this too: why is it that you give people viewer rights in a SaaS platform, but you don't do that when they join a company? What's the distinction?

Speaker 1 46:38 
Oh yeah, it's a budgetary issue. Oh well, then go ahead and tell me how free it is when somebody's able to create unlimited amounts of data in your company. Anyway, I didn't realize it at the time, but that piece dovetails into tonight's discussion, because I think it's one of those last shreds of autonomy that we feel we have.

Speaker 1 47:06 
And so tonight we're gonna revisit a topic we covered a few times earlier this year, in episodes 10, 12, and 13 notably, which is the evolving loss of autonomy. I didn't cover this in my article, and in retrospect I probably should have written about this part: people will try to hold on to privilege as long as they can, right?

Speaker 1 47:33 
If you could do something for so long and then you couldn't do that thing anymore, it's disruptive until it's no longer disruptive, right? Like, you drive to work a certain way every single day, and then one day it turns out your favorite route is closed for construction for three months.

Speaker 1 47:53 
So what do you do, right? You learn a new way, and it sucks, and you're like, ah, fuck, this sucks. But then you realize that along that new way, maybe you see a lake, or you get to see the sunrise or something, I don't know, whatever, right?

Speaker 1 48:08 
And so you learn a new way to get to work. Maybe it takes four extra minutes over your old way, but you keep going the new way, and then you forget about the old way. When the construction ends, you don't go back to the old way anymore, because you like the new way.

Speaker 1 48:22 
Turns out it's actually more pleasant, right? But you didn't know that at the time. At the time you were like, ah, this fucking sucks, this is how I get to work. I feel like that's a big comparator for what's happening, which is that we have slowly been angry, and then not angry, and then angry, and then not angry, over and over and over again.

Speaker 1 48:47 
Oh, what do you mean I gotta pay for this? What do you mean I gotta get a phone to do this, and I gotta upgrade this thing? And then we just do it. Right, it all wears off eventually anyway, yeah.

Speaker 1 49:02 
Yeah, we're just, ah, well, fuck it, I guess I gotta do this thing, right? I can't not do it. So when we started talking about this early this year, and all the other times we talked about it, we looked at it from the approach of the individual loss of autonomy.

Speaker 1 49:20 
And there are so many ways; it's practically countless. Try to think of something that you do right now, other than personal hygiene and going to the bathroom, that isn't in some way influenced, up to the top of the chain, by the loss of personal autonomy.

Speaker 1 49:42 
It's very hard to do. Somehow or other, so many of our decisions are being directly and indirectly pushed to us. Sure. I was listening to Malcolm Gladwell's podcast, Revisionist History, this morning on the way to work.

Speaker 1 50:02 
I'm still behind; I'm catching up on season 10. And he had M. Night Shyamalan on his show. And Night was talking about the fact that he's got a phone next to him on the table.

Speaker 1 50:18 
He's like, even with the phone off, over here to the side, I'm still being pulled towards it, right? It's a remarkable thing. I had this holy shit moment as I'm driving around: I'm being pulled towards it.

Speaker 1 50:33 
And we talked about the loss of autonomy individually. We're being pulled towards things. We're not just going ahead and doing a thing anymore. It's because we already had a sort of predictive thought that it was going to happen, or we were told it was going to happen, or we have to do it.

Speaker 1 50:45 
Because it has to happen. Yeah. So what we didn't do during those discussions, though, was tie the loss of individual autonomy back to IT leadership, to IT strategy, to how it impacts decision-making in a company.

Speaker 1 51:09 
And I think, Mike, we're going to have to do two episodes, because now that I'm thinking about it, this is such a big topic. So maybe we split this into two, and we'll focus on one part tonight.

Speaker 1 51:19 
I'm going to just go ahead and call the audible on that point. But I have to ask myself, in sort of a meta way, not that meta, but a general meta way: even thinking about how the loss of autonomy impacts what we do today in IT.

Speaker 1 51:39 
Is that already a predestined thought to have, because I've lost the autonomy to think about anything else? Like, did I come to this decision to talk about the loss of autonomy freely, of my own will, or have I been so consumed by the forces pressing in around me that I didn't have a choice but to talk about this?

Speaker 1 52:00 
Anyway, that's more rhetorical and philosophical, I suppose. But it's an interesting way to think about things. Okay, I'm going to go run this morning: is that a free decision I'm making?

Speaker 1 52:14 
Or am I running because I've been told I have to or else I'm going to die? Again, a little bit of a fringe thought, but I think...

Mike Crispin 52:25 
I think you find things in life that naturally make you feel better. And I'm not sure, this is open, obviously, how much the way you feel is influenced. Are you being trained to feel a certain way, or do you just feel a certain way?

Mike Crispin 52:50 
The foods that you like the taste of: has someone influenced your brain to like the taste of them? Probably not.

Speaker 1 53:00 
Well, so I was doing some thinking about this, and I was doing some Googling. I was trying to understand the difference between conscious choice and autonomy. There are obviously distinctions between the two of them, but there's a lot of blurring the lines, too. 

Speaker 1 53:16 
So, you know, like, I'm going to eat this versus that is more of a conscious choice than perhaps the loss of autonomy. But then think about what...

Mike Crispin 53:22 
Because someone, like you said of running, someone said, hey, you can't eat that because it's bad for you. And there have been all these studies that say you can't eat it, and you're influenced, so you don't eat it, right?

Mike Crispin 53:32 
Loss of autonomy.

Speaker 1 53:35 
Absolutely. Or, I have to record what I ate in this app, so that's going to influence the thing I'm going to select to eat, and so on and so forth, right? So there's this amount of pressure and influence on the decisions we make: whether we can make an A-or-B decision, or whether we simply have to make the A decision, like there is no B. That's how we get to this loss of autonomy.

Speaker 1 53:59 
And this problem is huge. We're basically in the middle of this huge period of cognitive dissonance where we don't even see the small problems. We only see the big problems, and there are so many small problems behind the big problems that the human mind can't even cope with them. So we're saying, oh, we have to use generative AI, not realizing, for instance,

Speaker 1 54:25 
It's going to require a company to buy Three Mile Island and resurrect what was potentially a nuclear disaster to make this thing work. That's not even within our scope of reasoning.

Mike Crispin 54:37 
I'm less concerned about nuclear than other things that could certainly happen. Look, yeah, there have been some horrible things that have happened with nuclear, but at the same time there are plenty of other things that have happened as well that probably don't get as much light.

Speaker 1 54:55 
I mean, burning coal is not great, I mean, obviously. 

Mike Crispin 54:59 
But the other thing on autonomy is, you know, we're talking more about this AI-generated wave of new technologies and the way it's moving, some of the definitely hottest trends and scary stuff that's happening.

Mike Crispin 55:13 
That's the AI anxiety that's out there, which in some respects is kind of what we're talking about. If we're talking about the concern about loss of autonomy, well, what's the loss of autonomy at companies?

Mike Crispin 55:25 
Often, I think, it's that you don't have enough money, you don't have enough time, and you have a boss. So when you're in IT and you're working on certain things, your ability to freely implement and innovate or make certain decisions is dampened by business priorities, and by the fact that you don't have any money or it costs too much. On the flip side, if you're using some of this gen AI to save you a few hours a day, maybe you can put that towards innovating somewhere else.

Mike Crispin 55:58 
I mean if you're not using it now, you're probably wasting time. Sure. Sure. Well 

Speaker 1 56:03 
To be clear, and again, this is why it's so big: what you just said, like, I may not be able to do something in the company because there's no budget. Well, that may not be my loss of autonomy, but certainly someone somewhere further up the chain had a similar loss of autonomy.

Speaker 1 56:21 
And then I'm the by-proxy result of that. I mean, you can trace the loss of autonomy in most decision-making back to a single point in time. Like, we had to do this because of X. We no longer have an effective choice at this moment, or we do have a choice, and the choice is simply that we shut down, like, the worst choice, right?

Speaker 1 56:43 
So it's never a balance of, like, I have two really good choices. It's usually, I have a good choice, or I have the choice not to do that thing, which is the bad choice. And so I thought about this; we have been thinking about this.

Speaker 1 57:00 
I mean, I went back to our show notes from episodes 10, 12, and 13, and I sort of broke it down into six buckets. Because we did talk about, number one, the individual autonomy crisis.

Speaker 1 57:13 
It's not really a crisis anymore, since it's already happened. It's not happening, it's not about to happen; it's done. But I want to bridge that into the effect on corporate IT. Because when we were talking about this ten months ago, I suppose we could have made the leap then, but it would have been slightly a stretch.

Speaker 1 57:39 
But for sure, now we can make the leap. It's not even a leap; we just take half a step forward and we're into the mirror effect of the loss of autonomy within IT. And then I want to talk about the impact on IT leadership decision-making.

Speaker 1 57:57 
And then next week we can tackle the other three buckets I had, which are the sovereignty paradox, which I think you'll like very much, and strategic responses to the loss of autonomy.

Speaker 1 58:13 
And remember we talked about war gaming and scenario planning? This comes back full swing now. When you get back to strategic responses to the loss of autonomy, it's full fucking war gaming.

Speaker 1 58:30 
Which is one of my other favorite obsessions, so we come back to that. And then lastly, future considerations: can you future-proof yourself, not to stop the loss of autonomy, but to slow it down?

Speaker 1 58:45 
It's entropic, you know? You're shit out of luck if you're doing anything with technology in this world. Unless you live in a cave, you're losing autonomy. So let's start with the first one, okay?

Speaker 1 59:01 
And again, we'll just tackle the first three: the individual autonomy crisis, the mirror effect in corporate IT, and the impact on IT leadership decision-making. And I want to go back to episodes 10, 12, and 13.

Speaker 1 59:17 
And a couple of key points, right? As I go over these key points that I wrote down, I want you to start drawing conclusions in your mind about how they relate to the second topic, the mirror effect.

Speaker 1 59:31 
So here we go. First of all, there was a time when you could actually buy a license. It was a boxed license for a thing. You went into CompUSA and you bought a boxed license of a thing, right?

Speaker 1 59:46 
It had a serial number license tag on it. So a long time ago, we shifted from something we owned to something we rented. Consider how many of the licenses that you have, say, in a company, you actually own, like you could walk away with them, probably zero, versus how much you rent.

Speaker 1 01:00:06 
You're buying space and you're paying for that space, okay? That's number one. Number two: personal agency, which is diminishing in favor of what's been dictated to you by a platform.

Speaker 1 01:00:19 
So obviously there are so many examples of this in personal life, but I want you to think about how that now affects what you do in a company. Like the browser you use: what does it show you? What's happening?

Speaker 1 01:00:32 
Or, because of your role, what curated information is shown to you on an intranet or by virtue of a shared site? Then the next one is the illusion of choice, which is one of my favorite topics again. The illusion of choice: oh no, no, you have tons of choices, you have this one, and then you have the opposite of that one. You think you have all these choices when in fact all of them either lead to costing more money or have a negative resultant effect. And then lastly, technical complexity, which makes decisions difficult, and therefore you opt for the easiest answer. Like, I look at all these menus and buttons, and there's just a basic package; the basic package is very simple, there are three bullets, but the next package up is so advanced I couldn't possibly know what all that stuff does. So I'm just being sort of hand-fed into a corral. That's very good marketing, that's human manipulation, but it's also the loss of autonomy: your decision is being made for you by somebody else. So I was scratching down some examples: the death of local music libraries in favor of streaming services, obviously. I'm a YouTube Red guy. I think you're YouTube Red, or Spotify?

Mike Crispin 01:02:00 
I use YouTube, but actually Spotify and SoundCloud. 

Speaker 1 01:02:05 
So DRM. Like, I bought The Hunt for Red October on Amazon like three years ago. It turns out I only had it for two years. I went to go watch it again, and I'd have to buy it again, so I don't actually own it. And I have the DVD somewhere in my house.

Speaker 1 01:02:18 
I'm sure, but guess what? I bought it again, even though technically I already bought it. It'll be the third time in my life I've bought it: once on DVD and twice digitally. Smartphones, like, your iPhone says to you, hey, you have an update.

Speaker 1 01:02:37 
And you're like, I'm just going to do it, because I'm tired of being notified of an update. Do you know anybody in the world that's ever been like, nope, I'm good, and just continued like that for three years?

Mike Crispin 01:02:51 
I mean, ignore their. 

Speaker 1 01:02:52 
Ignore their update, like, just keep saying no? I think you actually can't do that anymore, right? I think you have to say yes at some point in time; it doesn't give you the option to keep delaying.

Speaker 1 01:03:06 
We talked about the TVs at my house, and how, in order to update HBO Max, I had to actually create an account for LG.

Mike Crispin 01:03:18 
Yeah. 

Speaker 1 01:03:19 
So it tells me on the screen, you have to update this app in order to watch HBO Max. But in order to update the app, you have to create an account, and to create the account you have to put in a username and a password and an email and all this shit, right?

Speaker 1 01:03:34 
Default opt-in to data collection: nobody reads terms of service, so you don't actually know what you're agreeing to. The cost of privacy has become such a luxury item that it's nearly impossible for most people to afford.

Speaker 1 01:03:46 
And then lastly, your digital identity. What is the scope of one person's digital identity? Can you actually quantify the scope? Can you say it starts here and it ends here? No fucking way.

Mike Crispin 01:04:05 
No, it needs to be reinvented.

Speaker 1 01:04:08 
Yeah. And you can tell me if I missed some of these. Like Adobe moving from Creative Suite, which you could actually buy and have for the rest of your life, to Creative Cloud. You know, I'm going to give TechSmith a thumbs-up on this one, because with TechSmith you can buy a license and just keep using it as long as you have that Mac or PC, forever.

Speaker 1 01:04:26 
They don't actually force you to upgrade. Whereas Adobe said, no, you've got to go to Creative Cloud, and now you're subscribing every single year. Apple's iPhone repair restrictions and the right-to-repair movement.

Speaker 1 01:04:41 
Google Photos ending unlimited storage, you knew that was coming. Smart TV manufacturers inserting ads. Every single time I turn on my LG TV, I have to look at LG movies, and it takes me to this thing called Smart Hub, which actually makes me look at their fucking stupid ads.

Mike Crispin 01:04:58 
So are you saying, based on this discussion, that humans should have a choice to not have to do any of those things?

Speaker 1 01:05:07 
Right, but what I'm saying is, you do have a choice. The choice is you go buy a really crappy TV, or like an old-school CRT TV. But if you want to keep up with the Joneses, you have to give up autonomy, and not even just privacy, autonomy.

Mike Crispin 01:05:26 
Yeah, yeah. And I think some of it is an illusion. This is why Apple is so successful: for that component of it, the nagware and privacy things that you'd otherwise have to give away for a product that doesn't cost much money or is free,

Mike Crispin 01:05:46 
you now have to pay. Where you lose autonomy in that ecosystem is that you need to do everything Apple's way. So in a way, you're losing some autonomy in these models, in terms of what you're allowed to do.

Mike Crispin 01:06:01 
But on the flip side, I mean, you could choose not to do any of it. Like you said, go buy the cheap TV. Go get a flip phone. There actually is, I think, a Gen Z movement bringing a lot of younger people back to flip phones.

Speaker 1 01:06:19 
Yeah, so whether we see BlackBerrys come back is always an interesting question. Or the Nokias, you know. I mean, they're still big in Europe.

Mike Crispin 01:06:28 
And that's where I don't know what the solutions are, but I think whatever the next big solution is, especially when it comes to identity, it's about simplifying and minimizing identity, as opposed to trying to figure out all the things it plugs into, or trying to have some automated, elegant way to pull it all together.

Mike Crispin 01:06:55 
I really, I'm a big believer in, it's kind of old buzz now, but a big believer in the blockchain. I still think we're yet to see it. My belief, and I've talked to many people about this, is that blockchain's biggest contribution to our technology landscape is gonna be identity.

Mike Crispin 01:07:19 
It's not gonna be crypto, or the manufacturing supply chains and smart contracts and all these things. It's gonna be an identity platform that's very simple in nature. Someone's gonna do it someday. Yeah.

Mike Crispin 01:07:35 
If it was a nonprofit and completely open model, which would be really great, although they've already done that, or are trying to do it, with Bitcoin, I think that is going to be one of the things that helps drive some semblance of autonomy in the future: creating a completely independent identity model.

Mike Crispin 01:07:56 
It's not something that's gonna sell. It's something that people are going to need. 

Speaker 1 01:08:01 
But it's only gonna work, Mike, if people then have control of their identity. Whoever does this has to give them that control back, so they retain some semblance of that autonomy.

Speaker 1 01:08:12 
If you do it for the purpose of doing good, which, I agree with you, is a great thing to do.

Mike Crispin 01:08:18 
I think that's the difficult thing, right? AI, the internet, cryptocurrencies, all these things: you can use them for just as many bad things as good things. And I just think that we have to accept that there's always going to be bad stuff and there's going to be great stuff.

Mike Crispin 01:08:37 
We just accept that that's the way it is. I don't think anyone's ever going to create something that some nefarious, brilliant mind isn't going to take and use for the wrong things. So with every good comes some bad.

Mike Crispin 01:08:51 
And you can't fight the power on that one. That's just balance, and it's going to happen. That's where I think, with Bitcoin, we haven't talked a lot about crypto or Bitcoin, but even though there are arguments that it's not worth anything and it's a Ponzi scheme and all these things underneath, the only way to truly buy something right now and have some anonymous capability around your identity is to do it through Bitcoin.

Mike Crispin 01:09:26 
And you can hop through VPNs, you can use Tor, you can do things. But if you really want to buy something, you'll wait the 10 minutes for the frickin' transaction to clear, because no one's going to trace you. Now, take that and bring it over to identity, to how we're identified: we have some key that identifies us, that only we know, strapped to a username or a license or an ID card or whatever else.

Mike Crispin 01:09:53 
And that becomes the basis. An AI can't beat that or break that, because only you know that line of 256 characters, or whatever it's going to be. So I think there is a real point here: when we talk about the autonomy crisis, right now you're ultimately controlled because you're very easy to find. And I'm talking about us generally, as a population: very easy to find.
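Mike's "only you know that line of 256 characters" idea is roughly how key-based identity works. Here's a minimal, purely illustrative sketch using only Python's standard library; note that this toy relies on an HMAC shared secret, whereas a real identity system would use asymmetric signatures (e.g. Ed25519) so a verifier never holds your key:

```python
import hashlib
import hmac
import secrets

# Toy sketch of key-based identity. Assumption: a real system would use
# asymmetric signatures so the verifier never sees the secret itself.

# 1. You generate a 256-bit secret that only you know.
secret = secrets.token_bytes(32)  # 32 bytes = 256 bits

# 2. Your public "identity" is a fingerprint of that secret.
identity = hashlib.sha256(secret).hexdigest()

# 3. A verifier issues a random challenge; you answer with an HMAC keyed
#    by your secret. Without the secret, the answer can't be forged.
def respond(key: bytes, challenge: bytes) -> str:
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

challenge = secrets.token_bytes(16)
answer = respond(secret, challenge)

# 4. Verification compares the response in constant time. (In this toy the
#    verifier needs the same secret -- the HMAC limitation noted above.)
assert hmac.compare_digest(answer, respond(secret, challenge))
print(len(identity))  # prints 64: a 256-bit fingerprint in hex
```

The point of the sketch is just the asymmetry Mike describes: the fingerprint can be public everywhere, while the thing that proves it's you never leaves your pocket.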

Mike Crispin 01:10:23 
Like, if I want to create a fake username and a fake password, a fake name, all this stuff, and I'm on a platform, I can still be found on the biggest platforms in the world right now.

Mike Crispin 01:10:38 
Pretty easily. Sure, I go on and say, I'm going to be really smart: I'm going to get a Proton Mail address, I'm going to have everything encrypted, and I'm going to use no identifiers in the subject line.

Mike Crispin 01:10:52 
So no one knows what's in my emails. You're super smart, but you paid for the account with a credit card. Identified. You've done all this work to hide your identity, or to have a choice in who knows who you are and what you're doing.

Mike Crispin 01:11:10 
And just because you used a credit card, they're going to find you. 

Speaker 1 01:11:14 
So that brings me to my earlier point, Mike, where I said technical complexity makes informed decisions difficult. Like, there's going to be an A, B, C, D, E choice. And with what you just said, you hit the nail right on the head.

Mike Crispin 01:11:31 
It could be any of these things. What's so disappointing about what happened to OpenAI is we need another Linux. We need another truly open source effort. Maybe Java is another example, but, you know, Oracle kind of took that over.

Mike Crispin 01:11:47 
But take the whole Linux movement, and we've talked about it before, ad nauseam, it's almost funny: without Linux, we wouldn't have AI right now, because no one would have been able to afford to build and tinker, and build the cloud, and make money off the cloud, or off VMware, or off,

Mike Crispin 01:12:10 
even Mac OS X, which is based on Mach and BSD and mostly proprietary. But these components of open source software have driven these huge innovations, and we need the same thing for identity. If you've got to pay for it, it's not going to work.

Mike Crispin 01:12:26 
And back to data: you talked a few episodes ago about how you're going to have to have money to get data. If we're going to need money to get identity, it's not going to work. So it's got to be open source, or a nonprofit consortium of multiple countries, not out to make a profit but out to try and make people safe.

Mike Crispin 01:12:48 
And there's going to have to be a huge wave of public money and donated time to make something like that happen, or we're just not going to get there. I don't think autonomy is going to be possible without a real identity, a global identity.

Speaker 1 01:13:06 
I don't want to be glib about it, but that probably won't happen in our lifetime. I mean, people have to concede. First of all, in order to curb the loss of individual autonomy, you have to stop ceding autonomy.

Mike Crispin 01:13:25 
To be able to do the things that give you value on the internet, in a way where you, as a paying customer, don't become part of the product, or get influenced by things you don't want or need to see. As long as you're paying with a credit card, that stuff's going to come to you one way or another, or those companies are going to be breached and that information's going to come back to you one way or another.

Mike Crispin 01:13:59 
And with the tidal wave of information that hits us every day and week, it's hard not to be influenced by certain things. But that's also life. You talk to people at bars, you go to school, you go to work. People have ideas, they talk about things. Some people are better at influencing than others; some people have better facts, better information.

Mike Crispin 01:14:18 
Well, we have a whole...

Speaker 1 01:14:18 
...industry built around people called influencers. I mean, think of the absurdity of this.

Mike Crispin 01:14:27 
That's why, as much as there is a lot of heat and, I think, negative energy when it comes to cryptocurrency and all of these things, that's the first use case. And if that model dies because of concerns that people can do bad things with it, I do think that that will

Speaker 1 01:14:54 
us from. I don't think blockchain is going to die, Mike. I think it's going to have to take on some new iterations, especially for layer two, layer three applications of blockchain. But we really haven't explored it in a lot of areas.

Speaker 1 01:15:07 
I mean, it's still very fringe at the layer two, layer three level. And it's also got its own power consumption problems.

Mike Crispin 01:15:15 
And how about nuclear, eh? Come on!

Speaker 1 01:15:18 
Yeah, you know what I want to do? I want to be able to rent out half of my backyard for a mini nuclear power plant for my town. I mean, fuck solar. Let me build a nuclear generator in my backyard, please.

Mike Crispin 01:15:33 
Remember in SimCity, you could build the coal plants, and then there were the nuke plants?

Speaker 1 01:15:38 
Godzilla came through and like destroyed all your shit. Yeah. Yeah 

Mike Crispin 01:15:43 
Well, there was always another power plant: the fusion power plant that came afterwards, if you had all the money in the town, that was supposedly super safe. We're gonna need one of those.

Speaker 1 01:15:56 
Yeah, it still didn't stop a natural disaster from destroying it, so you had to start all over with the game. But, you know, when you got bored of SimCity, you would just send in the natural disasters.

Speaker 1 01:16:05 
There was the option to, like, send in all the beasts, like, fuck this town up.

Mike Crispin 01:16:10 
That itself is a good point, right? I mean, anything we do that's going to bring us a lot of energy or value, you know, is going to come with risk.

Speaker 1 01:16:24 
Dude, if you ask AI to tell you what the risk is of losing AI, AI is only going to tell you what a human has told it about the risk. It's not going to be able to say to you, actually, I've been thinking about this, Nate.

Speaker 1 01:16:39 
I had this idea about my own risk and existential effect on the United States or the world. It can't do that. There's no way it can do that. But we're getting there, I suppose, in a manner of speaking. 

Speaker 1 01:16:55 
But all this talking about crypto and blockchain only sort of edifies the point, which is that 99.999 and a whole bunch more nines percent of people have no fucking idea what you just talked about, nor do they understand how it applies to the loss of autonomy.

Speaker 1 01:17:13 
They just know that when they go under a turnpike toll gantry, or they go to a store and swipe their phone, they're giving stuff away. They just don't care. It's a convenience factor.

Speaker 1 01:17:31 
It's all these other things. 

Mike Crispin 01:17:33 
But then they think their phone's listening to them. And then they're like, how does it know? And the very, very real thing is that these algorithms know us really well.

Mike Crispin 01:17:48 
And it's not- 

Speaker 1 01:17:49 
We could call this not the loss of autonomy; we could call it the growth of the algorithm. I mean, it could be the same exact episode.

Mike Crispin 01:17:57 
They just do it so well that it's putting stuff up that it knows you were going to talk about. I mean, it happens a lot, and people swear that the phones are listening to them.

Speaker 1 01:18:11 
I made a note over here that we'll talk about blockchain in a future episode. We'll also talk about predictive analytics. We've never covered that topic, but it's also just as fascinating, especially when you apply it to the corporate day-to-day data layer, which no one ever maps.

Speaker 1 01:18:26 
But the things AI is doing around knowing our behaviors, we could actually do ourselves against our own people with very little investment, and you'd be terrified to know the types of data. We're going to do that episode.

Speaker 1 01:18:41 
That'd be pretty awesome, and it'd just scare the shit out of everybody, including ourselves, with what we can know in a single day at a company.

Mike Crispin 01:18:50 
I just think it's a big business opportunity, or just an opportunity to make the world a better place, depending on whether you want to take the Sam Altman path or the Linus Torvalds path, to re-emerge this sort of blockchain technology. We don't even have to call it that.

Mike Crispin 01:19:09 
Just use it underneath to begin to create a semblance of a better network model that's safe, that can prove identity if needed, if you want to be identified, if you want to have...

Speaker 1 01:19:30 
...is not the Pied Piper. I mean, essentially, we're in a position where we're being led to believe that the people who are so-called geniuses, or people riding the wave of pioneerism, are somehow going to positively transform humanity, when in fact, no one knows.

Speaker 1 01:19:50 
Like, every single thing that comes out of Gartner is the same thing that you and I know right now. It's all, who knows? Let's just throw darts at a wall, speculate on the craziest shit that could happen, and make some money in the process.

Speaker 1 01:20:04 
There are no geniuses left, right? They're just making shit up as they go at this point. No one's got a vision for this. The vision is to grow as big as possible and beat the crap out of everybody else. 

Speaker 1 01:20:15 
And along the way, collect as many human beings' algorithms as possible to fuel this engine. 

Mike Crispin 01:20:27 
You're right. But we're...

Speaker 1 01:20:28 
But let's not scare everybody tonight. Let's pivot to how this affects IT. We are the Calculus of IT. We go ahead and do all the complicated math that helps people in IT, like you and me, figure out how to navigate this crazy fucked-up world.

Speaker 1 01:20:54 
So all your points are awesome, Mike. I mean, I'm not disagreeing with anything you said; you're hitting the nail everywhere. It's identity, it's autonomy, it's clever marketing.

Speaker 1 01:21:06 
It's all these sorts of columns you have to line up next to each other. So yeah, we're being very cleverly marketed to. Go ahead, I'm sorry. Based on the algorithms we're feeding.

Speaker 1 01:21:18 
Yeah. That are basically created and generated by people that are way smarter and or AIs that know our behaviors and sort of that pattern recognition, know how to sort of sell to us, which then goes back to the beginning of the loop. 

Speaker 1 01:21:30 
Like, there's this effect going on, and we're effectively data points. You and I are data points, and you and I might exist on a slightly different spectrum than most people, because we're spreading our identity in so many places so fast, way faster than the average human.

Speaker 1 01:21:50 
Like, how many things did I sign up for today that I'm gonna be trying out? I don't know, seven new platforms today. I gave out who knows how many of my email addresses, and my credit card to three. It's a matter of them saying, holy shit, this guy's algorithm is insane.

Speaker 1 01:22:07 
Like, what's he doing? Well, let's sort it out. What's he gonna do next? Let's keep trying to figure out the pattern. So I wanna switch the lens and apply everything we just talked about back to Nate and Mike, the IT leaders, or our unsuspecting, happy-to-just-have-a-job, doing-good-things-for-their-company IT leader.

Speaker 1 01:22:32 
And let's talk about the mirror effect. So again, I made some key points about the mirror effect. First of all, we have enterprise technology, which has always been a little bit behind consumer tech, and Google's a great example of this, but enterprise tech is now directly following the same patterns.

Speaker 1 01:22:53 
It's no longer even remotely opaque. It's just right out there. We have a huge reduction in self-hosted solutions. You can still host something internally, I suppose. I don't know anybody that actually still uses a physical firewall, but you could still build a server if you wanted to; you can still do physical things in your building, and tons of companies do this.

Speaker 1 01:23:20 
But in an effort to go cloud, all cloud or mostly cloud, companies, not even aware that they were doing it, decided to give up a certain amount of autonomy and let somebody else entirely take over their world.

Speaker 1 01:23:37 
And that's also in line with the increased dependence on the big guys, dependence on AWS, the Notions of the world, which run on AWS. I mean, running on these platforms and saying, they'll still be here tomorrow.

Speaker 1 01:23:52 
That's an assumption, it's a marketing position, and it's a willingness to concede a certain amount of autonomy. Let me ask you this question. Let's say that you bought a house, and the realtor was like, that house will be here every day until the day it's not here, but you're gonna buy it.

Speaker 1 01:24:18 
You're gonna buy it until that day happens, right? You'd probably still buy the house, because you wanna live in that town, you like the house, you think it's cool, and great neighbors and shit like that.

Speaker 1 01:24:28 
But there's this thing hanging out there: one day this house could just disappear, and I could come home and there's just a big patch of grass. Yep, we don't talk about that. But that's actually the reality of all these things, which is, I could go log in tomorrow and find just a big "this website no longer exists" notice.

Speaker 1 01:24:49 
Yes, yes. 

Mike Crispin 01:24:54 
That's part of playing with fire in the cloud. That's what people used to say, right? Because you can't control that data, that hardware. You're paying someone else to do it, and you're trusting them with it, right?

Speaker 1 01:25:06 
So let me ask you a question, Mike. When did we switch the argument? I remember having this discussion back in 2008, 2009: oh, if we go to the cloud, we have no control. And of course, my position, as this comes as a huge surprise, right, was like, fuck it, we're going to do it anyway.

Speaker 1 01:25:22 
But for most people, that took them years to get there because they wanted to retain control. But eventually, we stopped having that discussion. And I don't know what particular year. 

Mike Crispin 01:25:36 
Yeah, you know, I think it's just how we progress over the years and over time. We take more risk, and we compound what's underneath the infrastructure.

Speaker 1 01:25:53 
But you can still buy a server. I mean, you can still buy switches, you can still build VLANs. I mean, you can still... which is probably

Mike Crispin 01:26:01 
why it might kind of have, like, this swing that is way out.

Speaker 1 01:26:05 
Is it going to spring back though? 

Mike Crispin 01:26:09 
Well, I don't know. I mean, I always used to think that we have these supercomputers in our pocket, and shared compute could be a possibility, you know, in the future. Now we're back to edge computing, but it's all shared, in our pockets.

Speaker 1 01:26:27 
I just thought of another topic. 

Mike Crispin 01:26:29 
And we're renting those cycles out, just like we're supposed to rent out our self-driving taxis soon. But we're not talking

Speaker 1 01:26:39 
about by the way 

Mike Crispin 01:26:40 
We don't have to talk about that. But that's the decentralization, right? The more and more we centralize everything in the cloud... First, we started very centralized with the mainframe. Then we decentralized to the client.

Mike Crispin 01:26:57 
And then we pseudo went back to client-server and came back to centralizing. But really, we were decentralized, because we could put servers and laptops in the same office. And then we moved off to this sort of cloud model.

Mike Crispin 01:27:11 
We're going to put all the servers in Amazon, and we're going to be very excited about that. And then there was an announcement about platform as a service, and everyone was like, I just don't understand what the hell that is.

Mike Crispin 01:27:22 
And no one really got why that was important for a long time. And then SaaS came along, and everyone got that, because software as a service everyone could see, touch, feel, and understand. And now it's like, oh shit, I don't need those servers anymore.

Speaker 1 01:27:38 
I'm gonna do that. 

Mike Crispin 01:27:39 
Platform as a service became important because we needed data integration, and we needed machine learning, and we needed to do application development and build applications on a cloud in a seamless way, where we didn't have to worry about moving the resources around and paying an infrastructure team to move them around.

Mike Crispin 01:27:57 
So we worked within a walled garden, which is what platform as a service was and still is. So now we're talking about some of that blockchain stuff, and about some decentralization in terms of doing on-device AI now, and that's really important: getting things away from this centralized model and going out to a decentralized type of infrastructure over time.

Mike Crispin 01:28:23 
And so to me, I think of it as: well, how is that computer, that GPU, gonna change? We have a massive power problem that we're trying to solve. We might get really desperate to distribute the power out to people's devices, and we'll pay them for it.

Mike Crispin 01:28:38 
We'll just share that compute, just like the SETI@home network. You know, we'll figure out a way for people to lease out the compute that's running in their pocket. Over time, there's a Moore's law kind of catch-up to all that, and we're able to do that.
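The SETI@home-style leasing Mike sketches, a coordinator farming work units out to pocket devices and crediting each one for completed units, could look something like this toy simulation (the device names, the crunch function, and RATE_PER_UNIT are all made up for illustration, not from any real platform):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: a coordinator splits a job into work units,
# "devices" (simulated here by threads) compute them, and each device
# is credited per completed unit.

RATE_PER_UNIT = 0.001  # assumed credit paid per completed work unit

def crunch(unit):
    """Compute one work unit; the sum-of-squares is a stand-in workload."""
    device_id, numbers = unit
    return device_id, sum(n * n for n in numbers)

# Nine work units spread round-robin across three simulated phones.
work_units = [(f"phone-{i % 3}", list(range(i, i + 100))) for i in range(9)]

credits = Counter()
results = []
with ThreadPoolExecutor(max_workers=3) as pool:
    for device_id, result in pool.map(crunch, work_units):
        results.append(result)
        credits[device_id] += RATE_PER_UNIT

print(dict(credits))  # each simulated device was credited for 3 units
```

The interesting design question is exactly the one Mike raises: the coordination and payment layer is easy; the hard parts are verifying untrusted results and the price per watt on the device side.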

Mike Crispin 01:28:55 
And then there's the guarantee, you know, if you're Apple, you're saying, oh, this is privacy, it's in your pocket. How will that change? Will that even happen? I don't know. It seems pretty far-fetched, given that GPUs get faster and faster.

Mike Crispin 01:29:08 
But what's more important than anything now, with compute, is price per watt. It's energy. And, you know, that naturally says to me: okay, it's gonna start moving back out, at least in the consumer space. We've gotta do more on-device, not just for privacy reasons, but for power reasons.

Mike Crispin 01:29:26 
So there's a lot to unpack, but what I'm saying is, if we tie it back to autonomy and IT: I think IT often follows the trends. There's not a lot of autonomy in terms of decision-making. If you're going outside the circle of the big vendors on what's debatably a commodity technology decision, you have to ask what the value is of doing that, and what you're bringing to the company by taking a risk,

Mike Crispin 01:29:54 
by going out there and just crazy innovating and throwing it at the wall. It's harder to do, I think, in a business. And you've got these trends going back and forth: either you jumped on the VMware train or you didn't, you jumped into the cloud or stayed on premise, you jumped into AI or you didn't, and you're gonna be left behind.

Mike Crispin 01:30:16 
So there is far less autonomy. Individual autonomy, I think, is a different story, but autonomy as you work for a business is, I think, greatly reduced. And I don't think that's gonna change as long as you don't own your own business.

Mike Crispin 01:30:36 
You're gonna have to work within certain rules and frameworks, governance, compliance rules, all sorts of things. And your autonomy to do the things you think are right, not to poo-poo innovation, but it's much harder, because you've really gotta have that hard data upfront.

Mike Crispin 01:30:54 
That takes time, and it takes money, to produce your use case and your mission statements and your budgets, and you can't move as fast.

Speaker 1 01:31:03 
Okay, hold on. Pause right there. I have a whole bunch of disagreements, but. Oh, good. Yeah. No, everything you said, everything you said makes perfect sense. And I don't disagree with what you said. 

Speaker 1 01:31:13 
I disagree with the cavalierness of just assuming that we're all funneling towards a certain point. Because if you were to keep going with this logical thought, if I were to continue to draw it out: in 10, 15 years, we'll all be using exactly the same thing.

Speaker 1 01:31:30 
There'll be no decisions left. IT will be a single box. But let me back up for a second, because you mentioned "as a service," and it's such a killer phrase. When "as a service" was coined and came out, it was XaaS, and everything became as a service.

Speaker 1 01:31:57 
Toilet as a service, burger as a service, whatever the hell you wanted, it was just as a service. Rent it. You're renting the thing. Okay. You no longer own it. You're renting it.

Speaker 1 01:32:09 
And so it's not loss of autonomy, just smart budget-making, which is: well, if I rent this service, and it costs less than a server, then I've saved money for the company. That's the smart decision. But what was really happening was, yes, you were making a pragmatic, monetary decision for your company,

Speaker 1 01:32:29 
but you were also sending the company in a direction where, every single day that went by, it would become harder and harder to go back the other way. It's like the initial part of this discussion: yes, there's always an option to go back.

Speaker 1 01:32:44 
There's always the option not to do the thing, but the option not to do the thing becomes increasingly more difficult, to the point where not doing the thing is the worst thing. So you have to keep doing that thing.

Speaker 1 01:32:55 
And so when you get into the as-a-service model, you're basically saying, cool, I'm going to rent this thing. And then at some point, I'll either decide it's shit, or I'll realize that, actually, I'm very dependent on it.

Speaker 1 01:33:08 
My whole company needs it. And I'll have to move to their highest-tier platform, which has all these other features they promised that were sort of out of reach. And then you're all in. And how many times have you looked at a platform and been like, no, no, I'll just take the basic, please?

Speaker 1 01:33:24 
Versus the one where, if you just pay $25 more a month per seat, you get SSO, you get unlimited hours or something, right? I was looking at the Synthesia licensing today, and it's kind of like this: basic is, you know, 30 minutes; moderate, I think, was 180 minutes; then pro was 360.

Speaker 1 01:33:48 
So we're talking a difference of half an hour to six hours over three tiers, and then it's immediately enterprise. There's no 12 hours, 18 hours. They just go right from dabbling to expert.

Speaker 1 01:34:02 
And the difference in what's provided in the platform is astronomical. It's not even a slow graduation where you get two things, then three things, then 20 things.

Speaker 1 01:34:17 
What? 

Mike Crispin 01:34:18 
Sorry, when I'm talking about autonomy, I'm speaking of technical autonomy. I'm not saying people are losing the ability to make hard decisions about right and wrong, or to raise different issues and act on them of their own free will.

Mike Crispin 01:34:42 
I think that's totally possible when it comes to these technology decisions. I think SaaS has opened a lot of doors, and there are a lot of small tools and stuff we can purchase and use to go in different directions.

Mike Crispin 01:34:54 
What I was saying about the trends is that, if we're talking about the big technology trends that are happening, you lose the ability to have a kind of independence, if you will, to not go in the direction of a lot of other groups on certain technologies. Take Microsoft, for example. That's a great example.

Mike Crispin 01:35:11 
You know, 90% of IT leaders are going to use the Microsoft stack. And if you put something else in, the next person who comes in is going to take it out. That's a lot of independence lost in a technology decision, no matter how good the alternative is.

Speaker 1 01:35:28 
That's not autonomy, Mike. That's the person who made the decision at the time settling on that decision due to a long chain of lack of autonomy. And the person who replaces them also suffered the same fate.

Speaker 1 01:35:41 
They came in and they threw this...

Mike Crispin 01:35:46 
That happening over and over again, to a certain extent, pulls back people's confidence to make those decisions. They're not going to fight a war over something that's going to get pulled away anyway. Right.

Mike Crispin 01:35:59 
This goes, again, back to the original point. That's what I mean about the loss of technical autonomy.

Speaker 1 01:36:05 
It's the simplest versus the most complicated, and the simplest is just to simply concede. That's right. And I think that's what happens a lot. Yeah, which is why we don't have Linux in corporations, and why we don't have companies that are like, fucking Google Workspace is awesome because it's so much better in so many ways.

Speaker 1 01:36:23 
No, they're saying, actually, we don't care that it's better in so many ways. It's just not the easier option, so we're gonna concede.

Mike Crispin 01:36:33 
Well, what's the first thing you hear from most people when you go talk to your peers? "What's everyone else doing?"

Speaker 1 01:36:44 
Right. 

Mike Crispin 01:36:45 
And granted, I think a vast majority of us as heads of IT don't just say, oh, good, that's what I'm going to do. But it's good to have that frame of reference to see where the ground is.

Speaker 1 01:37:00 
I mean, objectively speaking, and again, just based on my peers, more often than not they will tend to go with the industry trends.

Mike Crispin 01:37:12 
That was the short point. 

Speaker 1 01:37:14 
Yeah, I understand the point you're trying to make, but honestly, it's a matter of saying, and this is where that cognitive dissonance comes in: if everyone in America is using a Gmail account, but everyone in America is like, there's no way I could use Gmail at work, there's a break, right?

Speaker 1 01:37:30 
There's a break in their thought, and it's because of the loss of autonomy. They've been funneled so long into thinking that only one thing applies in the corporate world that they've lost the ability to think laterally, right?

Speaker 1 01:37:42 
So think about the death of on-prem email. I don't remember the last time I had an Exchange server; I think it was 2009, maybe 2010. And everyone quickly got rid of them as soon as they could for 365 and Google Workspace and whatever.

Speaker 1 01:38:02 
Cloud vendors: who can you name besides AWS, Azure, and Google? Well, you and I could probably come up with a few. Most people couldn't come up with any. The mandatory OS telemetry, right? You're talking about, okay, well, if you're on Windows, you're all Windows 11.

Speaker 1 01:38:21 
Even though you might have a couple of machines on 10, they're going to 11. Or if you're half Mac, half Windows, that's really, I think, more of a unique scenario. And that's because you have people who, their whole lives, have only been taught to know one thing.

Speaker 1 01:38:38 
And so, as I'm thinking about these, God, I wish I had given this more thought, because so many of them come back to the basic decision of what's the simplest path. I'm being shepherded toward the simplest path.

Speaker 1 01:38:59 
So security requirements or the cost and complexity of maintaining individual things. Like, yes, while it makes more sense to have my eggs spread among multiple baskets, it makes things more complicated. 

Speaker 1 01:39:14 
If I just put all my shit in Microsoft, that's the simplest path. So I'm going to consider that one budget line, right? One budget line. God, so many examples, and they keep happening. We don't conflate the individual loss of autonomy with the corporate world, but there are so many mirrored effects now.

Speaker 1 01:39:38 
It's almost the same thing. Well, let me turn the question around. I'll let you answer it first, and then I'll answer it: where today do you have autonomy in IT decision-making?

Speaker 1 01:39:57 
Like, give me some examples of where you think you have autonomy. 

Mike Crispin 01:40:04 
Uh, me personally? I would say...

Speaker 1 01:40:07 
No, no, you the IT leader. Oh, okay. Where do you, the IT leader, today still have autonomy in decision-making?

Mike Crispin 01:40:19 
I think on pretty much every level, to some extent. Okay.

Speaker 1 01:40:25 
So cite some.

Mike Crispin 01:40:27 
Um, choice on project management platforms, or somewhat of a choice on ERP, or just talking technology. Or, um, people management, you know, your approach to people management and working with different leaders. Everyone has their approach; they can learn from different places, but they can choose different ways to do that, or how they want to go about it. I think as a leader you have that ability.

Speaker 1 01:40:55 
Okay, so what if everyone said to you, I'll start with your first example of project management platforms. You had a couple choices, let's say the big names, right, Smartsheet or Project or Asana, and you decided to go with a platform no one ever heard of before. 

Speaker 1 01:41:12 
Yep. You wouldn't win that battle, right? You'd lose that battle. 

Mike Crispin 01:41:19 
You might win it. You better be good. 

Speaker 1 01:41:21 
Well, I mean, if you won it, though, you would be winning the off decision, the zero decision. Like, you'd be winning the Pyrrhic victory, which we know about from the individual side. I'm not trying to put words in your mouth, mind you.

Speaker 1 01:41:35 
I'm just trying to give you the example, which is, I thought about this, right? Like, okay, where do I have decision making? Well, I can pick a platform, but I'm only picking one of four platforms. I'm not picking like the platform that no one's ever heard of. 

Mike Crispin 01:41:47 
I think if there's a platform that no one's ever heard of, and you go out and you can show them that it's valuable and useful, and they dig it, they actually might go with that. They respect that. More and more people will respect that. 

Speaker 1 01:42:01 
Ok, but that's the zero answer. That's the complicated answer. 

Mike Crispin 01:42:07 
Not necessarily. I'll give you one example. We put a small HR system in at Carrick's that no one had ever heard of, and there was a lot of pushback of, okay, this isn't Workday. It was hugely successful. It was simple; it was just going out and showing what it looked like. And there have been other times we've done that and it's been an abject failure.

Speaker 1 01:42:29 
Don't get me wrong, you can still make that decision, right? We know that we can make the alternative, hard decision. We can do that independently. That's what we're talking about, right? Right, and that's gaining that autonomy.

Speaker 1 01:42:41 
That's sort of having a point where you can make a stake and say, I'm actually making a decision that's best, that has nothing to do with my own personal beliefs or the way I've been influenced. This is simply what's best for the company. 

Mike Crispin 01:42:55 
You've got to agree to be accountable for that decision, whether it's wrong or whether it's right.

Speaker 1 01:43:00 
Right. So that's the point, right? So I won't make you read through the rest of your list, but the point is when you make the decision to do the thing that's not the concession of autonomy, you enter into the stakes of accountability. 

Speaker 1 01:43:16 
And I think that's the counterpoint, right? So the counterpoint, and this is sort of the third big point I wanted to cover tonight, which is that impact on leadership decision -making. So let's carry this over into that third point. 

Speaker 1 01:43:30 
Just in terms of that leadership decision -making, accountability in any decision that you make is yours. Like you're the head of IT. 

Mike Crispin 01:43:40 
Yep. 

Speaker 1 01:43:40 
So you maintain accountability for everything that comes out of your mouth or is typed by your hand. So you elect to go with some system that no one's ever heard of, which you feel is best of breed. You convince everybody that this is the best thing, despite the obvious choice, which is to just concede the loss of autonomy and go with the flow, if you will. Um, you now become the person most accountable for that decision. Whereas if you had gone with sort of the group decision, or the decision pushed to you by others that was sort of out of your control,

Speaker 1 01:44:18 
You probably would have still had some accountability, but way less 

Mike Crispin 01:44:22 
You just say, XYZ does it, so what do you expect? That used to work. Now it's, well, everyone else can do this, right? Why can't we?

Speaker 1 01:44:34 
Well, all right. 

Mike Crispin 01:44:35 
Everyone runs on Workday. Why does our Workday environment suck? There's the flip side of that, too. Everyone uses these tools; why can't we get it right? And then it's like, uh-oh. What I was going to say is, anytime you make it

Speaker 1 01:44:53 
There's less blame, there's less blame and accountability. Like if you're like, well, this is what you all wanted. Let me see if I can make a call and get it fixed. No one's gonna burn you at the stake. 

Speaker 1 01:45:03 
Whereas if you came in and said, no, no, I know you guys have never heard of this before, but I have the most awesome platform. 

Mike Crispin 01:45:09 
Yes. So, you know, it's like investing in a stock no one's ever heard of, right? You're taking a bigger risk; that's natural, right, if you're going to go with something. But I guess what I would say is, if you're going to go with something that's outside the realm, it's not just my decision.

Mike Crispin 01:45:27 
It's whoever's involved with it; it's everyone's decision to some extent. You're really going to have to have people in your corner, in agreement that it's meeting the requirements and that's where it needs to go.

Mike Crispin 01:45:40 
And if it does, and it really is as great as I think it is, let's say, it could be a huge thing, a huge competitive advantage or a huge cultural boon. When I implemented Slack at Carrick's in 2015, no one had ever heard of Slack before.

Mike Crispin 01:45:57 
And it was like, what the hell is this? And why are we doing this? And trying to get the money to do it was crazy. And it became a staple of the business, of the company. And similarly, at Kibya early on, it was kind of like, this is really cool what we're doing.

Speaker 1 01:46:10 
Whoa, hold on, you didn't implement Teams, dude? What are you, an asshole? Like, everyone does Teams, come on now.

Mike Crispin 01:46:16 
Well, here's the thing: Teams didn't exist.

Speaker 1 01:46:19 
So let me ask you this question; how early was it? Well, you brought up another great point, which is: does group acquiescence absolve you from the loss of autonomy?

Speaker 1 01:46:32 
In other words, if the group all believes that something is the best way to go because it's the best path, and to a degree you could reasonably assume that everyone there has lost some degree of autonomy, does the fact that the group decides this bring some autonomy back, or does it simply reflect a larger loss of autonomy?

Speaker 1 01:46:59 
And I want you to think about this one, because if I'm with five people, and we all say, oh, that platform, it's all over the news. I saw that billboard, and I heard from so -and -so that it's great, and they have this wonderful video. 

Speaker 1 01:47:13 
That's the way to go. These are all things that you would typically mitigate by having functional requirements in place, et cetera. And you would put in these guardrails to make it feel like you had some sort of conscious choice. 

Speaker 1 01:47:28 
But in truth, if you're doing it as a group, that does lend a lot more power to the sense that this was an actual conscious decision and not a loss of autonomy, versus you, Michael, who would see all the same billboards and all the same information and make a decision yourself, in which case a case could be made that you actually suffered a loss of autonomy to get to that same place.

Mike Crispin 01:48:01 
I think it's the initial person who's driving the decision. If you get support for that decision and it reinforces your decision, I don't see that as a loss of autonomy. 

Speaker 1 01:48:12 
Yeah, it's interesting. I mean, does the groupthink effect absolve autonomy loss?

Mike Crispin 01:48:21 
I'd put it a little differently. Like, if you've made a decision to move, this is the decision you want to go with, and you've got to get that team around you to try to convince and influence them. And now they're coming back to you and saying, hey, I saw the billboard, you know, this and that.

Mike Crispin 01:48:36 
It's a great tool. I'm hearing from other companies about this. We got to do this. And then you find out that there's some huge glaring issue with this thing that you missed. It's going to be a lot harder for you to pull back and do the right thing. 

Mike Crispin 01:48:51 
To independently say, I've got to stop now and we've got to reassess, because you've already made this call. You've already started the wave. I think that's difficult to do. I don't know. I think I'll move up a level, actually, not go down, but move up a level.

Mike Crispin 01:49:08 
This is all just part of how much accountability and risk you want to take. Do you want to look like you're indecisive and you don't know what you're doing if you pull back, even though that's probably the right thing to do?

Mike Crispin 01:49:24 
There's a lot of things going on in your head, I think, that no one knows about or can think about or hear or understand what you're thinking and how you're feeling. And that's the piece that you have left until someone's in your head, literally. 

Mike Crispin 01:49:42 
I think we have that level of independence. It's up to us whether or not we're able to go through an uncomfortable situation as a person, as a human to do that. That's just hard to do, but you can. There's no one stopping you. 

Mike Crispin 01:50:06 
There's being wrong. Or, in some instances, being right and being expected to do it again and again and again. And that's like having your first hit album. The second album is always the hardest, if you're a musician, right?

Mike Crispin 01:50:20 
Because you've set this expectation, you've broken ground, you've done something great. Now we need XYZ system and research. Do you know anything that we can do that brings a competitive advantage like we did with the last system? 

Mike Crispin 01:50:33 
There's always going to be a wave of influence or just concern or pressure. If you do something really well, keep the streak going. If you do something bad, you're out of a job. But that's life. I mean, I think that we deal with risk and consequences and it's difficult. 

Mike Crispin 01:50:52 
It's uncomfortable. I remember one day struggling with something at Cubist. I was dealing with some people I was working with, I won't go into too much detail, on the project I was working on.

Mike Crispin 01:51:11 
And my boss said to me, you know, not everything is always going to be comfortable. You're going to have to be uncomfortable; that's part of it. And, you know, I think, autonomy or no autonomy, you can make a really good decision and get fired.

Mike Crispin 01:51:32 
And that's life. 

Speaker 1 01:51:33 
So, okay. Well, hold on, though. In your mind it's a good decision, yes. But be very careful about mixing influence and autonomy. And again, this is one of those things where, um, I was running on, what, Wednesday?

Speaker 1 01:51:54 
I was running on Sunday, um, out doing some loops, and I was thinking about this question: when is it... where's the line between what I'm influenced by and what I'm ceding autonomy to?

Speaker 1 01:52:10 
And so again, yeah. And then the accountability question came in: when does the line switch from me being accountable, to the group being accountable, to the company being accountable?

Speaker 1 01:52:23 
And so all these things sort of intermix with each other, but they all either stem from, or at some point in the chain encounter, a moment where there's a decision-making point where somebody is conceding a loss of accountability. Maybe it's the person downstream from you in finance who's saying, this isn't a good idea, but I'm being told to do this. And so again,

Speaker 1 01:52:49 
they're not even able to express their opinion; it's simply just, accept the way it goes, right? But think about it this way. The third key point was the impact on decision-making. We talked about the mirror effect a bit, but this dovetails into decision-making for IT, which we talked about just recently in terms of how AI is affecting decision-making. We had the discussion about that, and we can reflect on it a little bit. But think about some of these points, right? Like, and again,

Speaker 1 01:53:24 
I went back and listened to that one, or read the podcast transcript for that, sorry, the podcast output for that. We had strategic decisions becoming tactical vendor selection. So we're saying: my decision is not, strategically, to improve the company's X; my position is to go ahead and implement a platform. And, you know, it's a semantic discussion point, but when my language and my strategic plan says,

Speaker 1 01:53:57 
I'm going to explore capabilities to do Y, which includes selecting a platform, versus my strategic plan saying I'm going to go ahead and put this cool HR system in, or scratch the cool part, I'm going to put this HR system in: which one of those probably has the more influential concession to autonomy? Then we take risk management, which is a hard governance thing for anybody, and we simply say, well, stability is better than innovation. When was the last time we looked at a vendor, or a couple of vendors, put them against each other, and said: actually,

Speaker 1 01:54:36 
I want the one that's not progressive. I want the one that's been the steady Eddie for the last ten years, not the one that's got all the cool shit and is super fast and has the phone app. I want the super steady one. Yeah, okay. Um, and why do we make those decisions?

Speaker 1 01:54:55 
Like, why do we always say, no, no, no, give me the cool one? It might meet our functional requirements a little bit better, but in our head we know that if we pass over the stable one, which should be the number one decision, the best decision, we're actually in that case making the decision to go with the zero decision, which is the worst decision, because it's the one that carries the most risk, if we apply risk.

Speaker 1 01:55:18 
Okay. Then there's, um, the generally reduced ability to make technical choices. So what are your OS options, Windows or Mac? Maybe Mac is not even an option. So your choices go from two to one, or one to one, right?

Speaker 1 01:55:37 
Yeah. Or like, oh boy, I'd really love to buy this kind of laptop. Of course, there's none available, so I'll consider this one. Actually, I think it's better to go with tablets. But no, we need to go laptops. Like, is it truly an independent decision, or are you

Speaker 1 01:55:55 
facing a particular loss of autonomy in doing that? And then lastly, and this is the worst one, I've been struggling with this, maybe you can help me out: control versus efficiency. And this is exactly where gen AI comes in. Not the control part; I'm tired of the security thought that we lose control with gen AI. That's so fucking untrue. We either already have control or we don't have control; gen AI has nothing to do with it. But control versus efficiency. And forget OKRs and metrics and all the other shit; let's just talk plain English on this point. If I'm going to go ahead and concede efficiency to get control,

Speaker 1 01:56:37 
that's not necessarily a loss of autonomy. Efficiency versus control, whichever one is the zero or the one, is not necessarily a loss of autonomy, or vice versa. If I'm going to say, you know what, we're going to be slower,

Speaker 1 01:56:50 
I'm going to have better control. Perhaps in most companies, that would actually be the worst option of the two. But in either case, you're making a decision based on autonomy. I feel this is the best way to go. 

Speaker 1 01:57:04 
But are you truly? And so I was trying to think on, when I make a decision for my company on something that's the best decision, and of course I include efficiency and control in my thought process, among other factors, how do I even know what efficiency means? 

Speaker 1 01:57:28 
Like, where was I taught to know what efficiency is? Or what control is? Where did I begin to learn that control needs to be zero trust or whatever? You're living in the Matrix, man. So, that was a lot I threw at you in one moment.

Speaker 1 01:57:46 
Let's just back up to one point, which is: when was the last time you wrote a strategy where every single thing in that strategy was a strategic decision, instead of, we're just going to go with this vendor by default?

Speaker 1 01:58:02 
I mean, you're actually probably the wrong person to ask this, because you, like me, will do the Ferris assessment. But in general, how often do you think people will say, we're going to implement this platform, as opposed to, we're going to assess platforms, pick the best, and implement that?

Speaker 1 01:58:21 
I mean, it probably happens more than we know. I don't have data to support it, but anecdotally speaking, based on people I know, it's more like, we're going to go with this because this is what we have to do.

Mike Crispin 01:58:35 
Absolutely. So I think it's very common, especially at small companies that want to move fast. You know, they send their email out to their group that they worked with, that they've been with for a while. 

Mike Crispin 01:58:47 
Maybe it's our, you know, our CIO groups or whatever. And, you know, the general counsels have their groups, and CFOs have their groups, and whatnot. These are huge networks, right? And, you know, nine of the ten use XYZ, so just go call XYZ.

Mike Crispin 01:59:07 
You know, it's, why would we do anything different, right? I mean, and that's, I think a lot of companies do that, also because they don't have the resources to do the assessment or they don't think they need to. 

Mike Crispin 01:59:21 
I think it happens a lot. 

Speaker 1 01:59:24 
What about 

Mike Crispin 01:59:26 
the commodity argument comes in. Why bother? What's the value? And I hear that often, why am I looking at this random tool when there's one that is good enough? And why spend all this time looking at this one thing? 

Mike Crispin 01:59:44 
Well, because I think... 

Speaker 1 01:59:46 
That's a huge question, Mike: when we're asked that question and we simply concede, which I've done, which you've done.

Mike Crispin 01:59:55 
Yeah, concede because there are other things that are more important to do. Some of these small tools that I think would push the envelope a little bit aren't going to change the world. Maybe there's somewhere else to do that.

Mike Crispin 02:00:08 
You've got to figure out what you're going to hang your reputation on. If it's something you believe in and you think is going to bring a ton of value, then you should do it. But if it's something you're not sure of, and it could be replaced in six months, and it's cool right now but there are 900 other vendors about to do the same thing, then you've got to weigh that in your mind and make your decision.

Speaker 1 02:00:34 
But making that decision isn't necessarily free of influence.

Mike Crispin 02:00:42 
You might be influenced, sure, but look, I mean, or even forced to do it.

Speaker 1 02:00:48 
because they're a partner with somebody else that you have to work with. I mean, some other factor. 

Mike Crispin 02:00:52 
Yeah, I think there have been some tough decisions that we've both had to make from an IT perspective. Some of it's, some of it's personnel related. Some of it's technology related. Yeah. 

Speaker 1 02:01:04 
Well, I feel like I got autonomy back today when I decided to cut my secure email gateway vendor loose, because they suck shit. But why did I pick them? I picked them because, it wasn't so much a loss of autonomy, they were the number one choice.

Speaker 1 02:01:19 
It wasn't the zero choice. 

Mike Crispin 02:01:22 
Great. 

Speaker 1 02:01:23 
Um, which, if I backtrack to my point of influence, was probably never a moment of autonomy loss, but more of a mostly influential point.

Mike Crispin 02:01:36 
But if you are influenced, is that bad?

Speaker 1 02:01:41 
But it is. If you're influenced, you can no longer make an objective decision. Like, if I'm influenced to do a thing, and then I only know that thing, I'm incapable of thinking of something other than that thing.

Speaker 1 02:01:53 
And that by definition is a loss of autonomy because you've lost the ability to have a new thought regarding a certain area. Like you've conceded all future thought about this thing to a single point. 

Speaker 1 02:02:05 
It's myopia, right, in a nutshell. So let's get to the next one, which is, and I made some notes. We're actually going to come back to this next podcast, but I want to talk about risk management regarding stability versus innovation. 

Speaker 1 02:02:31 
Jesus Christ, I mean, how many times has this decision come up, right? Like, well, the risk is high, but man, that's cool. Or, yeah, it's cool, but this other platform over here is so stable. They've been around forever and their platform, man, it just runs so smoothly. 

Speaker 1 02:02:49 
It doesn't have all the bells and whistles, but it's really, really good. And I think there's still an opportunity there for autonomy. But when it comes down to an apples -to -apples comparison, or as close as one can come, I think when you get to that moment and you're actually weighing stability versus innovation, you start to concede autonomy because you may be somebody who's extremely innovative. 

Speaker 1 02:03:19 
And so therefore, by default, you're generally going to select the solution that best fits with your mindset, as opposed to being completely, truly objective, regardless of your innovativeness. Or you could be somebody who's, you know, an antiquated thought person who's like 30 years in the business, who's like, I'm always going to go with steady eddy because that's all I've always used and can't even think about innovation. 

Speaker 1 02:03:44 
Um, that person would also need to be able to be objective. So when you think about, think about that, like what's your position on decision -making when it comes to stability versus innovation? 

Mike Crispin 02:03:58 
So if I think of stability or risk, like risk versus innovation to some extent, right, is that 

Speaker 1 02:04:07 
Yeah, stability and risk are interchangeable in this case. So.

Mike Crispin 02:04:12 
I think the most innovative things are the things you do that reduce risk and make things better. So if you're creating more risk while you're making things better, you're not innovating. This is just my Crispin definition.

Mike Crispin 02:04:33 
If you're innovating, you gotta be reducing risk. You're getting your cake and eating it too. You can't be opening up the doors to do other things. Take Okta, great example, or single sign on technologies, right? 

Mike Crispin 02:04:45 
When that came about, there was a huge, oh my God, this is securing us and it's making people's lives better. At the time, that's hugely innovative. But if you're gonna put something out there that opens up a bunch of security holes, but makes you move fast, I argue there's probably not a lot of innovation because it's gonna die quickly once you get hit with some sort of issue or security problem or loss of data or whatever it might be. 

Mike Crispin 02:05:13 
So the most innovative solutions that I keep kind of coming across or that I can enact or that I can innovate with are the ones that are making the employee experience better and reducing risk, which hence builds kind of a springboard of productivity if you can do both of those things. 

Mike Crispin 02:05:31 
Because we work in a very compliance-driven industry, anything that can make things simpler and move them forward at the same time matters. So it baffles me that some of these big platforms everyone uses open up a whole host of risks.

Mike Crispin 02:05:48 
I'm not gonna pick on the company we like to pick on too much more, but a huge example of not much innovation and a lot of risk. So essentially- 

Speaker 1 02:05:59 
The poster child for a concession of autonomy. I mean, when you decide you're going to go with the Microsoft stack, how can you truly say you've objectively assessed it and it's the best one, versus, I'm going with this because, yeah.

Mike Crispin 02:06:16 
Nobody gets fired for buying. 

Speaker 1 02:06:17 
No one gets fired for buying IBM, exactly. 

Mike Crispin 02:06:21 
That's the tagline to the autonomy discussion, right?

Speaker 1 02:06:25 
Exactly. I wish I had such a brilliant statement. It's so true. God, there are so many things wrapped into that.

Mike Crispin 02:06:36 
You put in Google with all Macs at a company, and the next CIO who comes in is going to put in Microsoft and be a hero.

Speaker 1 02:06:41 
Yeah, and then talk shit about you for the next two years and get wins everywhere. I mean, ultimately, there's this other little dynamic inside all of this, too, by the way: that innovative company you opted to go with, sort of the zero option, if you will, the next thing you know they've been around for ten years, they've become the stable company, and they're fighting against the innovative companies.

Speaker 1 02:07:06 
And so now you've flipped your decision, right? You're going to stay with the stable company, even though you know there's potentially a new, more innovative company, because you've already gone down that road.

Speaker 1 02:07:16 
The first time you went down this road, you picked the more innovative company. Now they've become stable. And so let's say you reach an inflection point again, where you have to make the decision.

Speaker 1 02:07:27 
And now you've gone the other way: no, no, no, I'm not going innovative, I'm going with the stable company. It's a very interesting dynamic, because you're still retaining autonomy to a degree, but you've changed your entire approach toward how you think about it.

Mike Crispin 02:07:44 
What I was saying is, I guess, if you're talking about stability from a continuity and cost perspective, that's definitely one of those things you have to apply to this model, right? If you go with a little company that could be acquired and disappear in a month, there's a big risk with that.

Mike Crispin 02:08:06 
But if you're able to reduce risks in other areas with this small tool, this new technology, it can create huge value. Maybe it can automate work, or reduce costs.

Mike Crispin 02:08:22 
Anything you can do to drive that, that's the innovation piece. It has to do those things to be innovative, in my opinion.

Speaker 1 02:08:36 
But it's a fascinating dichotomy, because you start off putting your career on the line by picking the innovative dark horse, which then becomes stable. Then you put your career on the line by staying with the stable company, even when an innovative dark horse has come out that's apparently the new one.

Speaker 1 02:08:52 
You're flipping things around on yourself, again, with a sense of autonomy, because you're saying no, actually. Or it could be the reverse: you definitely don't have autonomy, because you can't leave the stable company.

Speaker 1 02:09:06 
But you're looking at this from a perspective of, yeah, I made a good choice and I'm sticking with it; after all these years, there's no need for me to deviate. That, I think, is a good example of continuing to exercise autonomy over decision-making.

Speaker 1 02:09:25 
But it also goes hand in hand with saying, but at the very least, I should assess the market. Like I'm not going to completely just say, no, no, this is the best thing. 

Mike Crispin 02:09:36 
You have to, but some companies won't do that in the interest of time, or they'll just go there.

Speaker 1 02:09:40 
Well, that's true. And so let's suppose you had the time, and you stayed with the stable company but assessed other companies. I think that would be the highest pedigree you could display for autonomy in this case, because not only do you know the best decision is to stay stable, but you're also going to investigate what's happening so you can make better decisions later, based on information that's not,

Speaker 1 02:10:03 
well, obviously influenced by marketing, but you get my point, I think.

Mike Crispin 02:10:09 
I do get your point. I think there are certain technologies in the portfolio where you're going to invest in the stable technology, because that's what your business calls for.

Mike Crispin 02:10:21 
So there are certain things where you go stable and certain things where you're going to go more progressive, or more innovation-based. But to try and do that across the board, I think, is probably...

Speaker 1 02:10:34 
Just even the notion, Mike, of having a position on where you will be innovative and where you won't, that in and of itself is an indication of your sense of autonomy. If I can sit there and say, for anything related to critical data, I'm gonna go stable, anything not related to critical data, I'm gonna go innovative, you're making decisions. 

Speaker 1 02:10:56 
Now, they may have been heavily influenced by market leaders or analyst reports or other people in your sort of sphere of influence. But ultimately, yeah, you're retaining some autonomy by deciding on the stack that you're gonna run and where you will take chances and where you won't.

Mike Crispin 02:11:14 
That's part of building out your strategy.

Speaker 1 02:11:17 
Exactly. And so the last point for tonight, on this particular topic regarding IT decision-making, was on the balance between control and efficiency. And, you know, think about decision points, right? So you just mentioned, well, we talked about having to go all in on the Office ecosystem. I mean, say I'm a company that realizes we have to use 365 for certain things, but only the most basic; everything else is going to be some other third-party system. And that's all my sort of autonomous decisions, measured in with all the ways I've been influenced, all coupled together to make this entire stack, right? But what about...

Mike Crispin 02:12:01 
And frankly, it's good enough for a lot of stuff. I mean, it works. 

Speaker 1 02:12:04 
Oh yeah, for sure. And I'm willing to shoulder the accountability for this decision-making process, because for me, a lot of it's easy to justify, and easy to justify that I haven't been influenced. But I know this by experience, which is a heavy factor, and we'll get to that next week, a heavy factor in a lot of what drives autonomy: does your past experience enable you to be better in terms of not conceding?

Speaker 1 02:12:27 
Well, in the individual world, it does not. But in the professional world, influencing you, right? Right, in the professional world, you probably have a better shot at letting your past experience help you build autonomy.

Speaker 1 02:12:41 
And so, like think about this way, like for cloud, you have AWS, Azure, or Google. I mean, yes, there's some other smaller ones, but let's say there's a big three. You have to make a choice, like one of three, right? 

Speaker 1 02:12:54 
Or you have Google Workspace or 365. You don't have 10 choices, right? Well, yes, I know you will say Proton Mail, but let's suppose we're being realistic. And then cloud versus on-prem, although I think that ship has sailed.

Speaker 1 02:13:13 
And then lastly, like things you're gonna build yourself, even at a cloud middleware perspective, API stuff versus off the shelf. And a lot of times you're gonna come into a company, and for all four of those areas, you're already gonna have sort of a predilection towards, I'm this, I'm this, I'm this, I'm this. 

Speaker 1 02:13:39 
Very, very unlikely to deviate. And this is all sort of guided by your past experience and decision -making, et cetera, et cetera. But the more you decide to stay on that route, right? And I think the more you get into those things, the more you're simply conceding the fact that you're gonna just always do this. 

Speaker 1 02:13:55 
You're no longer gonna make decisions other ways, which is another loss-of-autonomy factor, which is, again, truly why we see very specific stacks show up over and over and over again, not only in the MSPs, but in certain IT leaders who will say, there's no other thing I will do.

Speaker 1 02:14:15 
I am conceding all future decision -making on IT to this one stack. So for control versus efficiency, and again, I don't know which one to put as the one versus the zero. Like, I don't know which one to put as the smart one. 

Speaker 1 02:14:31 
I think they both can be in either place. 

Mike Crispin 02:14:35 
Yeah. 

Speaker 1 02:14:36 
Um, but when you're making a decision, and again, this is just somewhat of a risk argument too, on the control side, but you mentioned a small company that's growing really fast. The autonomous decision is to weigh the best thing for the company and do that regardless.

Speaker 1 02:15:03 
The non-autonomous decision is to say, well, we're moving fast, so we have to go with the most efficient and we'll accept the risk. Yeah. Like, where do you find that that's going to be problematic?

Speaker 1 02:15:20 
I mean, besides the obvious cases, problematic in terms of what happens later down the road? Because when I was thinking about this topic, it was more like, you can't really back out of decisions very easily anymore.

Mike Crispin 02:15:36 
Correct, you can't. Yeah. I can probably count the number of times on one hand, but it's happened. There have been very impactful things where a lot of time was spent to make the right decision, and on the evaluation of the platforms and systems that go in.

Mike Crispin 02:15:58 
And you've got cross-functional alignment, and you've got a steering committee, and you've got all these things going great. And everyone's agreed on something to move forward. And the decision is made by the group, scoring criteria, the whole thing. We went through a great RFI and then RFP, the whole process.

Mike Crispin 02:16:18 
Six months later, three of the people are gone, and three new people have come in. Waste of time. Yep. We've taken all that time, and now you're going to uproot it and change it. There's a new business decision that's been made.

Mike Crispin 02:16:36 
And we're going to change it. How can we have let this happen? And, you know, this is continuity. And this happens a lot. And you, as the head of IT, either have to decide that, okay, yeah, we've got to change this up, this is not meeting the requirements anymore, and kind of figure out what side you're on, who your partners are, who's in whose corner,

Mike Crispin 02:17:02 
and do some kind of socializing of the situation and assessing of the situation. Or you say, look, we made a decision as a company two years ago to do this. Yeah, not everyone will be here anymore. But we made the investment, we made a decision, we need to come together as a team, and we need to make this work.

Mike Crispin 02:17:22 
And we may need to make some changes as to how it's configured, or what the ownership is, or what the investment is going to be for support. Or, you know, we've got to make a decision to cut our losses or not.

Mike Crispin 02:17:34 
But ultimately, the other side would be like, we've got to make it work. We've made this investment; you can't just pull away because you don't feel like it, or you don't like it anymore. And, I mean, there's all this time spent trying to assess these systems.

Mike Crispin 02:17:49 
So if you don't do the latter, it's like, what's the point of ever doing an RFI or RFP, if any churn in your organization is going to change all these decisions, depending on where they fall?

Mike Crispin 02:18:01 
It's difficult. And that's part of, if you're talking about kind of loss of independence, or autonomy, in being controlled, sometimes that stuff can be very controlling. You've got new people in the organization, and maybe you haven't built the level of influence to make a change.

Mike Crispin 02:18:20 
And I think that does happen a lot, especially in the media companies, where you really do lose control over what things will change and how, once you thought you'd made that decision, or that you did have the independence to make the right decision.

Mike Crispin 02:18:40 
So much so that, so did the other three people who decided, or four or five people, because they're just as accountable as you. Well, that's the group accountability effect.

Speaker 1 02:18:51 
Well, that group accountability effect says that I made a decision as part of a group of six. The fact that half that group is no longer here, I'm still culpable because I made it as part of a group. Which, you know, like, what's the best CYA mechanism for that?

Speaker 1 02:19:10 
Well, it's governance. Oh, we have governance. It says this is our process, so therefore I'm completely covered. I mean, that's a cessation of, not autonomy, just, in general, sort of your position in the company. You're basically saying, not my fault.

Speaker 1 02:19:26 
And honestly, there's a whole other spectrum of autonomy we're not even going to get to, but worth thinking about, which is when you do say, especially at a time when something really bad happens, that that was not my fault.

Speaker 1 02:19:44 
That's a group issue. You're actually doing the best thing you can to retain autonomy at that moment, even if you sacrificed some of it later, or earlier in the process. The moment you put your foot down and say, actually, not my fault, you are in a way expressing that you have complete autonomy over this, even though it contradicts pretty much every part of it.

Speaker 1 02:20:09 
Um, I think 

Mike Crispin 02:20:10 
I don't think you ever go into it and just stomp your feet and say, that's not my fault. Is that what you're saying? I don't think...

Speaker 1 02:20:17 
Well, it depends who you are. I mean, no, no, you wouldn't say that. I wouldn't say that. We'd be like, okay, you know what? We all made a bad decision. And here's where the decision came from.

Speaker 1 02:20:26 
And here's how we can fix it, right? Like. 

Mike Crispin 02:20:28 
But someone saying it's not my fault is ultimately like, no, it obviously isn't your fault; the group, the business, made a decision. But for me, it is my fault. There's another question that gets asked, right? What concerns me even more, and I'm not voting against minutes and governance and all this stuff, I'm certainly not saying that, what I am saying is that there have been instances where documentation has been put in front of people, and there's such a groundswell on a change that it doesn't matter what's documented, right? It doesn't matter what the plan was. "Yeah, yeah, I see that, but we've got to change this." I mean, that happened. "Look at the meeting minutes. You signed this." "I know, but..." Okay, well then why did we have to sign stuff? That's not a common occurrence, but I've seen it happen: "Yeah, I know we signed off on that, but that wasn't the right thing to do, and now we're going to change."

Speaker 1 02:21:38 
It's true, but we're getting now back to the question of accountability. Yes, which goes. 

Mike Crispin 02:21:46 
hand in hand with autonomy. 

Speaker 1 02:21:48 
Absolutely, 100%. And honestly, the Venn diagram of accountability, I'm sorry, of autonomy, has influence, it has accountability, it has internal and external pressures.

Speaker 1 02:22:02 
I mean, those are different than influence by a large margin. It has your self-thought, your myopia. It has a lot of factors. And you, as a human being, are you always going to choose the easiest path, or are you going to choose the best path?

Speaker 1 02:22:20 
And when we think about individual autonomy today, very rarely do we even have that choice anymore. You really don't have the choice. You can buy a different TV, but let's be honest, you're probably going to have the same result. 

Speaker 1 02:22:37 
You can buy a gas car, but guess what? You still run out of options. It's not like you're going to go do a thing and have 15 individual options on the road. You're going to start having fewer and fewer options at an individual level.

Speaker 1 02:22:55 
And we're already seeing now as we go into this. 

Mike Crispin 02:22:59 
It's orange. 

Speaker 1 02:23:00 
can buy. Exactly. The options are becoming fewer and fewer and fewer inside of technology. They're basically manipulating them to make it look like we have a lot of choices. I mean, this goes back to that paradox, right, that we talked about at the opening of this, which was the illusion of choice, right?

Speaker 1 02:23:17 
Oh, no, no, we have 10 different platforms we can pick to solve this problem. Ah, you don't. They're all the same platform, just a different UI, a different color. Maybe this one's turned 90 degrees. But the illusion of choice, this idea that, oh, well, we did an RFI against four vendors.

Speaker 1 02:23:36 
We had a choice. But did we really have a choice?

Mike Crispin 02:23:44 
That goes back to the time piece right how long does it take, and how many people at how many dollars per hour does it take to do an RFP. 

Speaker 1 02:23:53 
But even all those questions you just asked, those are all questions guided by some principle that's given to you, right? Like, what is an acceptable time threshold? Who decides that number?

Speaker 1 02:24:04 
Um, you might be forced to decide that the acceptable number is X months. And so again, it's a matter of...

Mike Crispin 02:24:15 
You know, this is where you lose some of the independence, right: if you've got your peers, and some of your senior leaders, who have used XYZ system, and you as an IT leader have used it before and know it works really well.

Mike Crispin 02:24:31 
Then don't go do an RFP. The only reason to do one would be to CYA yourself. And if those four people leave and you're still there for whatever reason, well, then why are you still there? But that's the thing: the RFP is somewhat of a CYA. And from a compliance perspective, you know, SDLC and having that in place, components of that are important, to be able to show what you've done, depending on what type of company you are.

Mike Crispin 02:25:00 
But is that why we're doing it? Exactly. Is it something that's actually helping, or just something we can point to to show that we did something to make that decision? And from a business perspective, if I'm paying eight people hundreds of dollars an hour to sit around a room and look at software demos when there's one clear choice that we can make in five minutes...

Mike Crispin 02:25:27 
That's a waste of money. 

Speaker 1 02:25:31 
Yeah, but you're coming back to the same question, right? We're just swinging back through the same room we were just in. And you know what will be great is, so next week,

Speaker 1 02:25:43 
we're gonna cover the sovereignty paradox, which I haven't gone into in full detail with you. Yeah. But basically we're gonna talk about protecting your thing. Sure. Like, so how does autonomy affect your decision-making in protecting the thing?

Speaker 1 02:26:02 
The thing being, well, in your personal life, the thing being maybe your money, sure, or, you know, your viewing habits, right? Like, this thing. But we're also gonna talk about strategic responses. So how do you preserve autonomy? How do you preserve decision-making?

Speaker 1 02:26:22 
Do you do it through governance? Is governance your mechanism for ensuring autonomy, or is it the reverse, right, like you were just talking about? How do you create a vendor selection strategy that always makes sure you have autonomy?

Speaker 1 02:26:40 
It's a crazy idea. I mean, it sounds like, well, we have that in place. But do I?

Mike Crispin 02:26:47 
I think the reason why maybe you always start with an RFP is because there are disagreements, and people aren't gonna agree on what path to move forward. And then you follow through with the whole process of the RFP.

Mike Crispin 02:27:02 
But if you've got a group of people that are all in agreement right out of the gate, and they all have the experience and background, I suppose the RFP is easy, because you just check the boxes and say, yeah, we did the RFP, we're done.

Mike Crispin 02:27:16 
Technically, right, just on paper. But it's easy. I guess you could argue that as well, if you have that kind of...

Speaker 1 02:27:24 
It's stability versus innovation though, Mike, and control versus efficiency. You're playing these four boxes against each other in that decision -making process, like, okay, everyone in the room knows this platform, cool. 

Speaker 1 02:27:36 
Everyone's had good success, great, let's go with it. Three of those people leave in three months, and now you're like, well, how did we arrive at this decision? I don't remember. Oh, we were all sitting in a room and people liked it and they had used it before.

Speaker 1 02:27:48 
Now they're all gone and now we're stuck with it. 

Mike Crispin 02:27:52 
Um, where's our criteria? Where's the...

Speaker 1 02:27:54 
Where was... I felt like I had autonomy at that moment, but now...

Mike Crispin 02:28:01 
You feel like you have autonomy. Do you think this is something that IT leaders think about a lot?

Speaker 1 02:28:06 
No, and we're going to get to that too, which is, how important is this point? We're going to talk about future considerations: how far are you willing to go to get the job done? And, well, we've got some heavy shit next week, Mike, because if you're willing to go all the way, then you're willing to concede your job.

Speaker 1 02:28:28 
Like really the only true path towards complete autonomy in this role is to simply acknowledge the fact that you don't have any decision -making power. 

Mike Crispin 02:28:39 
Well, that's what you do. And that's where, you know, we talked a lot about it. I think there's an element of autonomy that people just feel is baked into them, whether they have it or not.

Mike Crispin 02:28:54 
It may not be their biggest concern; it's more that they feel like they can voice their opinion and that they can drive a decision. If they can't, then they're not going to work there anymore.

Mike Crispin 02:29:05 
A lot of times, if they're talented, they'll probably figure out some other way to go than to be someone who's not given a voice, or able to make a decision, or empowered to do their job.

Mike Crispin 02:29:18 
So whether you, in your mind, feel like you have autonomy or the ability to independently make a decision, if you don't feel that way, you may not want to be where you are. And that's, I think, often just whether you feel like you're micromanaged, or you feel like you're not being listened to, or, maybe another level down into our personas and who we are, just being able to say that we have free will,

Mike Crispin 02:29:49 
so to speak. Workplaces sometimes get that. I think in life, you've got to discern how much of that is important, what components of it are important to you day to day. And if it's the most important thing, let's say as a human you're like, look, I absolutely need to maintain that on all levels, you might be hard to work with, because you're always trying to push what you think is best and maybe not letting others influence your decision, and that can create...

Speaker 1 02:30:25 
Well, but we already talked about that. Are you still capable, even in a world where you're able to walk in and they're saying to you, whatever you want to do, blank slate? How much prejudice are you starting with?

Speaker 1 02:30:38 
Are you truly able to go in there and say, Oh, blank slate. Cool. I'm going to pick all the best things for this company using no previous things that I've ever thought of before to make these decisions like that. 

Speaker 1 02:30:50 
When we joked at the beginning of the show about coming to Bio-IT World and giving us a new idea, it's the same principle. Can you walk into a company? Oh, why would you do that? Why would you do that?

Speaker 1 02:31:01 
It doesn't make any sense. Right. But my point is that you walk into the company, and I've done this many times, and say, oh, we're going to go ahead and put Slack in. And I don't give a shit that you have Teams.

Speaker 1 02:31:13 
That's irrelevant. This is going in because it's a superior platform. Here's all the reasons why that's the case.

Mike Crispin 02:31:22 
But you nuanced it more than that, right? I mean, well, yeah, so...

Speaker 1 02:31:26 
So I'm giving you the very, very TLDR version, but my years of experience have shown me, empirically speaking, this is the best path to go for the company, for any company, period, and I'm not suffering a loss of autonomy on that. 

Speaker 1 02:31:43 
I'm using years and years of decision -making experience and trial and error and all kinds of other factors to get to that point. Would I consider another piece of software? No. And that's where I'm conceding my loss of autonomy. 

Speaker 1 02:31:57 
Not on the fact that I have picked a better platform, in my opinion, but that I wouldn't consider another platform that was potentially even better than that. I think you would. I don't know. I mean, maybe I would, but it'd have to be something for me. 

Mike Crispin 02:32:12 
I think you would. I think you're not giving yourself enough credit there. I think you would, absolutely, if something was better than slack. 

Speaker 1 02:32:20 
I would know about it. I would certainly, in fact, I would know, and I've tested all the things. And I don't think there's anything on the market today that's better. But if something came along... I would stipulate, to be honest, I am ripping out a secure email gateway system that I've used for four years tomorrow, to replace it with something else, something that, you know,

Speaker 1 02:32:42 
to me was my steady eddy, my stable platform. Because I'm making a decision. Yeah, it's influenced a little bit. You said you use it, which was kind of the final thing I needed to hear.

Speaker 1 02:32:59 
But I already made the decision. And you just kind of like put more fuel on that fire. 

Mike Crispin 02:33:07 
Take all your autonomy? 

Speaker 1 02:33:11 
Now, my point is that I didn't suffer a loss of autonomy. I gained back a small foothold in my decision-making, where previously I had been willing to concede the errors and issues and problems and just roll with it and get kicked in the nuts every day when something went wrong, as opposed to finally saying no, which I had forgotten how to say in this case: that I don't abide by this.

Mike Crispin 02:33:42 
But that is part of that decision-making process. You had other things you needed to worry about. You had to prioritize in your mind what was good enough for now until you could come back and revisit it.

Mike Crispin 02:33:52 
I did. I did, but, I mean, you didn't have the ability to do that

Speaker 1 02:33:58 
if you wanted. No, I didn't. I did not have the ability. I'd have to walk back through my journey, and that might be a worthwhile thing for me to do in my next run: go back through the steps that led me to this moment of decision-making. Certainly the majority of them are conscious decisions that I made.

Speaker 1 02:34:14 
How many are not? It'd be interesting for me to do a retrospective on that personally, but I think that's a worthwhile activity for most decisions that we make as IT leaders: how did I arrive at this moment?

Speaker 1 02:34:27 
Was it spontaneous? Was it built on years and years of experience? Was it because so-and-so over there told me that this thing is shit, or this thing is the best? For me, and for you, I think 90% of the decisions we arrive at, or more, arrive through conscientious, aggressive testing.

Speaker 1 02:34:46 
I've never met anybody other than you who does as much testing on software as I do, to constantly just fuck with everything and find out what everything does, all the time. Yeah, that's a hobby, too, for me. 

Speaker 1 02:35:02 
It's pretty much a hobby, which in and of itself, that is very, very autonomous, by the way. It's a very, very red activity in terms of, oh, this new thing came out, I got to know everything about it. 

Speaker 1 02:35:15 
What's the new thing about it? Not only do I know about it, I know what other things close to it should be doing, and you can expand and extrapolate from that point. All of that very much strengthens, I think, an autonomous position.

Speaker 1 02:35:32 
Because a second ago, I didn't know about this platform. I just saw it. I was influenced to see it; I got a targeted ad. Then I went ahead and tested it on my own, tested it against its applicability, usability, etc., all these things.

Speaker 1 02:35:48 
And then I make a conscious choice right then and there, either go with it or put it on the back burner. And I wasn't influenced to do that; I did it through my own testing experience.

Mike Crispin 02:35:59 
You know, I know that Snap used to use this as one of their hiring slogans: toys are a pathway to great ideas. And often we hear that IT just has toys and that's all they care about, and I think there's a big component of how a lot of technologists' minds operate in that.

Mike Crispin 02:36:21 
And maybe it is a bit of autonomy or total autonomy that you really have to try new things and test things out and fail a lot. And that's become way more popular now than it was a decade ago. Now it's like, oh, fail fast, fail often, you learn from your failures and it's so important. 

Mike Crispin 02:36:40 
And, you know, I think that if, in your free time, you're experimenting with things and you take the time to learn, let's just say, about the inner workings of Google Workspace. No one takes the time to do that, because you've already got something that's great.

Mike Crispin 02:36:54 
Like you said, loss of autonomy, because you are automatically plugged in and it's easy. But if you take the time to look at some of these things, you'll find there's a real opportunity in a lot of the tools that may not be much used, and you can use those things in your own life, not just at work.

Speaker 1 02:37:15 
That's a great point, honestly. When we tell people at our company to stop using Chrome and to use Brave, well, I explain why in detail, and they're like, you mean I can switch? Of course, of course you can fucking switch.

Speaker 1 02:37:31 
So don't use Edge at home. Don't use Chrome at home anymore. Use something better. But how do I do that? They don't even know. Right. That's that personal loss. You and I at least have the perspective of saying, here are all the things in the world that are choices.

Speaker 1 02:37:46 
Here's where we're willing to concede. Here's where we're not willing to concede. Most people don't even have that choice, because we're telling them what to do internally as well, in the company. Like, you get told at home what to do.

Speaker 1 02:37:56 
You come into this company, you get told by IT what to do. Just, like, left and right. Fuck it, just pay me.

Mike Crispin 02:38:04 
You really have to be into it, though. Like, I think for some people it's not the same level of interest.

Speaker 1 02:38:12 
So that's that difficulty barrier we talked about earlier. I mean, it's like, easy path. 

Mike Crispin 02:38:17 
I actually think that's a good thing, because they're experts at other things that I'm not interested in. Right, well, it's why you've got your job.

Speaker 1 02:38:26 
Get together, right? You buy a John Deere tractor, guess what? You're immediately conceding all of your autonomy because you can only get it repaired by John Deere. So there you go, right? Like, that's happened to me. 

Speaker 1 02:38:40 
I didn't know that, by the way. I thought, like, oh, cool, I'll just go ahead and open it up and fix it myself. No, no, no. But you know who to call. You know who to call, and the only place you can go is John Deere.

Speaker 1 02:38:53 
And that's that, right? They've locked it down. And there are so many examples. We don't need to go through all the examples, but...

Mike Crispin 02:39:00 
That's great. You can choose to lose that type of autonomy, and sometimes it's a good thing, right?

Speaker 1 02:39:08 
A lawnmower tractor repair guy. 

Mike Crispin 02:39:10 
But we're very passionate about technology, so for us, talking about autonomy in technology is very important. But if I don't care about that as much as a technologist might, then just do it good enough and get it out of my face.

Mike Crispin 02:39:27 
And it just works, and I don't care. Like, they don't want to know. And I think it's just like we don't care about John Deere, and maybe a mechanic or someone at John Deere cares a lot about that and the best way to do it.

Mike Crispin 02:39:39 
And I think that's just the balance of... 

Speaker 1 02:39:42 
I don't want to get into a John Deere rant tonight, but, never mind. It doesn't matter.

Mike Crispin 02:39:48 
Apple's the same way. There's only one way I can go. I used to have autonomy: you could go to the Mac stadium, or go to the Mac store, or go to Micro Center, you know, whatever you wanted to do.

Speaker 1 02:39:59 
What was that place in Allston that did the Mac repair?

Mike Crispin 02:40:03 
Mac lounge or something like that. Yeah, I remember that guy. 

Speaker 1 02:40:07 
Yeah, downstairs. I went there with, well, a good old friend of ours from the bullpen, all the time, to get Macs fixed. Like, we'd haul the iMacs up there to Allston, bring them downstairs, and pick them up the next day, and they were fixed.

Mike Crispin 02:40:21 
They brought that all in, and now there's only one place to go. But if I go buy a PC, what choice do I have? I go to Best Buy; I've got to fix it myself.

Speaker 1 02:40:34 
Yeah, it's even worse. I get you. All good points. All right, listen, let's wrap this up. We're gonna have a great one next week, I can tell already, because there are lots of good points

Speaker 1 02:40:46 
I wrote down here. But that was awesome. I always appreciate your candor, you know that, and you bring the counterpoint. So if you like the show, again, give us all the stars. Don't be a dick, especially don't be a dick to IT. As you can tell by what Mike and I are talking about, we're doing our best to make the best world for you, as is everyone in IT, and sometimes our hand is forced.

Speaker 1 02:41:12 
So, you know, we're doing what we can do. Be cool, it'll get paid back in spades. Be nice to animals and old people.

Mike Crispin 02:41:22 
Why don't we change the podcast to "The Calculus of It"?

Speaker 1 02:41:26 
Yes, starting next week, episode 37, we will now be known as The Calculus of It, just FYI. We just made that executive decision together, full autonomy in the decision-making process. So next week, please tune in to episode 37 of The Calculus of It. Mike and I will be back to shepherd you through the other half of this massive dilemma.

Mike Crispin 02:41:55 
I'm going all A.I. on this one.

Speaker 1 02:41:58 
Oh yeah, Mike's going all A.I. He's going A.I. A.F. Right back at you. OK, bring your best. Coming out swinging.

Mike Crispin 02:42:08 
How about "the calculus of it," where "it" could refer to a concept, possibly in a broader philosophical sense, rather than information technology?

Speaker 1 02:42:18 
So what was it? What was Bill Clinton's famous line during the Monica Lewinsky trial? It's like, can you define... what was it, define what "and" means, or something like that? I forget what the quote was, but they were stuck for a whole day at the trial on defining a single word. I think it was a conjunction, or "it." I think it was "it."

Speaker 1 02:42:38 
Can you define what "it" means? We should do a whole podcast episode on the word "it," Mike.

Mike Crispin 02:42:46 
Already working on it, working on it right now.

Speaker 1 02:42:49 
Like your famous, your now-infamous podcast sign-off: take it. Take it. Just take it. Take it, man. Doesn't even matter what that means. It just sounds so perfect as a goodbye.

Speaker 1 02:43:04 
That's the perfect way to go. So Mike, with all due respect and from the deepest part of my heart, just take it. I will see you next Wednesday. I'll be coming back from New York on the Acela at three o'clock.

Speaker 1 02:43:22 
So I will be rushing home. We've got a little bit... we're Thursday next week, right? Thursday next week. Thursday is Halloween. Or Tuesday? Yeah, Tuesday next week. All right, bonus. Okay, cool. Awesome. Big up.

Speaker 1 02:43:35 
All right, have a great weekend, and I'll talk to you on the flip flop. All right, later, dude. Be good.

Mike Crispin 02:43:45 
adopt and oppose some we need frozen Twinkies and Johnny Walker gold drinking 

Speaker 1 02:43:54 
The Calculus of IT.
