The Calculus of IT

Calculus of IT - Season 2 Episode 5 - Actively Preserving Autonomy in IT

Nathan McBride & Michael Crispin Season 2 Episode 5

After four episodes of context and base-building, we are now ready to jump headfirst into the season's theme. In this episode, we explore the daily battles IT leaders face against encroaching vendor dependencies, the insidious creep of "zero decision" defaults, and the constant pressure to sacrifice control for convenience. We also spend some time exploring practical strategies for building autonomy into your IT infrastructure, from developing essential habits like the "Three-Year-Old Strategy" and the "Future Test" to designing processes that prioritize flexibility and knowledge sharing.

Join us as we discuss the importance of documentation, the dangers of knowledge lock-in, and the often-overlooked human cost of losing autonomy. We also introduce the concept of an "autonomy score" and discuss how to measure success in this critical area.

Come on in and have a listen, and prepare to have your young minds blown wide open.

Support the show

The Calculus of IT website - https://www.thecoit.us
"The New IT Leader's Survival Guide" Book - https://www.longwalk.consulting/library
"The Calculus of IT" Book - https://www.longwalk.consulting/library
The COIT Merchandise Store - https://thecoit.myspreadshop.com
Donate to Wikimedia - https://donate.wikimedia.org/wiki/Ways_to_Give
Buy us a Beer!! - https://www.buymeacoffee.com/thecalculusofit
Youtube - @thecalculusofit
Slack - Invite Link
Email - nate@thecoit.us
Email - mike@thecoit.us

Season 2 Episode 5 - Final - Audio Only
===

[00:00:00] Hey, so, um, I heard you were, I heard you spoke at the USDM Life Sciences Summit yesterday. How'd it go? Went great. It was nice to be on, talking about some fun things like, uh, data management and AI and all that good stuff. Wow, sounds awesome. Is there a recording somewhere I can watch it? I don't know. I, I couldn't find one.

Is there one? I don't know. Did you get one? No. Yeah, hopefully I'll, I'll check it out at some point, but I, I, so I didn't cut and paste the link and I, I lost it. So actually, I thought I copied it and then I went to paste it and it was gone. So that was too bad, but that's my fault. I'm working on getting the two of us onto a panel for Bio-IT World.

So right now, so, um, if not a panel, just the two of us in a session so we [00:01:00] can get our free passes. So it's working. That would be great. Yeah. We'll just, we'll put it on Twitch. I've had to prostitute myself out to some vendors to get the free passes, and so I've, I've offered myself and your services to do a free panel on their whatever product it is.

Yeah, let's do it. Yeah. Okay. Well, I'll wait to find out. Good one. That'd be, that'd be great. Yeah. It wouldn't be good if it was a shitty product. It would only be good if it was a good product, but, um, maybe that's even better.

A few whiskeys before it then. Bio-IT World is coming. It's April. No, it's right around the corner-ish. Coming up quick. Yeah, we're going to have to go set up our little Calculus of IT table. We're going to ask anybody who comes by, um, one question, which is, um, tell us something that's never been thought of before.

And if you can do that, we'll buy you a drink. Now, is that always, is that always in the Seaport? Is that [00:02:00] always? Yes, it's always there, never anywhere else.

So anyway, welcome back to the Calculus of IT podcast. In case you didn't know it was a podcast. I don't know why I said that. And welcome to episode five. Now recording. Welcome to episode five of season two. This is the home of all that is the IT paradoxical, the home of the Sad Salad Paradox, the Nexus, the Neverland Paradox, and dare I say it, your personalized IT leader internet site Paradox.

It's just a, it's just all a paradox. Just all a paradox. So get over it. Mike's a paradox. I'm a paradox. Totally. It's all a paradox. This is, by the way, Mike Crispin on the other side of the camera. I'm completely innocent. I wasn't, we weren't there. I wasn't there. I can't do it. Didn't do it. If you need an alibi, though, I can give you one.

I know a guy. I don't know where I am. I'm somewhere. It just wasn't anywhere near [00:03:00] there. I can tell you that right now, wherever that was, I wasn't there. And actually, I think you were with me. If I remember correctly, I had my Vision Pro on. That's where I was. Well, unlike the FLOTUS, each week we seek to find efficiencies by diving deep into IT leadership, but without all the terminating of all the people critical to our process.

Um, it's amazing how if you just keep people around that are smart, things will be more efficient. Um, anyway, if you missed last week's episode, what the fuck is wrong with you?

Sorry. Oops. If you missed last week's episode, what the hell is wrong with you? That was very poignant. It's the wireless sensors. We discussed identity and the value, and the value of identity, and how identity is identity, and how to identify identity, and we tied it all back to the preservation of autonomy.

It was freaking awesome. Now we went an hour over our, um, bespoke, promised hour-long limit for podcast episodes. But you know what? It's our frickin podcast. So if we decide to go long, we're gonna go long. But we're gonna try to do better. It's all we can do, Mike. I love that we just kinda go with it.

We'll see what happens, wherever it takes us. Just gotta take it. Just take it. Last week, we asked a lot of big questions about value, identity, and autonomy and answered at least 40 percent of them, and it only took us two hours to do it last week. So that's a new record of some kind right there, a new record of lying about our podcast or something.

I don't know, but this week though, another absolute smash hit. I guarantee it. The big kahuna burger question: how do we actively preserve autonomy in IT? We're, we're finally getting to this. I mean, that's the whole theme of the season, but we had to do some baseline setting first, and now we're getting into the actual question we're going to try and answer, and it will take us many more episodes to answer it, but we're going to start.

And I'm not talking about just reactively protecting [00:05:00] it, like when someone tries to take it away, but actually building systems and processes that intentionally preserve it, and why we would do that. Like, shouldn't we just place as much emphasis on pulling autonomy away from people so that we can get the most efficient things done?

Well, that's a rhetorical question. I don't think we should. Um, we've been dancing around this, setting all the baseline context up until now. And we're going to talk about habits and strategies and a way forward. And, oh my, so many things. Because I want to keep one eye on the clock tonight, we're not going to do a jobs update.

So if you want to know what jobs are out there, you can (a) listen to last week's episode around the 25-minute mark and hear my jobs report, and (b) go on to LinkedIn and just look them up. Uh, pretty much every job that I read last week is still out there. And if you just want to really get the effect, you can listen to that jobs report twice [00:06:00] in a row, and it will give you a kind of like,

Wow, that's a lot of jobs. Um, before we get started this week, I wanted to make a special announcement. Please donate to Wikimedia. Um, even if it's $10, $5, $2, donate anything you can to Wikimedia. They have a massive, massive fight on their hands right now. It's going to continue for a very long time. They're being sued out the ass by everybody, including FLOTUS and his little DOGE doggies, and they're trying to take all of Wikimedia down on a global scale, and in some places it's actually working.

And so if we want to actually preserve this thing, that is, I don't know, in my opinion, one of the last sources of actual truth on the internet, they need some cash. Uh, it's not like they're not wisely using it. They're fighting these lawsuits. They're fighting as best they can. People are getting threatened for editing articles.

I mean, doxxed, threatened. All kinds of terrible shit's happening to them. Um, they're using [00:07:00] sock puppet authentication now for editing articles, so people's identities are preserved, but there are companies being run by FLOTUS that are trying to unmask their identities. So you, you can read the whole article on 404, but in short, they need your, they need your donations.

Don't, don't give us a beer tonight or buy us a beer. You wouldn't anyway, but if you were thinking about buying us a beer, just give that $3, $10, $50 to Wikimedia and they'll use it for good purposes. Um, you can also donate to the ACLU, but Wikimedia first. Um, also, with every podcast, we're going to release it in two formats, as I say every week, and one will be our format as you know it right now, me and Mike just chatting, and the other one will be an AI version, so if you don't like what me and Mike have to say, or you want to hear a better, shorter version of it, you can just go listen to the bots.

Um, that will redo our show for us. Uh, we have a Slack board that's still growing, and anyone's welcome to join. The link's in our [00:08:00] Substack, on the Substack page, and in the version notes of every episode on all the platforms. And if you want to come and continue the conversations on our show, tell us how much you love us, um, come find us on Slack and let us know.

I also want to mention that if you like our show, please give us five stars on Apple Podcasts or Spotify or YouTube or wherever the hell you listen to it, um, on your favorite AM radio dial. In our show description, we have links to buy us a beer slash donate to Wikimedia and also our new, our merch store.

We need some, we need a new t-shirt or something. What do you think we should come up with? Yes, I have, I have this whole list of t-shirt designs over here that we've been talking about. I got to get to work on the t-shirt designs. I think I have a couple as well. I'll send them over to you. Yeah. So we're going to have our spring collection coming out soon for our, our designer, um, clothing line.

So more on that, but just, uh, you know, after you're done donating to Wikimedia and you still have [00:09:00] a few bucks left in your pocket, get ready. Because, uh, wow. It's going to be... save some for the t-shirts. Yeah. Save, save a little bit for the t-shirts, give the most to Wikimedia, but then save enough for a t-shirt for yourself or for your special somebody for Valentine's Day slash, um, St.

Patrick's day. Ooh, we got to have a St. Patty's day shirt. Oh, that'd be great. A show or a shirt or both. Both. Yeah, that'd be great. Some green beer going. Yes, green beer, green vomit, it's going to be awesome. So we're going to have a special March, sorry, special St. Patty's Day shirt for the show. It'll be a short run collector's item.

Stay tuned for that. And I would say that if I make any profits from any shirts this spring, I will give them all to Wikimedia. Um, or to the SPCA. Or both. [00:10:00]

[inaudible] [00:11:00] So question for you, Mike, before we even get into the episode, this bothered me, this is bothering me. And I got to ask it. So if a new employee comes into a company and immediately needs a new platform, should there be a stay of execution on that person's ability to request a platform until we know for certain they're going to stay for a certain period of time?

Yeah, that would be, that would be a good idea. I think often people come in and their first order of business is to put the system they had before in place. Um, but it's our job to some extent to ensure some continuity and make sure that that doesn't necessarily happen. But I think that's a much easier said than done type of problem to have. But yes, I think if someone comes into the [00:12:00] organization, we shouldn't just be instantly swapping out systems or adding new systems without, um, you know, some forethought, some thought about what happens over the next year or so. Obviously we should be thinking about those things, um, but yes.

Thanks. There should be some way to process and manage that. It's going to happen. It's going to happen more than we'd like. I think where, you know, if someone leaves the organization quickly, you know, over a few months or something like that, and they've made all these big decisions, you have the ability to pause.

And I think there are, um, a number of times where you may be contractually obligated, or you might be already vested far enough that you've got to move forward with something. But the lesson learned is, you know, for the autonomy discussion, it [00:13:00] shouldn't be one person's decision. It should be a group's decision, and understanding that the whole company, or the ownership of the decision-making process, has made the decision and that you would continue on with it.

So there needs to be time to let that bake and bubble. Right. And if you don't do that, and instead make a quick split decision or let someone else really drive the decision, then you're gonna end up having to undo a lot of things. It's a good answer. Safe answer. I dig it. Um, I'm, I'm, I'm torn on this one as well. Uh, because while I agree on some levels, there should be some sort of period where, um, you don't get the very expensive enterprise platform on day 30 of your job, uh, you might have to wait some period of time.

But if it's gonna harm the business that you don't have it, you know, you're kind of left in sort of the [00:14:00] lurch on making that, um, that risk decision. You know, the four, the four pillars, as we call them, um: productivity, autonomy, innovation, and risk. You're, you're just kind of putting all of the

chips into two of those four circles, um, which is the productivity and the risk. So, it's a matter of saying, I don't know, when I think about this question, after all these years of watching employees come in, demand a platform, um, get it set up, you know, work there for four or five more months and then leave to go somewhere else, and everyone's sitting around going, well, who the hell put this piece of shit in here?

Um, everyone's like, well, that person that left. And so, what happens next? Well, you know what happens next. Why didn't IT do more to help prioritize this? Where was the IT governance? Where was the [00:15:00] decision making? And then you're like, well, that was all thrown away when this person was screaming so loud and the CEO just said, go ahead and buy it.

So it's kind of like, we're all caught in this paradox of, um, how to find those four balance points. So, I'm just going to put that out there to start the discussion. Sure. So it makes no sense. So let's get into it. So how do we actively preserve autonomy in IT? And as I said before, not just reactively protecting it when someone tries to take it away, but actually

building systems and processes that intentionally preserve it. And we talk about it, we're talking about autonomy in the abstract sense. Okay. We don't have a particular thing yet. That's a future episode. We don't have a particular thing yet, but it's a, it's a sense of having a decision making, um, decision making process in [00:16:00] place.

Yes, an empirical, objective decision-making process, for lack of a better term. So I think we built a pretty good baseline to sort of dive into the specifics of the autonomy issue. So I broke this down into four areas. Okay, one is the daily battle. So a thing like that scenario I just mentioned, we'll put that in the daily battle. Number two, developing good habits. Number three, using those good habits to build a strategic approach. And then lastly, creating a long-term vision.

Incidentally, um, I have two articles coming out in the next week or so. One is on what non-IT executives can learn from IT executives and vice versa. And the second article is about what good looks like. And so as I was writing this script and I was writing the first article, it occurred to me that, well, my definition of good is like nobody else's definition of good.

So now I have to go define a single word and spend all this time to do that. [00:17:00] And I did, but we're talking about developing good habits here. We're talking about habits that anyone would generally consider to be not a bad habit. Okay? For lack of a better definition. And so, here's my position on this, and you can tell me if you agree or not.

With autonomy in IT, you're either actively preserving it or you're losing it. There's no middle ground. No neutral sort of state. Every single day as an IT leader, you're either making decisions that protect or erode your autonomy, and sometimes you don't realize it until it's too late. Do you agree or disagree with that?

I would agree with that. Yeah. Yeah. Okay. I mean, I, again, I don't think there's a neutral ground. I think it's pretty boolean in terms of, um, in terms of me as an IT leader, every single day, every decision I make is going to swing the needle back one direction or the other to the, to the extreme on either side.

So back in episode three, which [00:18:00] was two episodes ago, we were discussing when making an autonomous decision just meant being able to say no. And that was, um, a ways, ways ago, um, AKA the simple days. Now we're dealing with ecosystems inside of ecosystems, AI that promises the world, but wants your soul in return.

And just all the vendors. So let's start with the daily battle, okay? Uh, this is point number one. Or eight or one. Let's start with, let's start with what this looks like day to day. So every morning you walk into the office and the autonomy challenges start rolling in. Or you zoom into the office or whatever.

Anyway, it might be a department head who wants to use just this one little app that happens to need full admin rights. Or maybe it's a vendor pushing their seamless integration that only works if you use their entire suite of products. Or my personal favorite, the "it's free now" trap. That somehow ends up costing more than... well, I'm always diving the car, but you get the point.

I'm trying to think of like a platform that's often considered free, but is the opposite. [00:19:00] Um, the name escapes me though. It rhymes with... Yeah, I can't think of it either. Rhymes with bear joint. Uh, I can't remember the name. It's like two words, sounds like bear joint, but something else. Anyway, I can't put my finger on it, but I think you know what I'm talking about.

Um, oh, it's a free platform. So, so, this is what the daily battle starts like. Yes. Um. But these daily decisions seem small, right? In isolation, they're not that big of a deal. Like, I'll make one here, make one there. Then we can just accept all the default settings. I mean, those are the zero decisions, right?

Um, everyone else does that, but here's where it gets insidious. So every little zero decision, each, each small concession adds up. I mean, hyperbolically, I get it that zero plus zero equals zero. I mean, it's basic, it's basic math, but I'm not going to get [00:20:00] into breaking down the specifics of it.

Basically, it's like death by a thousand little paper cuts. Instead of paper cuts, though, it's your ability to make independent decisions that's bleeding you out. So they do add up over the course of a day. Here's a good exercise: tomorrow, when you go into the office, or you Zoom in or whatever, count how many times you are making decisions that are related to an autonomy challenge.

Oh, yeah, I've been thinking a lot about that since we started on this topic, for sure. Yeah, always. Like, hmm, is this something? Well, I mean, I, I did it many, many times today. And I started counting, because I wanted to see if I could come up with like a, a small number. I started counting, and then I just started, well, I lost track of counting.

Because of the situations that I was in. The hallway conversations. And the, the things that came through to my Slack. And you're just like, yes, no. Yeah, sounds good. Okay, no. Yes, do that, [00:21:00] don't do that, and you just stop keeping count. Yeah, absolutely. That's what I did. I wasn't trying to count them, but there were certain things I was thinking about today where I'm like, did I?

What, what was that decision? You know, why did I make that decision? Exactly. Even things like, even asking why, like, is it because I'm too busy or because it doesn't matter that much or, you know, what's the impact, you know? Well, let's take, let's take the Airtable discussion for a moment. So I love Airtable.

Everybody in the world knows this now. Yeah, yeah. So Airtable is our corporate relational database platform. Why? Well, when I sat and I did try QuickBase, I did try FileMaker Pro, my, I'm a secret FileMaker Pro fan. Wiz! You're a wiz! I'm, I've been using it since FileMaker 2.

I think of all these platforms. So I didn't just like, I love Airtable, but I'm saying, okay, we need a cloud relational database. I [00:22:00] can't get everybody trained on Redshift. It's going to be too hard to get everybody into Google Cloud. Um, I, I like QuickBase, but it's overly complicated. Uh, Asana is too task based.

And so I'm running down this list in my mind, right? And I'm, I'm getting, it's basically like this, uh, this contrived list of requirements I've created in my brain. Sure. Of all the reasons not to pick other things. Oh, this one's too expensive. Oh, this one's too not right. This one's got too much. And so I end up at the decision that I wanted to be at, even though I wanted to be objective to get there.

And this one, I'm doing every single thing I possibly can to look at the entire market. I am trying to do everything in Airtable, also in Google Sheets. But then again, think of, there's an irony here, because I also love Google Sheets. And I think it's a great platform. I don't know, what are you going to do?

That's a tough decision. Well, so here's, so here's the [00:23:00] problem though is, am I still not being objective? Because now I've put, put two sure winners in the pot. And you get where this is going. Sometimes, for me, you know, in a situation like that, because it's certainly, I, I like a lot of these platforms as well, very much.

I'm always thinking to myself, like, all right, do I need to be going back and forth here? Or is it because I'm curious if one thing can do it versus the other? But in the grand scheme of things, either one will work, but I'm overanalyzing because I'm just genuinely and personally interested, like, can this thing actually do this thing?

And I almost forget that I'm working because I'm so, like, into it, like, so interested. Then I have to pull myself back a little bit and be like, okay, well, maybe that little thing I think really is important probably isn't an important thing for the masses or the whole enterprise. It's probably just important to me. And those are the things that [00:24:00] I love to grab on to and learn about.

I'm like, oh, this, this is something no one else can figure out, so we can use it. The examples are endless. Yeah, yeah, sure. As a technologist, I think it's just natural. It's hard to, right, just put that back in the bottle once you've taken it out. I've gotten used to using certain automation solutions. And then another one comes along and I say, well, shit, this one's 1 percent better, uh, than the other ones.

So now I'm going to switch over, and I'm doing so from the perspective of it's better. Like I'm bettering the business. I'm making a decision to put a tool in that's actually better than the tool I had before. And so I always feel confident about that. Like I'm not putting in a tool because I like it more. I'm putting in a tool because it's actually an improvement over the previous one. That's the same. I mean, that's what I mean.

I'm putting in a tool because it's actually an improvement over the previous story. That's the same. I mean, that's what I mean. Like, I, I feel like that's what I'm thinking about, but that's what I have to ask myself sometimes, because what's important to me, what I think is cool. And I [00:25:00] think sometimes, you know, and sometimes rightfully so, I, I, people, they just like to tinker and figure out how, you know, stuff works and it's really not applicable, but you know, it's really cool.

Or they say you play with toys and all that stuff. You've heard that before, I'm sure. Yeah, I think we genuinely want to do what's in the best interest of the company and of the business, to give them something that works really well and also doesn't require them to call us every few minutes. And that's the pulling back.

You look at these, some of these very cool features, and that's what I often end up pulling myself back into the discussion of the user experience and employee experience discussion. Right. Yeah, it has a lot of. It's like the PC and the Mac in the early days, it's like it has, you can, you can build your own PC, you can change all these things, you can swap in and out different things, and it's great, and we can do whatever we want with it, and the Mac is running alongside, and you can't touch the damn thing, you can't change it, it's an appliance, if [00:26:00] you're a technologist, you're like, oh, Macs suck, most people anyway, who didn't see the aesthetic and the user experience as a benefit, they saw the ability to It's great.

the flexibility and the ability to the absolute fastest and the absolute most well performing system, but it didn't look cool. It didn't work well. And I think that same thing. Sometimes I'm digging through these applications and I'm thinking, is this an appliance that will work great for the company? Or is this a, a, a, a rat's nest of awesome features?

You know, like it's those two things that go back and forth. And that's, I think the emergence of SaaS has helped us with that, because we can really treat things more like an appliance, a configuration management endeavor instead of an application development endeavor, right? It does, it does make it easier for technologists like you and I to sift through multiple solutions in a short period of time to find the best one that meets the requirements.

But you always, I think for me, well, not you, for me, there's always this voice in the back of my head. Should I take another look at Notion? Should I go look at [00:27:00] Obsidian again? Yeah. Should I go give Monday another chance against Asana? Like, what? And so, you can't not ask those questions. And I think, and again, I'm not absolving myself of, of, uh, the loss of objectivity, but I think by virtue of the fact that I want to go back and keep checking to see if things have improved, I'm still putting that sense of autonomy back into the business.

I am still not forcing just one default. I am telling, not directly, but I am telling the business that I'm going to continue to look for all the best things. And that means you have autonomy. You have the ability to, yes, innovate and experiment, and you make sure there's time, you make sure there's budget to do that.

It's so important to do that. Uh, and, you know, even if you have to sort of stuff something in there so you have some ability to look at those things [00:28:00]. And you should be reevaluating, because things change very quickly, and I, I do think, you know, with all this AI and other stuff that will come after AI, I'm sure that the technical debt in the laggard, huge, massive platforms won't be able to keep up, and there'll be 15 new ones.

And the question is how easy it will be to move your data into these environments. And the UI is going to be dead soon, Nate. I mean, I think that we're going to get another five years or six years of like, oh, what a great user experience, clicking around windows. And like, I, I just don't think that that stuff is going to be as important.

There'll be a new UI, which will be, it'll be voice. Well, I don't think, I don't think anyone's getting away from the three-frame UI. I mean, every single SaaS cloud vendor uses a three-frame UI. You have navigation, content, search. That's it. I, I, I'm agreeing with you. I think I'm saying the feature set is more like, how, how good is the knowledge of the system, the knowledge [00:29:00] the system has.

And how, how much will it be able to do what I need it to do? And there'll be other things, like we were talking about on the USDM call, I tried to say it anyway for a few minutes before we rammed into another topic, but it's the verification ecosystem. And I think that's the next, kind of the next solution.

You bring up an interesting point, uh, here, which is, if in five years the technology is so good that the UI doesn't matter, have we improved it to the point where the expectations are going to be that it's making most of our decisions for us? I mean, we're not going to care anymore what it looks like, because we're all going to care about how well it operates in the way we think, against the way we think, which, which in and of itself... I mean, well, sorry, let me... The point is that [00:30:00] if we give up the UI, are we giving up autonomy?

Are we basically saying, ah, just fucking do whatever you want, because I just need it to work a certain way? Like, are we truly giving up the last aesthetic piece just to get the final bit of need? I don't know. Yeah. Yeah. I think that's, it's, um, that's, that's a piece of autonomy that would be given up, but there will be, I mean, I think of...

I was just thinking about this the other day, or even after we got off that call. Was it yesterday? Yeah, yesterday. Geez, time flies, huh? Um, you know, one of the most popular, you know, terms of the year is fact checker, right? And maybe in the last four years, it continues to be. I mean, I just imagine a whole platform.

That's, the number of platforms and tools and modules and things that are all based on that same model of, of, of crowdsourced [00:31:00] capability. And, well, there's like a diminishing returns question, okay? So if we have to keep checking the AI to make sure it's right, then we're wasting more time than it's worth. Not if we do it efficiently, and that's the question. It's like, will someone build that next?

And that's, that's the checks and balances against the AI that I think is gonna be interesting, man. I'm telling you, I think it's coming. Yeah, we gotta go into the rabbit hole for just a minute here, let's just get out our claws and climb in, because what you just said, I had a thought, a couple, I don't know, I don't know when I, I'm not even gonna put a time reference, I just had a thought.

In some recent period of time, I had a thought. You know, like most futuristic movies and shows that are sci-fi, everyone's just got black screens with green writing on them, or they have, it's always the same, right? There's no frills. It's just a bullshit vector graph on a screen. Right. In the future. So, what you just said a moment ago about how the UI won't [00:32:00] matter.

The thought that I had back then, which was just triggered by that statement, is: can you see a world where there's no such thing as a Windows background? Where there's no personalization of an OS? Where there's no black or white or color themes or schemes? There's simply text on a screen. You literally walk up to a screen, and it's a dumb terminal. You flip it on, it's a black background with white or green writing, and you just ask questions.

And it's like, yes. That's, that's a full loss of autonomy. You're taking away... and as dumb as it sounds, I have had that thought as well, that text will rule. And when Slack emerged, I had that thought. It was like, oh boy, we're back to text and IRC. Man, this is a valve, a valve of information that's blasting through, and chatbots are going to be huge. And then we went to Gartner two years later and they're like, oh, chat is [00:33:00] it, the chatbots are going to be everywhere, you're going to see them on every website and all this stuff.

And we were all talking about even back in 2013, 14, you guys, you were talking about that a lot. And I think there is like, yeah, there's gonna be a number of text elements, I think, in things. But I also think that, um, more of it is, there'll be a user experience, I think. It'll just be, you get to pick what it is.

So I think the exact opposite. I think, I think you, you design the UI. You're just gonna either think of it, or you're gonna be able to put it together, and you'll create your own user experience. And that's where I think it will go, or at least how it will be sold: that you will have the autonomy to build the world you want to live in, which is scary.

And there's not enough oxygen down here, this deep in the rabbit hole, Mike, we got to come back up a little bit. So, so all I'm saying is that I see it as, I think that the text element of things is the, the precursor [00:34:00] to... some people, they're great in email. Some people are great in text. Some people like images.

No one's great in email, no one's great. Some people like their Vision Pro. Some people are gonna have implants. The one person who bought a Vision Pro from Apple loves their Vision Pro. Yes. I just got a new head strap. I spent like $600 on it. No, I'm just kidding. Um, but I do think, I do think you'll be able to create your own UI and your own experience.

But you know why we'll be doing that. We'll be doing that for the purposes of trying to get people some sense of autonomy. Oh, listen, we've stripped away all the rest of your autonomy, but here, you can make your... You can have a raspberry theme for your Slack environment, but you can't do anything else.

Just type, you little frickin worker bee. Don't think, just type. But meanwhile, you can have it be purple or green. Like, we're gonna give people these little breadcrumbs of autonomy through UI shifting, is what I think, what you're getting at. And I think the irony of [00:35:00] that is people are gonna be like, I don't give a shit.

We're ping-ponging, man. We're going back. Here's what I'm saying. You're going to be able to create your own experience. I'm not saying you're going to lose all autonomy, but you might. I mean, you might if that's what you choose to do. You might not be able to compete without it. So yeah, I mean, that's, uh, you'll have your... that's like not using the internet right now for your job.

And I think that's lost some autonomy with that too. And it's, everything's going one step further into the, to the next, into the best, right? Where we're going to lose more and more, uh, sort of control over things. I think that's the WALL-E scale, right? We're talking about the WALL-E meter. Yeah. We're at, we're at WALL-E two right now.

But I completely agree with that. I think that is where we're headed. And it's, it's a question of, you know, does that, uh, is that the momentum there too, too strong, um, for people to actually stop and do they [00:36:00] want to do the work to stop, you know, do they, do they, do they not want to have the convenience, do they want to pay for everything?

Um, you know, there needs to be more of an open. Why isn't everyone using desktop Linux? You know, because it's not as good. Oh, here we go. Listen. But here's the thing. That's what I'm saying. Like that's the example I'm using, you know, is it's a great, it's a great example, but we're we, we have to sort of, we have like a ton of shit to get through in this episode.

And that's free. That's free. I know. And it works great. No one wants to use it. Why? Because it's not a good development framework. 'cause no one makes any money. It's also not, you know, so it's like, it's not friendly. No, but that's, that's kind of what I'm saying is like at some point, it's all autonomy is lost because people keep making stuff that's too easy to use.

This is the moment when you should release Crispin OS. This is, you do it right now, and at first people will be skeptical, and then before you know it, everyone's using [00:37:00] Crispin OS. Yeah. Oh, you know, there are some areas I'm thinking of where I think there's a lot of opportunity, uh, to, to help with some of these things, but I'm going to, I'm going to course correct this here to get us back on track a little bit.

Yeah, get us back. We'll, we'll come back to this. So, uh, back to the day-to-day decision making. Um, technology selection becomes crucial during your day-to-day selection process, because the technologies you've already selected are going to impact all the decisions you have to make on a day-by-day basis.

If you have shitty technology in place, the decisions that you make on a day-to-day basis will not be positive decisions. Unless of course you're ripping one of those out to put in something better. They'll be mostly negatively geared decisions. And so every time that you or I, or one of our listeners, brings in a new system,

we should not be thinking about just how we get it in. We should be spending just as much time thinking about how to get it the fuck out. And here is [00:38:00] something that I hadn't done until like the last two years of my career, like year 23 to 24: I had never conceptually thought about an exit plan. Now I always think about, when does the contract end?

Okay, cool. Six months from now, we better start thinking about how to get out of this. Yeah. In fact, we just renewed a certain system in our company. We had, we had basically had four months to go and I'm like, Oh shit, I don't have, I barely have enough time to build an alternative system for us to use. We have to renew for another year.

What I should have done, and again, this was grandfathered in before I got there, but the point is, I should have had an exit plan. I should have an exit plan for every single platform that I have. And building exit plans is freaking hard. Yeah, it sucks. But if you don't have an exit plan and you're stuck with technology, then your day to day autonomous decisions will get weaker and weaker and less and less productive, in my [00:39:00] opinion.
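(A minimal sketch of the habit described above: tracking contract end dates and flagging anything nearing renewal with no documented exit plan. The platform names, dates, fields, and six-month lead time are illustrative assumptions, not details from the episode.)

# exit_plan_check.py - hypothetical sketch: flag platforms whose contract renewal
# is approaching and which have no documented exit plan. All data is illustrative.
from datetime import date, timedelta

LEAD_TIME = timedelta(days=180)  # start exit planning roughly six months out

# Assumed inventory; in practice this might come from a CMDB export or a spreadsheet.
platforms = [
    {"name": "ELN platform", "contract_end": date(2025, 9, 30), "exit_plan": None},
    {"name": "Airtable", "contract_end": date(2026, 1, 15), "exit_plan": "docs/exit-airtable.md"},
]

def renewal_warnings(inventory, today=None):
    """Return a warning for each platform nearing renewal without an exit plan."""
    today = today or date.today()
    warnings = []
    for p in inventory:
        nearing_renewal = p["contract_end"] - today <= LEAD_TIME
        if nearing_renewal and not p["exit_plan"]:
            warnings.append(
                f"{p['name']}: contract ends {p['contract_end']} and no exit plan is documented"
            )
    return warnings

if __name__ == "__main__":
    for w in renewal_warnings(platforms):
        print(w)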

So if you have an exit plan, yeah. And technical debt. If you have an exit plan though, you can be like, no problem. Let's get rid of this heaping pile of crap and put in something better. I got a plan. Let's do this right now. And then lastly, there's process design. Process design contributes to those same day-to-day decisions.

If you have shitty process, then you are going to have chaos, because you need flexibility. You need an A-to-B process that makes linear sense, that everybody understands. If it's just, oh yeah, we're gonna do it this way today, and every single time you make a decision you're doing a different methodology... I, I could, I equate it to, I still have a teenager left.

The other one's not, but principally I grew, I adulted two teenagers through my house, and you want to give them, you know, the necessary freedom to learn and grow, but not so much they end up [00:40:00] on the evening news. And so the goal in providing guardrails to everybody is making sure that you're always staying consistent with your process.

Uh, you have good process in place. You have good technology decision making and you have good technology exit planning. That's just your, that's just for day to day. Think of all the things I just said, just to get through each day. But now we get into the second part, which is the good habits. And so the things I just said, now we have to figure out how to do those repetitively and how to improve them.

So we talk about all these big strategic moves, right? But it comes down to the little habits that we build, the little decisions that we make every single one of those days. Are they habitually the same decisions? Do they come from the same decision framework? Are they always using the same balance of productivity, risk, innovation, and autonomy?

Or are you [00:41:00] shifting your decisions based on your mood? Based on your customer and how much they pissed you off last week, based on some other factor that's going to change your habit, or are you so capable of sticking to your habit that every time you make a decision, it is always properly balanced. So let me give you an example.

Um, so every time a vendor, and vendors don't like me for this, I don't think vendors like me generally, but every time a vendor sends over their standard contract to me, uh, my first response is always: why? And not like in a conditional way, but like, why is this clause here? Why did you write this indemnification clause? Why did you write this, uh, you know, you're going to keep our data for one year clause? Tell me why all these default clauses are in your contract. And

Why did you write this? Uh, you know, you're going to keep our data for one year cause. Tell me why all these default causes are in your contract. And It's a habit. It's just a habit that people that have been through this with me, um, started calling it the three year old strategy because I sound like a kid who's three years old, just constantly [00:42:00] asking why.

Um, people that have been in the room and hear me say this with vendors are like, afterwards, were you just being an asshole? I'm like, no, have you ever not wanted to ask these questions of vendors? But it's a thing, it works. Half the time the vendor doesn't have any idea why these things exist in their contract and they're willing to strike them.

They're just copying and pasting them somewhere else. And that's where you start finding those little hidden autonomy traps. By having these habits, where you're balancing these pillars, you start to see the autonomy traps. You're able to look and use these habits to say, That's not the same process you used yesterday.

Like, why are you doing it that way? But I sound like, I sound like just a general asshole when I do it this way. I'm not. I'm not trying to be an asshole when I'm nitpicking this stuff, but I'm trying to get everybody to develop the same habits. Absolutely. We should definitely, in every [00:43:00] contract, should be asking those questions.

Well, here's another habit that I do, and I'm, I'm just kind of reading down my list, but as I was thinking about all the things that I have trained myself to do, either I've learned them from somebody who's a really great leader, or someone's taught them to me, or I've read them somewhere in a guide. But I also do the future test, which is, every single time that I make a technology decision, or we do as a company, now I do the play-it-forward thing where, you know, in good project governance, this is kind of a rote practice: okay, you want to put in platform X, Y, Z, what value will you get out of it over the next three years?

Like what will the ROI be, et cetera, et cetera. But then to ask the next question, which is, which is what happens after the three years? And then to go further, like what happens if that vendor got acquired? What happens if they triple their prices? And sometimes you can get pricing lock-in, but you get my point. Like, you start asking the questions that aren't just about, yeah, I know, it sounds good, we'll have them for [00:44:00] three years, this looks like a good contract.

But you ask all the other questions. The future questions. Did you do that? Absolutely, future questions. In terms of the three-year, where we'll be after three years. First of all, I'm not going to usually sign a three-year deal, but after that, what are we going to do? And that goes to your, your exit strategy. That'll feed into that as well.

That'll feed into that as well. Right. But if we get three years, let's just say it's one year. I mean, you still, where is it going to be afterwards? Like what's the price change? I think in SAS is a huge problem. They just change their models all the time. Right. And it's happening more and more prices.

Obviously it's going to be a price increase every year. For the most part, a lot of these vendors are going to do that by default, but it's. It's changing, completely changing the licensing scheme is something that's getting more and more common. Oh, sorry. We know you don't ask the questions you're going to ask a question.

They're not going to have the answer a lot of the time, frankly. And they'll get somebody on the phone eventually [00:45:00] who's been there. It's company way longer than them that knows. But if you don't ask these questions, you're conceding your autonomy on this process. You're basically saying to the vendor, just here's my wallet.

Just do your worst against me and my company. We're just going to take it. You're, you're exercising your, your, your autonomy in a great way. When you're using these small little habits to go back to the vendor and say, no, no, no, no. We like your products, but no fricking way. Am I, am I signing this? And also.

Like, Hey, business, when we're done with them, like, are we just assuming we're going to use them again? No, we should not assume that maybe if they do a really great job, we'll reconsider them. But one other component with this is you're thinking like over the course of the future is, do you, have you invested in other things that are going to progress to take its place?

Yeah, that's that [00:46:00] side of it too. Like do I still need this thing? You know, I'm gonna need it a year or two from now Maybe it's worth having it for a year as a stopgap, but this other thing is gonna take its place I know that's gonna happen. So yeah, that's a question. I didn't I didn't write in the script But when I think about it, you know, I I do and this is something I've been doing over the I don't know Maybe the last four or five years Especially if you're building things, right?

Well, I asked them what their development roadmap is. Because I want to know if they're going to potentially put in something in their roadmap that I'm using somewhere else and I could just have, I could consolidate vendors. Consolidate, if you can. That's a good thing to do if you can get the same user experience.

So, new habit for Nate and new habit for everybody else out there in podcast land. You can also ask, um, what their roadmap looks like, if you don't already. And then when they tell you, if they're being coy, you can push hard and say, I want to know your roadmap. But if they tell you, you can say, huh, we were [00:47:00] going to do electronic signatures next year and you're going to do it.

And I was thinking of a different platform, but now I'm going to stall that decision making and I'm going to get my autonomy on like full blast now. And chances are, if you're first in line or 15th or 20th in line, you're not getting the B team. No, if it's something that's new, that's great. Yeah, you're running a risk using a new technology.

And could that be just shut down because it doesn't work? Well, sure. But if you're a small company and you can make sure you're going to make, you're going to get better support right out of the gate because they want it to be successful as much as you do. Oh, for sure. It's a proven application that was written by someone 25 years ago, and they're just hoping that someone will keep using this thing.

I can think of a few of those. Um, for sure. You have no, you have no control. You're going to get stuck. It's very true. You need to sign an NDA or something to get some of this stuff. And that's fine. You know, if you haven't bought it yet, but [00:48:00] that's worth doing. Well, we were talking today, uh, myself and some of my senior research team about this, uh, ELN platform that we have and just how difficult it would be to get off of it.

And we estimate it would take us about a year. I think it's about a year to export all of our data out in a way that we could then repurpose it. A year, mind you. Yeah. Um, but what are we going to do? What are we going to do? We had to put in a platform that solved our needs. And so we kind of took it, and took it pretty hard, on the loss of autonomy to have this functionality.

Um, we're not going to get that autonomy back, mind you, anytime soon. Difficult one. Well, so then that brings us to the third habit I wanted to mention, which is, um, everybody's favorite. And no, not governance. Governance is not a habit. It's a thing. It's a process. But the third habit is [00:49:00] documentation, which...

I mean, I love documenting things, but not everybody does that, but good documentation is like a paper trail of your autonomy. That's the way I think about it. When I'm doing a document, I feel good that I'm doing it because this is giving me agency over the decisions I just made. Here's why, here's why I'm going to make them.

Then here's the decisions I made, and then here's why I made them. And here's what's going to happen next. Like, I'm documenting this. Um, it's the difference between, we can't change this because we don't know how it works, and we chose not to change this because of these specific reasons. And, uh, you and I would need many, many hands to count the times that documentation for justification saved our bacon.

Absolutely. We were able to say, actually, see this document? This is precisely why we did this thing. [00:50:00] Because it's hard to remember everything you ever did and why and how you did it. And documentation is great for your legacy too, and that's a basic, obvious thing to say, but like, you want to make sure the next person who comes in is successful, hopefully.

Well, and, and this isn't really a habit, but I did have one. Your successor that you're, you're bringing up to, to work through these things, you know, they can be successful even if they didn't get exposure to it previously. Agreed. And I did have one more thing, which I couldn't really classify as a habit, but it falls into the realm of, I think, more behavior overall, which is getting comfortable with saying no.

Now, I would also then argue against myself by saying, saying no is usually not the best thing to say. [00:51:00] It's better to say something like, yes, but, or yes, and, instead of just saying no. But sometimes you have to say no, and if you're not comfortable saying no, then you will say something that's wrong.

Getting comfortable with saying no is an important, um, not habit, it's an important skill, I would say, because you're not going to say it every day, maybe not for a long time, but you want to be able to say no to bad ideas. But you also want to be able to say no to good ideas that come with very, very big strings attached.

Exactly. And that's, that's more like the, the "yes, and," or the: you know what, I know you want to buy this platform and it's such a great platform, but look at all the bullshit that comes with buying this platform. So yes, we need it, but... and we need to find an alternative, or we need to find a way to get around these strings.

Let me help you do that. Let's align on that mission together, that kind of [00:52:00] thing. But it's also nice to be able to say, no, that's, that's going to completely ruin our risk profile. Or no, that's going to send us technologically back to the Stone Age by putting this system that rhymes with bear joint in place for document management.

But that's what we've always used, Nate. I know, it's what we always... it's what the industry standardized on, by the way. It's what the vendors standardized on. I think that's the thing. It's the truth. It's what the CROs and the other bigger companies, they've all got it. And I think that's... They all have heads of IT too, dude.

They all have people that are not exercising their autonomy. Everyone's got autonomy. Everyone's making decisions. Uh Big, big MSPs, big MSPs and big, ah, just let it stack up, go in the portal and search for whatever you want. You got a hundred problems. We'll solve all those and just don't worry about that invoice.

It'll be cool. [00:53:00] Just add it to teams. Just add it to teams. Nobody cares. It's all free. It's free, man. It comes bundled. It's free. You don't pay for it. Um, all right. So the third big area was the, uh, God, I can never say this fricking word. The strategic approach, strategic approach, strategic approach, strategic approach, strategic approach.

I want to practice it in my car. So there's something I think about a lot, which is, okay, so we spend so much time worrying about vendor lock-in. We were just talking about that. What about, what about knowledge lock-in? So if only one person in the organization knows how a critical system works, and for companies, like, the size of the companies that you and I work at, this is very common.

Very common. Then you've already lost autonomy. The moment that platform arrives in your company, and the moment that super, super amazing [00:54:00] person who knows how to use it starts working on it, you're conceding autonomy. In fact, you pretty much conceded it all right on day one, but you keep conceding it, in pretty good amounts, right up until the moment they leave.

At which point, since they've gone and left, your only choice is to start building it back up again. Um, small, medium biotechs like us thrive on this problem somehow. Like, we love single points of failure. We constantly bring in the best and brightest, but we can never really afford two of them. It's usually only one.

And so we end up in this problem, the single point of failure effect. So let's assume though, for instance, we have the capability to take this from a strategic perspective. Let's assume that you and I have the capability to take a strategic approach where we can, um, affect this, this, this potential outcome.

So you've seen this too, companies that become so dependent on people or [00:55:00] vendors that they can't make changes when they need to. Oh, wouldn't it be great if we could do this? Oh, we can't, because the vendor's got us so locked in. We can't make that change. Um, the solution isn't just cross-training, though that's important.

That's important. It's about building a culture of knowledge sharing. And this is, like, all of your collaboration platforms and your training and teaching, um, continuous learning. But most of all, most of all, in my opinion, it's about that person, that ace pitcher, you know, your number one A-team player, them being willing to share their wealth, share their information, because you never know who's going to leave.

Yeah. And I mean, that's a good mentorship and leadership moment for that person, if you can. Absolutely. If they're not willing to share, to help them to share. Um, because [00:56:00] no matter how complex and painful it is for a company, if someone who has all the knowledge leaves, they'll move on. It might be difficult, but we're going to be all right.

And a lot of, you know, especially, especially in the SaaS world, you know, we're 15 years into that pretty much now. Right. At least, um, yeah, at least, well, even longer than that. I mean, I was in SaaS in 2009. Yeah. Yeah. I remember. Yeah. So, you know, pick up the phone and it's like... so that's a dark side of it, but really it's

champions instilling confidence within your peers, in the different functions of the business, to work on these things and learn how they work. And a lot of people already have some of that background, or they want to learn. They already have it from some other similar application at their last company, or the application you've implemented already.

Uh, it's nice [00:57:00] to see people in orientation and they go, Yeah, I've used that. That's awesome. Great. Yeah, I know how that works. And, you know, then maybe that's someone down the line. It's like, Oh, how'd you like to, you know, run it for your, your department and you can take care of all the groups for your department and manage that.

And, um, yeah, you know, it just sort of enables pieces of it as you go forward. And then on the business application side, I think the trend is more that the actual functional lines want to have the ability, within compliance rules and everything else, to manage the application. You still need to know how it works.

That's the thing. I think you can't go too far out where, um, when something breaks, they're going to turn to you, whether you even know the name of the application or not. Like, oh, why don't you know about this? Um, so you still have to regain that knowledge, or pull that knowledge from someone somewhere else.

Whether that's the vendor or that business [00:58:00] technologist, for lack of a better term. So, the paradox. Yeah, I mean, I think it's a good thing. I think as long as IT leadership has some form of guardrails and security rules, or just instructions, so that everyone's following the same process, then it's really for us to foster that, put the framework in, and let the workers who own the applications actually thrive and build and support

these tools and share the knowledge. It should be a learning organization. Everyone should want to learn new things, without forcing people to share what they know. But by not forcing them, are we sacrificing autonomy? Hold that thought, hold that thought. I'm going to come back to it. Yeah, sure. So remember when we talked about,[00:59:00]

how identity and value are intertwined last week? I mean, we spent a lot of time last week talking about this. Yep. The same principle applies here. So we need to identify which parts of our technology stack, which processes, which knowledge bases are truly valuable, and we build our autonomy preservation around those things.

Now we didn't talk at that level, last week we were talking about identity. Now we're, we apply the same principle here and we say we have this person, they're a valuable asset, they're a knowledge source, they're using technology that's key to them. So should we be baking in autonomy preservation around that person?

Or are we just going to let them do their job? And if we just let them do their job, I mean, that's good, we're letting them do their job, but are you and I failing at autonomy preservation by not putting guardrails around them? So I'll give you an example, and let me know if this sounds familiar to you.

A company that I worked at [01:00:00] had this genius employee. Yeah, this is a true story, and you know exactly what I'm talking about in this story. And this genius employee built an amazing custom platform, on top of a production enterprise platform, for the whole company, built it himself. And it was that company's competitive advantage, their secret sauce, if you will.

But he built it entirely on a vendor's proprietary platform because it was easier at that time. Now fast forward five years from then. That vendor tripled their prices. The cost to maintain the physical server stack to run it went through the roof, and everyone else was moving to the cloud, but they couldn't move this to the cloud.

This had to stay on physical servers, server after server. It would have cost millions of dollars. Then that person left the company. It became almost an existential [01:01:00] threat to that business. So nobody at any point in time put governance around that individual. Nobody at any point in time put guardrails around it.

They just let it keep going. So I, it's not a habit because I don't do this as much as I should. I do think about it a lot nowadays. But those single points of failure that historically I'd be like, that person's really, really smart in that thing. I'm now thinking, okay, great. And also how the hell do I put guardrails around that person in their thing?

Well, that's enterprise architecture principles, right? That's the "we're not going to build X, Y, and Z." When we hit a certain threshold, this is our avenue to move to a different support model. You can build principles, and that's what a head of IT or enterprise architect [01:02:00] or whoever does: you have certain rules that the IT organization and all the digital investments have to live by in order for them to be implemented or used.

But if you don't have that power to influence, and if it's just a binder of IT principles that sits in the corner (I'm kind of dating myself, right?), just this book that no one ever looks at and no one abides by, then it's useless. So you have to bake some of these principles or rules into how you operate, and we can call it governance.

We can call it operating principles or whatever, but those are important things to lay down. In that particular situation, if those principles had existed at the time and been adhered to, that wouldn't have happened, and there would have been some drive to change. And I think you can have all the governance in the world, but if you can't enforce it, it's not governance anymore.

It's just, [01:03:00] I mean, just a list of stuff that you're trying to do. Right. Having an SDLC, having a code repo, having those things would have all been steps towards autonomy preservation with that particular stack and individual, and those weren't done. But we gave someone complete and total autonomy to build an application, to do whatever they wanted and not follow the principles or the rules, just because it was great and it was awesome.

We gave one person all the capability in the world to do that. And let's just say it's still happening. I think it's more common in development shops, except there they instantly hit a wall with the development and solution architecture principles that a lot of development firms have in place. They just know not to do that.

They just know not to do that. Um, sure it was a one decision. It was, it was a number one decision, not a zero decision, but a one decision. But all the eggs were, it's a great [01:04:00] example of how it can go. All the eggs were put into the a, the a and the eye baskets. Nothing was considered in terms of productivity and risk.

It was all about giving this individual full autonomy to do his thing and do all the innovation necessary, but everything else would have to take a hit as a result. And as you said earlier in the discussion, it was a differentiator, kind of a competitive advantage at one point.

It absolutely was. And I think that's what made it difficult to look away from, because it was, wow, no one else has done this, this is amazing. And that's distracting from the exit strategy, from that long-term strategic approach. It's very distracting. It's like, wow, this is fantastic.

This is great. You know, the question would have been where it was going to go. You can hit a wall, and that's sort of how I think that type of scenario played out. [01:05:00] Let's suppose that in that same situation we had had some sort of autonomy guardrails, some identity guardrails, around this person, this stack, this data.

How would we have measured the success of preservation? It's not like security, where you can count incidents, or productivity, where you can measure output, or service, where you can measure resolution. In this case, the question, and this is vastly rhetorical but

still worth asking, is: how do you measure freedom? And when you ask that question, it's a very, very hard one to answer in any context. But we say, okay, we've been experimenting with this idea called an autonomy score, and we're looking at things like: how many critical [01:06:00] systems could we realistically move to another platform if we needed to tomorrow?

So if you come up with an autonomy score and you say, okay, we have 10 SaaS platforms, how many could I shut off tomorrow and move to another vendor easily? And let's just say it's one, right? In reality, maybe it's a little bit more, but let's just say it's one. That means your autonomy score is telling you that 90 percent of what you have, you have no autonomous control over, in terms of your ability to move away from it.

Then you ask that same question for success, which is: what percentage of core capabilities are dependent on very specific vendors? In other words, how much vendor autonomy do I have? Can I switch to another vendor tomorrow? Yeah. And then lastly, you do the exact same thing with single points of failure: how many single points of failure do I have,

and how many of them, if they left tomorrow, would fuck the business? [01:07:00] Yeah. But it gets a little bit more complicated, Mike, because I started asking these questions, and then the question becomes: are we actually measuring this autonomy, or are we measuring our anxiety about losing it? Yeah. And I, I think, um... and I cannot answer that last question.
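To make the autonomy score idea a bit more concrete, here is a minimal sketch of what that tally might look like. Everything in it is a hypothetical illustration (the field names and example apps are made up, and this is not a formal method); it simply counts the three things described above: what you could realistically move tomorrow, what is tied to one specific vendor, and what hinges on a single person.

```python
# Rough sketch of the "autonomy score" idea described above.
# All names and fields here are hypothetical illustrations, not a formal method.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    portable: bool                  # could we realistically move it to another platform tomorrow?
    vendor_locked: bool             # is a core capability dependent on this one specific vendor?
    single_point_of_failure: bool   # does only one person know how to run it?

def autonomy_report(systems: list[System]) -> dict:
    total = len(systems)
    portable = sum(s.portable for s in systems)
    locked = sum(s.vendor_locked for s in systems)
    spofs = sum(s.single_point_of_failure for s in systems)
    return {
        "portable_pct": 100 * portable / total,   # how much you could walk away from tomorrow
        "locked_in_pct": 100 * locked / total,    # how much is tied to one specific vendor
        "spof_count": spofs,                      # how many things break if one person leaves
    }

# Example: 10 SaaS platforms, only 1 of which could realistically be moved tomorrow.
stack = [System(f"app{i}", portable=(i == 0), vendor_locked=(i != 0),
                single_point_of_failure=(i in (2, 5))) for i in range(10)]
print(autonomy_report(stack))
# -> {'portable_pct': 10.0, 'locked_in_pct': 90.0, 'spof_count': 2}
```

The point is less the arithmetic and more that it forces you to answer those three questions for every system you run.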

I think in some respects, if we're talking about a one-and-zero kind of scenario here with autonomy: if you have total autonomy as an IT leader, and through your own autonomy you put in governance, put these principles in place to help support your company's success, to reduce your risk, to be able to work,[01:08:00]

are you in turn taking away the autonomy of the people who need to do their own thing? The more centralized IT and the governing IT body might be, are you taking away the autonomy of the workers, because you're just replacing some of the decisions that they feel they should be able to make? Compare that to a decentralized organization, maybe, where you've got more independent people who are making those decisions, but now you run the risk of too much autonomy.

So it's almost like the modern model that we were talking about before: you've got to have a balance. And if it's a one-zero paradox, I think you're going to end up asking this question for a long time. There's got to be a middle ground, a good balance between having a governance model that works, a centralized-type model [01:09:00] where there are some roles and responsibilities, and a decentralized model where, like we were talking about with the business technologist, you're giving out more autonomy. But you can't go all the way one way or the other, because someone's going to lose their autonomy.

Yeah, I hear exactly what you're saying, and in the back of my brain I'm like, first of all, we talked about this in episode one, the decentralization effect and its positive impact on autonomy. But you're exactly right: you don't just get autonomy back by decentralizing IT. IT loses it, is what I'm telling you.

IT loses it. Yeah. But the advantage is, by decentralizing autonomy, IT as an aggregate definitely takes a hit, while the individual IT functions and the business they represent get the positive effect as a result, because they're able to move faster and make better decisions. But you can still have single points of failure, and you can still have abstracted tech stacks that don't work.

Sure, sure. I'm just saying, as an IT leader, your decisions are going to be influenced by the masses more often in this scenario than they would be if you were the one accountable for it. Well, that's the day-to-day habit problem, like, what are you responding to? I think that's why it's kind of a

There's a reverse, uh, the one and zero thing. I mean, that's, that's why it's a paradox, right? Are you, yeah, are your decisions every day being based on cleaning up a mess or being based on something that's more related to the strategy you're trying to put in place? Like every day that you come into work, how many decisions you make are related to the strategy that you're doing going forward versus cleaning up things that didn't work in the past?

And again, it's rhetorical, but it's not; it's something that you need to think about. So, yeah, I think so. This brings us... Well, no, go ahead. I was just going to say, I think that's why this is a good discussion, because no matter which side of the coin you're on in [01:11:00] terms of how you want to operate your organization, there are a number of things to think about if you're trying to preserve autonomy within

IT and within your whole company. You want to empower people across the organization to make decisions, do different things, and learn new things, but at the same time you don't want it to be a free-for-all where you don't have any visibility or control over how things are moving. So I think that's why the modern approach that we talked about in episode two, I believe, always rings true to me, because there's sort of this middle approach where you've got to

be the leader, make the decisions, and use your ability to be autonomous to actually drive some of the autonomous processes that you're trying to implement across the whole organization. Yes, yes, that's the point I've been trying [01:12:00] to get to the most. And this kind of leads us into the last big bucket, which is creating the future vision, the long-term vision.

And so, yes, we talked about good day-to-day habits. We talked about how to effectively measure that value: how do I know if the decision I'm making about autonomy is valuable? You talked about building a strategic approach. A strategic approach and good habits. So now we talk about, okay, how do we put this all together going forward?

So literally we're asking ourselves this question, and we're asking other people this question, and people are asking me this question: all right, so what do we do about this? Like, yeah, great, you're talking about autonomy and preventing the loss of autonomy, but what do we do? We can philosophize about it all day.

But at some point we need to make real decisions in real situations. And again, using examples, because I have plenty: [01:13:00] we had a situation where we needed a new HRIS system. Now, the obvious choice is to go with what everything else was using. Yep. Because it would just integrate. But when we dug into what that "seamless" meant, we found out that seamless integration would mean complete dependency.

And a lot of connectors to other things that I wasn't comfortable with. And this kind of borders on security and data transfer and all those other things. But in truth, it was: okay, yeah, here's this great platform, and in order for it to work, not only do you need to buy this level of licensing, but you then have to turn on all these other things to make it work.

And all of a sudden it went from, yeah, I mean, these integrate seamlessly, but it is not a seamless integration. Yep. Um, and so instead we went with a slightly, slightly comfier solution that uses regular APIs. [01:14:00] Um, it allows us to pick and choose what we want to do with our data. And was it more work initially to set this up?

Yes. Lots of API work. And did it require more from our team? Yes. But I sleep better at night now, knowing that I'm of course paying less, because I'm cheap, but also because we have full control over all the little bits. And going back to the previous question of could I rip it out tomorrow? Well, it wouldn't be totally painless, but we have control of our own data.
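As a rough illustration of the "regular APIs" approach described here, this is a minimal sketch of keeping your own exportable copy of HRIS data, so a future migration starts from your copy rather than the vendor's. The endpoint, field names, and token are hypothetical placeholders, not any specific HRIS product's API.

```python
# A minimal sketch of plain-API data portability: pull your records on a schedule
# and keep a local snapshot you control. All endpoints and fields are hypothetical.

import csv
import requests

HRIS_API = "https://hris.example.com/api/v1/employees"   # hypothetical vendor endpoint
TOKEN = "replace-with-your-api-token"

def export_employees(path: str = "employees_snapshot.csv") -> int:
    """Pull all employee records page by page and write a plain CSV snapshot."""
    rows, page = [], 1
    while True:
        resp = requests.get(
            HRIS_API,
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"page": page, "per_page": 100},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("employees", [])  # assumed response shape
        if not batch:
            break
        rows.extend(batch)
        page += 1

    if rows:
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=sorted(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    print(f"exported {export_employees()} employee records")
```

The design choice being illustrated is simply that the data lands in a neutral format you own, so "ripping it out tomorrow" starts from your snapshot, not from a vendor export request.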

So to the extent that we would put something new in place, we wouldn't lose a lot of ground. Which brings me to my point: we don't talk enough about the human cost of losing autonomy. We talk about other value identifiers, you know, identity and the value of the stack, but it's also about [01:15:00] people's ability to solve problems.

To innovate, to feel like they're actually in control of their domain. So we did, we did some jorts recently about heteronomy and the digitalization of the workforce. Um, if you want to find those jorts, you can find them online. But basically we're talking about how much of an effort we're putting in place to move us further and further into the digitalization world.

But as a result, how much are we taking away from people's ability to sit there and continue to solve and innovate? Or are we just making them into people who push buttons, so the problem-solving muscles begin to atrophy? You know, Microsoft did a study last week. I don't know if you saw this article or not.

I did. I did. That was fascinating. They interviewed 372 people in the study, and they found that generative AI makes you dumber. Well, no shit. [01:16:00] Yep. Anybody could have told you that, because this great organ that you have is basically not being used anymore. So then the question is: okay, here we are, both of us, in 2025. If we were to say, okay, what will IT autonomy look like in 2030?

You and I are both seeing this massive push towards AI, of course, and automation, and no one has a fucking clue about how this is all going to work. And everyone's got a strategy, you know, AI in my strategy, AI in their strategy, but no one has any idea what it's doing. So what will that look like?

How will we maintain our independence between now and then? If you think about the future, are we getting towards a place where, in 2030, when you and I are walking around with canes into our CIO offices, autonomy actually becomes impossible?

Or do you think that now, do you [01:17:00] think that we will be able to preserve some and then if we double down now on our intentionality to preserve it, we might actually have a chance of still having some left in 2030? What do you think about that? I don't know. I mean, 2030 I think we'll still have some. Um,

That's a good question. I mean, a lot could be different in five years. It depends on how fast things move. Well, maybe we table that question for another night, because it definitely is something we should think about and come back to. Because I think that's where we get to a point where, if things are automatic and they're easy to consume, people will concede.

I mean, yes. I think the point, the point of this episode that I wanted to make was like, basically this episode is the beginning of really the core of the season's theme. We, in the beginning, we talked about the four pillars, [01:18:00] productivity, risk, innovation, and autonomy, and this is where they all intersect at this moment.

So if we're too aggressive about preserving autonomy, we risk missing out on innovations. If we're too passive, we risk losing the ability to innovate and to be productive; it's always a balance between all of them. And so autonomy, I don't think it's something that you achieve, right? It's something that you maintain.

We've already all achieved it by default. So every single day, every conversation that you have, every decision that you make, you're either strengthening or weakening your position for preserving your autonomy. I don't know what you think. What are your thoughts on that? I think it's something you have to hold on to if you can.

I. I don't think it's something that's always top of mind with everyone, but it will become more and more top of mind as people's lives are affected. Do you think people will, um, [01:19:00] 2027, 2028, will just wake up one day and be like, what the fuck just happened? Like, where did I lose all my decision making capability?

I mean, do you think it's that soon? Do you think it's further down the road? I do think it's coming, a little further down the road. But the optimist in me says that hopefully the decisions that really need to be made can get more time, or more airtime and more focus, than some of the decisions that are less consequential, which we can sort of source off to AI.

But how far does that go? You know, will we even know a decision's been made if it's so easy to make one without a human involved? Yeah, that's a great question. So I think that over time it's: how does it benefit us? I think there'll be new creations by humans because AI exists. There'll be new things that'll be built [01:20:00] by humans because AI is getting better and better and better.

Will some things go away? Yes. Will there be things that are going to be difficult? Yes, absolutely. But I do think, let's say, getting closer to quantum, or better energy, cleaner energy, or solving some of the world's problems, that's where humans will be plugged into the most important things.

Um, hopefully more of us can focus on that instead of some of the, the, I don't wanna say remedial things, but things that we do now, um, where maybe it can be a bigger consortium of people who are putting their skills to use to, to help do those things. But that's a rosy picture I'm trying to paint. I don't know how much that's in reality.

This is not an optimist podcast, Mike. This is all doom and gloom. I know. Hey, I'm just trying to be [01:21:00] the beacon, the lighthouse right now. We're trying to give hope and give a strategy to people to keep the flames lit. I am, ultimately, and always have been; that's why I'm in technology. I think it's a greater-good thing: everything that comes along has good and bad, but we ultimately need to evolve here somehow to a better place.

And I don't know. I don't think people can align on things and figure out where they want to go, so we need some help. And you're saying we just can't keep trolling and race-baiting each other online and on social media? It's not going to work. It's too much fun, right? I mean, people have way too much fun with that.

You can turn on Bravo, you can go on to Facebook, or you can do whatever you want and watch everyone. It's "the Facebook," by the way. The Facebook, that's right. So I do think we're going to be [01:22:00] somewhere where a lot of things are potentially going to be taken care of, but it's a question of how it affects everybody. You know, is it going to be worth it?

Do you think someone will take, do you think in five years from now an AI will take care of the ceiling in your basement? No. I'm totally autonomous with this decision. You're totally autonomous? That's awesome. Preserve that. So, we're going to do an archetype tonight. Um, but before we get to that, and just close out the show, before we do that though, um, next week we're talking about

creating technical diversity without chaos. So now we're in the shit; now we're in the meat of this theme. We have to take what we talked about tonight, this single point of failure idea, this tech stack idea, and figure out what maintaining diverse technical abilities looks like.

How do you [01:23:00] maintain a diverse ecosystem? It's not just a matter of saying, well, you just have a one model. No, you can have a zero-and-one model and still have a diverse technical ecosystem, but how do you maintain it? That's next week, and I think it's going to be a freaking fantastic episode.

So, on episode two, no, three, we did our first archetype, the Great Homogenization, which was profiling that IT leader who has to balance their vision between one autonomy concession or another. And last week we had intended to talk about the Identity Identity Crisis

as an autonomy archetype, but we ran out of time, so we're going to do it this week. And basically, what is the Identity [01:24:00] Identity Crisis? Well, again, if you listened to last week's episode, this will make perfect sense. If you didn't, stop right here, go back and listen, and then come back to this part of the show.

So every app and service either demands or provides capabilities for an identity provider. Okay. So nobody has an app that doesn't require authentication. Well, let me rephrase that: nobody has a cloud app that doesn't require authentication. Everybody requires authentication.

So like XYZ app won't accept SAML or won't accept OAuth or won't allow you to disable it. And research vendors, at least the ones we use in our company, all have their own built in authentication schemas for their software, and they don't use any applicable standards. They just kind of make it up and make their own authentication.

Then you have users who have more passwords than brain cells, sometimes it seems, and who, despite having single sign-on, are still writing them in [01:25:00] notebooks; you have the Chrome password manager, sticky notes, notebooks. And you have the problem of vendors promising that seamless identity integration.

But what they mean is: as long as you use our identity stack. A certain big company that begins with M does this, or else you pay for their SAML. And I'm not giving Google a pass on this either. Microsoft's definition of hybrid means Microsoft, and Google's idea of federation means Google.

And then, Shadow IT isn't just about unauthorized apps anymore; it's about unauthorized identities. So, forgotten contractors, shared mailboxes. And that actually wasn't the last one; I had two more. Everybody wants Zero Trust, but nobody wants to do the Zero Trust homework. And then lastly, maintaining autonomy over identity has become like trying to herd cats wearing invisibility cloaks.[01:26:00]

I think I might have gone too deep into the metaphor there. But so how does our fearless IT leader resolve all this shit? Well, one: map your identity territory. Two: build your identity stack with independence in mind. Going back to the discussion we had tonight about coming up with technological solutions that give you the best exit strategy, you should certainly be doing this on the identity side.

Three: create an identity value framework, where you map access patterns to business value and you define what makes an identity high-value. Four: own your identity strategy. Don't let any vendor dictate your identity architecture; you maintain control. And lastly, five: focus on business enablement. Make security invisible but effective, build identity flows that match business processes, and, whatever you do, don't try to make security the reason for doing a thing; make it [01:27:00] part of the solution to doing that thing.
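For the "map your identity territory" step, here is a minimal sketch of what such an inventory could look like: each app, how it authenticates, how accounts are provisioned, and its business value, so you can see at a glance where a standards-based exit exists and where it does not. The app names, categories, and field choices are hypothetical examples, not a prescribed framework.

```python
# A minimal sketch of "map your identity territory": an inventory of apps and how each
# one authenticates, flagging where no standards-based exit path exists.
# App names and categories below are hypothetical examples.

from dataclasses import dataclass

STANDARD_PROTOCOLS = {"saml", "oidc", "oauth2", "scim"}

@dataclass
class AppIdentity:
    name: str
    protocol: str        # e.g. "saml", "oidc", "proprietary"
    provisioning: str    # e.g. "scim", "manual", "vendor_only"
    business_value: str  # "high", "medium", "low" per your identity value framework

def identity_territory_report(apps: list[AppIdentity]) -> None:
    for app in apps:
        portable = app.protocol.lower() in STANDARD_PROTOCOLS
        flag = "OK    " if portable else "LOCKED"
        print(f"[{flag}] {app.name:<20} auth={app.protocol:<12} "
              f"provisioning={app.provisioning:<12} value={app.business_value}")

inventory = [
    AppIdentity("payroll-suite", "saml", "scim", "high"),
    AppIdentity("lab-notebook", "proprietary", "vendor_only", "high"),
    AppIdentity("ticketing", "oidc", "manual", "medium"),
]
identity_territory_report(inventory)
```

A high-value app flagged LOCKED is exactly the combination the discussion above warns about: important to the business, with no independent way out.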

Recently I was doing my key stakeholder interviews with a client's executive team, and I've been asking them this one question as I go along, which is: if security were sort of a spectrum, on one end you had absolutely no idea there was security.

It was all happening in the background, transparent. On the other end, you were constantly being challenged and totally forced to recognize there was security going on, 100 percent of the time. So, nine out of nine people said they did not want to see or smell or touch or feel security. They just wanted to know, and be assured, that it was going on in the background.

So the Identity Identity Crisis, Mike: I listed some things that happen, and I listed five ways the IT leader can address it: map your identity territory, build your identity stack with independence in mind, create an identity value framework, own your identity strategy, [01:28:00] and focus on business enablement.

Thoughts? Identity is probably the most important system you own, and one you need to make the decision on centrally. It's really your decision, and IT's decision alone, I think. It's one of the areas where you want to make sure you fully understand the capabilities of it and the benefits, and the ability to respond when you get alerts from it.

The overall architecture, you need to know a lot about. No one should have more knowledge than your group, or the cybersecurity team, whoever manages it, about how the identity infrastructure works. It's truly become the place in which everything connects. So it should be one of the first [01:29:00] things you put into place, and you need to have a strategy around it.

And you should be thinking about where it's going to go, how it's going to scale, how you're going to build it. But to Nate's point also, it's not just a governance and risk play; it's also a user enablement and user experience play. Users need a place to store passwords. They need to know what they have access to. They need to know what apps they use every day.

They need to know how to find things. They need to know how to verify that they are who they are. All things that will make their lives easier and help them get their jobs done. And it's also one of the constants; it shouldn't be something that's changing a lot. So it's a constant for them throughout probably most of their time there.

And that's why it's a big decision, and why there needs to be a [01:30:00] strategy, because removing an identity platform is a very pricey and painful experience for a lot of companies. And that's why so many people still use Active Directory. Is that transition why? I think that's why. It's hard for people to get out of it.

Even though there are a couple of push-button solutions that exist now, probably for small and medium businesses, but maybe not for the larger ones with multiple forests and domains and other things. That's a big challenge. But yeah, I think this is a centerpiece of a lot of how we do things going forward, and in the past as well. But it's also a big part of

what you learn as an employee. From the identity system and user experience piece, you can also take how your company uses identity and apply it in your own life: how you should manage your own personal credentials and how you access [01:31:00] applications outside of work. If you put the right platform in, that helps people to learn that if they haven't already.

That's an outside, added value that identity platforms can bring, that identity awareness can bring. Yeah. Why is the help desk asking me for my PIN code when I call the help desk? Well, if we can explain why that's important, then they might start wondering why, when they call the bank or whoever they use, the bank just takes their word for who they are and changes their bank account number, and why they may want to switch banks.

You know, or why they might want to switch insurance companies, because they don't have the level of security that's protecting their personal data, perhaps, or they may have it, but the people there aren't trained appropriately and don't know to use it. So there are all these things where identity, at the core and moving outward, is so important, not only within the company but also outside the company.

That's good. That's good stuff. I think it's a [01:32:00] big, big part of it. It's probably one of the few examples where it's hard to distribute the responsibility, the responsibility of identity, outside of a centralized function like IT or cybersecurity or governance, risk, and compliance. It's one of those platforms.

It's a centralized function. It's hard, for me anyway, to make an argument that it's something that can be delegated easily.

Okay. I dig it. All good stuff. Yeah, good discussion tonight. There's so much here, so much to think about, and so much of it ties together. Just like we were saying about identity a moment ago, autonomy is not just an IT leadership principle or concern. I think it's on everyone's mind with the emergence of these new sets of technology that are [01:33:00] in the news, in the press, and politically charged every single day.

The number one question that I probably get from people about AI, after a few beers, is: what should we be teaching people to learn, how to be nice, if AI is going to take everything over? And a lot of people do think it's just going to take everything over.

Look at the headlines. I mean, Christ Almighty, the headlines get it so wrong every day. There's a lot of sensationalization, I don't know if that's even a word. There's a lot of sensationalizing of all this, and a lot of it gets created because you're going to read it if it's there. Yeah, you know why? Because the free press doesn't have autonomy.

Uh, see right there. Um, all right. Well, yeah, that this was awesome. I mean, it's a lot to unpack. I think once again, we've [01:34:00] successfully asked or created more questions than we've answered, but that's important. I think what I'm what I'm learning from this season and from from these scripts and from the research is that we have to ask a shit ton of questions.

And to do this podcast right, we can't naturally have eight-hour episodes. So we have to cover the most critical questions, and then we should ask the other ones but leave them open so people can think about them. Next week, like I said, we're covering how to create technical diversity without chaos.

I mean, think about this. Like, technical diversity. How do you create a world where you have autonomy over your technology stack and it all works together, and it's diverse, and you have no chaos? Uh, there's a lot that goes into doing this, so it's going to be a good one. Um, I want to remind [01:35:00] everybody.

Sorry, you had something to say? I was just going to say, we've created a lot of questions. I think that's one of the things that's going to be more and more important: that we keep creating questions. And it's not about the answers; it's about the questions. Because I think, intrinsically, the mass amount of questions is going to help build the answers, more and more, just based on the questions people ask.

It's going to help determine more of the answers. So they're more important than the answers. I think they are, in our current state. Yeah. Keep asking; it's a good thing to do. And like you said, you used the example of one of your habits, right: asking why. That's more important now than ever before.

I made a note here. So in addition [01:36:00] to posting the podcast on Substack, I'm actually going to post a list of the questions that we essentially raised. It's just one big list of questions about this particular episode, and I'll put it out there as a standalone Substack post in addition to the podcast.

So I'll put that out there for everybody. Um, I want to remind everyone that if you like our show, please give us all the stars. Uh, again, in the description, we have links to buy stuff and donate to Wikimedia and give us a beer if you have leftover cash. Um, as always, uh, bark more. No, sorry. Bark less, wag more.

Don't be a dick. Especially don't be a dick to the hardworking IT folks in your company. Don't be a dick to old people. Don't be a dick to drivers who drive exactly the speed limit. Even if you gotta get somewhere faster, you know what? It's, you're just gonna get behind somebody else that's slow. If it's not that person, it's another person.

Just sit back, enjoy the sunrise. Um, be cool to people. It'll get paid back in spades. And be nice to animals. And be nice to [01:37:00] your friends, and just be nice to people in general. It'll make the world a better place. Uh, Mike is awesome, once again. I think next week we'll have a special guest. I'm hoping he can make it.

And he can provide some guidance for us on this technical diversity question. He is somebody who, historically, has been a zero-decision maker. So it'll be interesting to get the perspective of somebody who has more of an ecosystem mindset. Not that his method is wrong; it's great for him and he's had a lot of success with it over time, but you and I, I think, deviate more towards the one side.

And I think it will be a really good discussion. Absolutely, I'm looking forward to that. Yeah. And so, Ryan, everybody, we'll be at Bio IT World somewhere, doing some kind of deviant stuff with tables and questions and beer, and maybe speaking. And we were both on the USDM Life Sciences Summit yesterday, or [01:38:00] not a podcast, a webinar.

So if they post it online and you want to hear me and Mike talk out of our butts, you're welcome to do that. It'll be fun. Actually, we can talk about our butts. I think you and I probably had the best stuff. I'm not being biased.

All right, man. Thank you. This is coming. Yep. It was good. I, as always leave here with my head spinning and I got to make a million notes because I want to think about more stuff. Sounds good. I'll make some notes as well. All right. More questions. More questions. More paradoxes. Yep. All right, [01:39:00] dude. The calculus of I.

T., we value autonomy. Through the cyber paths

we glide, in the circuits we confide. No restraints, no need to hide, in the system we reside. Through the[01:40:00] 

code we weave. I fade in the data sees we skate zeros ones that can't obey we control it it's in a binary whispers in the night flashing screens that glow so bright in the matrix we take flight[01:41:00] 

Calculus of I. T. Without you it's only. Calculus of I. T. Without you it's only.

