6 - When Should You Bail
Michael Berk (00:00.854)
Welcome to another episode of Freeform AI. My name is Michael and I do data engineering and machine learning at Databricks. Today I'm joined by the one, the only.
Ben (00:11.345)
Ben Wilson, I teach myself React at Databricks.
Michael Berk (00:15.534)
Hell yeah. Today we're gonna be talking about something that I have been thinking about a lot, per usual, which is when to bail. And from a personality perspective, I'm very bad at this. I stay in things way too long. We're not gonna go any deeper than that, but it's a tendency where you have a lot of investment, whether it be emotional or financial or just time, in a specific thing, and the sunk cost fallacy often applies a lot harder than people think. So with that setup, Ben, I'm curious your take. How often do you see people stay down a path of solutioning or prototyping or leveraging a tool for too long?
Ben (01:02.481)
In my immediate peer group? Almost never. I can count on one hand the number of times that's happened in the last two years on my team. It's so rare. Everybody's so averse to that; I guess it's just a culture of avoiding that problem space as much as possible. However, where I was before working on this team, doing the job that you do now, it was more rare to find people who avoided that than people who went down that path. I would see it everywhere. I saw massive projects that people developed in the field and tried to get open sourced or something, and they're so excited in their first couple of months of doing it.
And then I go and talk to them a year later. I'm like, hey, how's that thing going? And they're like, yeah, it's getting pretty complicated. Yeah, it seemed like a complicated thing to try to solve for one human. And then a year after that, they're like, I regret everything that I've done, because now I'm constantly fixing this thing and it's so broken. Yup. I warned you.
Michael Berk (02:22.55)
Yeah. I have said that personally many times. The amount of crap that I've built where I've been like, everybody will love this, right? And I try to push it to our internal repos, and maybe one person looks at it once. And they don't even like it. So.
Ben (02:41.797)
Yeah, that's one side of the house: building something that nobody cares about, because you're just not rubber ducking with other people and asking, hey, is this valuable? And you have to get away from the sycophantic behavior of people saying every idea you have is amazing. You don't want that. You want people who are critical of what you're trying to do, just providing honest feedback. But the other side of the house is, you have a valid business problem. This is a real project that we know should work.
Michael Berk (02:45.292)
Mm-hmm.
Ben (03:11.729)
100%. And there's the rabbit hole of selecting the wrong way to implement something, or latching onto the wrong tool, because you've already spent so many weeks trying to make it work, so you just keep going down that path. The deeper you go into it, the more resistant you are to trying anything else, because you're like, I've already spent this amount of time, I know this space, I know this tool.
You just start listing off things that are wrong with the tool instead of stepping back and saying, should we try something else?
Michael Berk (03:50.186)
Exactly. So what I would like to use this time for, and hopefully you, the listener, are going to get a lot of value, is basically determining the psychology of how you know when you should bail: being aware of the biases that you probably are going to bring in, and then creating heuristics or rules that let you know when it's time to bail.
Because if you, Ben Wilson, as a software engineer at Databricks, if you and your team successfully avoid staying on the incorrect path for that long, that's a really, really valuable skill. And 99% of people, at least that I've seen, could benefit from improving in that area. Sound good? Cool. So we're going to keep this very abstract. When I am working on a project, I typically have some level of emotional investment. And I've been thinking through a recent iteration of a design choice.
I have been pretty good at staying successfully detached from those designs. It's just a means to an end. But there have been some people that I've been working with very closely who have been getting attached. So that's potential bias number one. I'm curious, what do you think about being emotionally attached to your work, and what's a healthy boundary, so that you can delete your entire code base and not feel a thing, but also have some emotional investment, so that maybe your subconscious is working on it while you sleep, maybe you get some shower ideas?
Like, you don't want to be fully detached. So where's this middle ground?
Ben (05:27.409)
That's a fascinating question. I can only speak for myself, so that's a huge caveat to this. But I used to be so guilty of being attached, not to what the thing I was building was going to do. I was actually emotionally detached from that; I didn't really care. I cared about the solution that I came up with, the actual code and the implementation. Like, this is so clever.
And I realized, after doing that a couple dozen times and never getting any sort of feedback about it, that the only thing anybody ever cared about was: did it solve the problem? You might have one or two nerd buddies who are like, dude, I checked out that code. That's super cool. Or somebody's like, wow, that looks really sophisticated. Can you explain it to me?
Three minutes into that explanation, their eyes glaze over and I'm like, yeah, you don't really care. The only person that cares about that is you, the person who's writing it. And in the almost decade since I stopped doing that, I no longer care about the implementation at all. That doesn't mean I don't care about its quality, or whether it runs stably, or whether I can change it if I need to.
Anything that I write and push out to, quote, production, which for me is open source, just has to work, and it has to be something that people get value out of. Nobody cares how cool the implementation is if it doesn't work. So my whole mindset now is emotional attachment to this thing being used by people, and them saying or feeling like, this is awesome for what it does and it solves my problem. So it's shifting where your attention is going, to more of a product mindset and not an implementation mindset.
Michael Berk (07:39.736)
Yeah.
Yep. One of the best pieces of early advice I got: I had worked really hard on this implementation, and it sorta kinda worked, but it also didn't. And my manager was like, great job. We're going to kill it. And I was like, what the hell? I've been working on this for like three weeks. I finally got it working. I stayed up late, I did research, I learned about all the statistics behind the implementation. And he was like, that's fun. We're going to kill it.
And so then I chatted with a teammate and was like, you seem very cool, calm, and collected. How do you handle your work being killed? And he was like, our job is to build maps. We are explorers. In the prototyping phase, we want to see what will work and what doesn't work. That's it. And it was a pretty eye-opening moment, where I realized I didn't actually need to create a village, I just needed to go and see where the water was, where the berry bush was. We've talked about this on Adventures in Machine Learning. What are the key things? Then report back to the king, who in this case was my manager: what are the resources available? Should we go explore this route? Should we go a different direction? And then once we've prototyped, then you can start putting down roots, and it's less dangerous to be attached to your work. But in the initial prototyping phase, it's really important that you're fine with just deleting a notebook and never seeing it ever again.
Ben (09:11.471)
I would expand on that, too.
There'll come a certain point, after you've written so many millions of lines of code, that number one, you never even remember what you built, like at all. And number two, you actually get this sort of inner sense of satisfaction and joy seeing either you or somebody else on your team delete one of your implementations. Because it's like, hey, it's
Michael Berk (09:41.356)
I'm there now.
Ben (09:43.725)
It did what it was supposed to do for a period of time. Now maybe nobody uses it anymore, or there's a better solution, or because of the interactions this code has with other systems it's now kind of unstable and we don't like maintaining it anymore. Or it's just irrelevant. And you're just like, man, that's awesome. I just deleted 27,000 lines of code. Sometimes you're like,
when did I do that? You go back through the commit history and you're like, yeah, that was that entire month of April three years ago. And yeah, good riddance. See ya. And you just delete it. And then you see that PR merge and you're like, hell yeah.
Michael Berk (10:24.024)
Yeah.
Michael Berk (10:27.736)
Yeah, exactly. I finally got to that point. We deleted a big chunk of what was built in the prior implementation for the project I'm working on. It was so fun. I was like, remember when that happened? And the interesting thing that made me still feel good about it, and feel like it's not a waste, is that the ideas we learned via that implementation are carrying on in a different design. So it's not like it's actually getting removed. It's just sort of
taking a new shape. It's like reincarnation, almost.
Ben (11:01.403)
Yeah, the only time that you won't feel that sense of joy is if you sink too much time into something that is just really going nowhere and it's not novel. It's like, we need to build, I don't know, a data manipulation library or a module in our library, and this problem has been solved before, and you just happened to go about it the wrong way.
And you're like, man, this one thing that we thought was just some random edge case is actually super important, so you have to trash everything you've done and redo it from scratch. Time and experience give you insight into when that might happen and what to go test early on, so you don't go down that path. But there are times where we've shipped stuff into production and then that comes up. We're like,
whoops. Can't wait to just delete all this and reimplement it in a much simpler way.
Michael Berk (12:09.438)
Especially things that just evolve over time via need instead of having a really good upfront design.
Ben (12:18.063)
Yeah. And there's a trade-off there. It's like different philosophies of implementation. You can put a lot of upfront time into your design and a lot into V0, really build this robust solution that has all these features and pretty much solves the problem. That's a waterfall design, right? Your first version that you're going to ship is feature complete. We don't do that, for a number of reasons,
and we work hard to avoid it, mainly because we want to avoid the case of: we sunk all this time into building something that we don't know if people really care about. Instead, you build that minimum product. I can use a concrete feature, for instance:
Unity Catalog AI 0.1. We released that thing with function tool calling for the OSS implementation that ran in the main process. A function would be fetched from UC and it would just run in your main process. Super dangerous.
Michael Berk (13:35.33)
Just real quick for the listeners who might not be familiar, what is UC?
Ben (13:39.779)
Unity Catalog. It's a place for data governance, and you can also store things like functions in it. So, a Python callable definition, and then you can call it and it runs. Running the function locally, if you're writing relatively innocuous functions, is no big deal, right? Analyze this pandas DataFrame, and
Michael Berk (14:05.88)
Fetch the weather.
Ben (14:08.891)
give me the maximum value in this matrix, right? You can do all sorts of weird stuff, but those aren't the dangerous things. The dangerous things are: if I execute this function, it could spike CPU to maximum for 30 minutes, and that's basically a denial of service attack against that server. That server can't do anything or respond to any requests. Or it does something that just blows up the memory,
like a memory consumption attack. And that could be maintaining state on something, like adding massive arrays of numbers together, or recursing over something many, many times and having to hold onto all that state. So we released that knowing it was going to be basically bullshit. So we put a note in the docs. Like, hey, be careful here. This runs in the main process. Double check what your function does before you try to use it in an agent, because that agent might pass the wrong thing. You're expecting it to calculate the number of years between two dates, and it passes in data that would calculate seconds or something. You're like, jeez, oops.
So with the initial release, we got it out there just to get it out there and see, does anybody care? And we're sitting there looking at downloads, and it's like 20 downloads a day. Not really that much. Okay, let's do a little marketing. Let's let people know that this is out there. And we did that and we're like, hang on a second. It's at a thousand downloads a day. That's cool. Check back a week later.
Michael Berk (15:47.278)
Mm-hmm.
Ben (16:06.097)
3,500. Why are all these people using this now? Check back a week later: all right, it's trending upwards even more. This is kind of cool. We need to get something out there that fixes the shortcoming that we intentionally shipped. So I took like four days and implemented a sandboxed execution mode, and that went out in 0.3 a couple weeks ago.
And we got some feedback from people: thanks for building that. That's not trivial to do. We understand that. And yeah, that protects your agent on deployment. That's the iterative development: if I had spent an additional week on that initial release, I could have built that, and maybe our downloads never go above 10 a day and just taper off into nothingness. That's a week of my life I'm not getting back.
So why spend the time doing that until I know that people actually want this?
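To picture what that sandboxed execution mode protects against, here's a minimal sketch of the general technique: run the fetched function definition in a child process with a hard timeout, so a hostile or buggy function can't freeze the main process. This is an illustration only; `run_sandboxed` is a made-up name, and the real unitycatalog-ai implementation works differently.

```python
import subprocess
import sys

def run_sandboxed(func_source: str, call_expr: str, timeout_s: float = 5.0) -> str:
    """Execute an untrusted function in a child process with a hard timeout.

    If the function spins the CPU or hangs, only the child dies; the
    main process (e.g. your agent server) keeps serving requests.
    """
    program = f"{func_source}\nprint({call_expr})"
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True,
            text=True,
            timeout=timeout_s,  # kill the child if it runs too long
        )
    except subprocess.TimeoutExpired:
        return "ERROR: function exceeded time budget"
    if result.returncode != 0:
        return f"ERROR: {result.stderr.strip()}"
    return result.stdout.strip()

# A well-behaved function returns normally...
ok = run_sandboxed("def years_between(a, b):\n    return abs(a - b)",
                   "years_between(1990, 2024)")

# ...while a hostile one (infinite loop) is killed instead of
# blocking the main process forever.
bad = run_sandboxed("def spin():\n    while True:\n        pass",
                    "spin()", timeout_s=1.0)
```

A production version would also cap memory (e.g. `resource.setrlimit` in the child on POSIX) to handle the memory-consumption case Ben mentions; a timeout alone only covers the CPU-spin case.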
Michael Berk (17:10.092)
Let's play the inverse. Let's say no one downloads it. When would you know to delete the repo?
Ben (17:17.841)
Good question. It wouldn't even be my call, because that's not my repo. That would be a team coordination between the UC OSS folks, myself, and some other people, like DevRel and managers. And we would make a call: hey, nobody's using this, nobody cares, let's archive it and we never have to worry about it again.
Michael Berk (17:43.576)
From an organization perspective though, what are the axes of the decision that you would look to explore to determine whether you should just stop working on it or even archive it?
Ben (17:55.353)
So for us, we collect telemetry. Not on open source packages, we do not do that, but for Databricks users. We don't know who you are that's using our APIs, we don't want to know who you are, and we don't know anything about what you're doing with an API call. We just record the number of times that you call an API that we built. And it's anonymized. So we just have
the global count of hits to this API per day. And we're tracking that on everything that you're running that we write. So if you're using something in MLflow or in Unity Catalog or whatever, I can see when you create a function. I can see when you execute that function. I can't see that function, I can't see your parameters or anything that you used, but I know that you're using that software.
So that's what we use to track. What do we do when we have a suspicion that nobody's using some part of our code? With MLflow 3 that's coming out at Summit, right? Well, the RC is out now. If you look at the changelog, you'll see entire modules in MLflow just deleted. We deprecated them a couple months ago and let the community know: hey, we're getting rid of this.
If you have an objection, file an issue and tell us why it's important for us to keep it. Crickets, man, for some of these things. So it confirms our suspicion that nobody cares about this, nobody's using it. Stuff like MLflow Recipes. That was a big thing we did a couple of years ago. We spent a lot of time working on it, and it just didn't resonate with the community. So we let people know. We were looking at data on our side,
like, hey, nobody's using these APIs. It's a handful, like 30 customers using this per month. So we reached out to them and asked, why are you using this? Are you continuing to develop with this? And you have a face-to-face conversation, and the feedback we got was, some consultant built this for us; what should we move to if you're getting rid of it? Use MLflow Projects. They're like,
Ben (20:23.855)
Yeah, we can do that. Cool. We'll be ready when you guys delete this, then. And then you make the call: in the next major release, just get rid of it. Nobody's using it. It's not worth fixing it when CI breaks, or eating that money sandwich in CI runtime. Every PR that's filed triggers this thing, and you're paying for that.
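The deprecate-then-listen-then-delete cycle Ben describes maps onto a standard Python pattern: emit a warning for a release cycle, watch for objections, then remove. A hypothetical sketch (the module name is invented; this is not MLflow's actual code):

```python
import warnings

def load_recipes():
    """Entry point of a module slated for removal (hypothetical name)."""
    warnings.warn(
        "recipes is deprecated and will be removed in the next major release; "
        "file an issue if you depend on it.",
        FutureWarning,  # FutureWarning is shown to end users by default
        stacklevel=2,
    )
    return "recipes"

# During the deprecation window the call still works, but every caller
# sees the removal notice. Here we capture the warning to show it fires.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = load_recipes()
```

Using `FutureWarning` rather than `DeprecationWarning` is a deliberate choice in this sketch: `DeprecationWarning` is hidden by default outside of test runners, so end users would never see the notice before the module disappears.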
Michael Berk (20:42.424)
Hmm.
Michael Berk (20:54.872)
Got it. So it's basically a trade-off, a classic ROI calculation. The R, the return for the organization, is users, and specifically API calls. And the I, the investment, is software maintenance, CI cost, complexity in the repo for no reason, that type of thing.
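The "return" signal Ben described is literally a per-API, per-day counter with nothing identifying attached. A toy sketch of that kind of anonymized usage counting (not Databricks' actual telemetry code):

```python
from collections import Counter
from datetime import date
from typing import Optional

# Global tally keyed by (api_name, day). No user ID, no call arguments.
usage = Counter()

def record_call(api_name: str, day: Optional[date] = None) -> None:
    """Increment the anonymized counter for one API hit."""
    key = (api_name, (day or date.today()).isoformat())
    usage[key] += 1

# Simulate a day of traffic: three function creations, one execution.
d = date(2025, 1, 15)
for _ in range(3):
    record_call("create_function", d)
record_call("execute_function", d)
```

The point of the design is what it *doesn't* store: you can answer "is anyone still calling this API?" for deprecation decisions without ever knowing who called it or with what parameters.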
Ben (21:15.377)
Yeah. And even the size of the repo. When you talk about major projects, lines of code add up. How many modules are in there? We went through and deleted Recipes, we deleted the Java server, we deleted MLeap support. You count all those modules up, it's hundreds of modules. I think it was
Michael Berk (21:16.462)
Cool. All right.
Michael Berk (21:26.318)
Mm-hmm.
Ben (21:45.297)
Like pretty close to 100,000 lines of code we deleted in the last three days. That contributes like 100 kilobytes of install size for the library.
Michael Berk (21:50.487)
Interesting.
Michael Berk (21:59.726)
Yeah, which is notable. Okay, cool. So that's a really interesting case study, and hopefully it puts into perspective this ROI calculation: staying on a path has inertia, but it also has costs associated with it. Here's sort of the flip side of the when-to-bail question: when do you know you should stick it out? When do you know you should keep working hard and just,
Ben (22:19.121)
Hmm.
Michael Berk (22:29.326)
No matter what happens, we're going this route. I'm going to smash my head through this wall. It's going to work.
Ben (22:35.345)
More frequently than you would think.
I usually attribute that to a skill problem, even in myself, right? If I know intrinsically from the problem statement that this is a solved problem elsewhere, or that it should be solvable, and I just can't make it work, I need to get good. I either need to talk to some peers and bounce some ideas off of people, or talk to people who are much more senior,
and be like, hey, I'm really struggling with this. Do you have any insight into why I'm thinking like an idiot here? Why can't I work through this mental block? Sometimes just chatting about it for five minutes with somebody super senior, maybe they'll just ask three or four questions. You're looking at their face as they listen to what you're explaining, and you can kind of intuit, yeah,
what I was thinking here is completely wrong. And you just ask, hey, what do you think is wrong with this part? And they'll tell you: yeah, I don't think you would want to do that. And all of a sudden you're unblocked and you move forward. It gets into a tough situation when you don't have peers who have more experience, or your peers just don't know anything about the space. Or if
Michael Berk (23:54.477)
Yeah.
Ben (24:11.057)
if you're the one who's supposed to know and you don't know, then it becomes harder to get through that block. The way that I do that is a massive context switch of my consciousness. Stop what I'm doing, go do something else. Maybe it's something else work related. Maybe I'm going to just take a half-hour break during the work day.
Maybe I need to take an entire weekend of going outside, touching grass, playing video games, playing the guitar, chasing my kids around the park. Just go and do something that mentally disconnects you from that problem. And then when you revisit it, maybe you read through the code and think, I don't think this is right. Let me try some different things here. And for me, even just a 15-minute break will usually reset my brain enough to re-evaluate how I was going about it.
Michael Berk (25:19.884)
Yeah, that was actually my next question. So in the past few weeks, I've been acting as tech lead for one of our projects, and I have, I think, six people reporting to me, in theory, at least for this project, not in a managerial capacity. But my job is basically to decide what we do and make sure the end-to-end design works, and
I was really struggling with the complexity of it. So Ben, I'm curious if you have tips on how to approach something that's too complex for you to know every single component of. Yeah, you can delegate, but you need to ensure that everything will work end to end. So yeah, I can stuff everything in my brain, then go take a walk, then come back and look at it again, stuff everything in my brain again, go take a walk. That does work. But I was wondering if there are
shortcuts, or sort of mental frameworks, that allow you to think about complexity in a more abstract way.
Ben (26:28.561)
I don't have a framework per se, but I'm a huge fan of Excalidraw. Back in the day, I used to have a Mead notebook always on my desk. College ruled, had to be college ruled. And the finest fine-point gel ink pen that I could find, because I'd write in chicken scratch. But it would be filled with, well, not the things that I hate more than anything, which
Michael Berk (26:33.325)
Hell yeah.
Ben (26:58.403)
are PowerPoint slides filled up with boxes with arrows connecting them, these ridiculous architecture diagrams that aren't needed for most use cases. They are needed when you're talking about systems and services; there it's warranted. But most people aren't building that stuff unless you're at a big tech company that's providing services to direct consumers, or you have enough users to need that type of thing.
So that's not what I'm doing in Excalidraw. I'm drawing out ways to break down a huge, complex implementation into bite-sized chunks: what is going to go into a PR? What will be testable with unit tests? And what can I build that will then build upon that with stacked PRs, right? So I'm checking out one branch from another,
and just adding: okay, now I need to add this one module, but I need to know, what is the layer of abstraction that I can do here? By visually looking at all of the things this thing needs to do, I can see where I would need a util function, or, now that I'm doing some stuff in React,
what would need to be an external component that's reused across a bunch of different pages. So that gets built first, and that's just a PR. And I do a render of that component just as a dummy placeholder in the site, take a screenshot of it, put that along with the PR, and then delete the rendering. So it's a standalone thing: here's a component.
I'll be using this in these four different places. And then move on to the next thing: here's the next component that's going to be used somewhere else. So then I have this layer of abstraction where I know these things individually work, and then I can build a page that uses all of them. And then maybe I need to make five different versions of this page with different configs. So visually having that spread out for myself as a workflow helps me a ton.
Ben (29:24.081)
And for backend stuff, it's the same thing. Go back to Unity Catalog AI: we had a design doc for that covering the product requirements. It needs to do all these things: we need to be able to create functions, retrieve them, execute them, and list them, and we need to be able to integrate with all these different GenAI toolkits. The general design of that
was fairly straightforward. It wasn't super complex. We knew what needed to get done. But when you compare that design doc to the actual implementation that exists now, you look at it and think, this is kind of complex. There's a lot of stuff in here. But we didn't start from a position of complexity. We started from the simplest possible thing that you could do, and then started bolting on features that we knew were in our product requirements.
And as we evolved, we added more and more stuff, just based on feedback. People saying, hey, I need to be able to see this Python function, and the metadata returned from the FunctionInfo object from Unity Catalog is a massive wall of text where my definition is buried as string-wrapped JSON. I can't read that.
So, okay, we need a method that'll allow you to print that out and see it. But those methods are super complex and took us a while to build.
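The "definition buried as string-wrapped JSON" problem can be pictured with a toy example. The metadata shape below is invented for illustration; it is not Unity Catalog's real schema, and `show_definition` is not the real API:

```python
import json

def show_definition(function_info: dict) -> str:
    """Unwrap the string-wrapped JSON routine definition so it's readable."""
    raw = function_info["routine_definition"]  # arrives JSON-encoded, escapes and all
    return json.loads(raw)                     # back to plain source text

# Hypothetical metadata blob: the definition is a JSON string, so in a raw
# dump it appears with \n escapes, buried among the other fields.
info = {
    "name": "years_between",
    "comment": "Number of whole years between two dates.",
    "routine_definition": json.dumps("def years_between(a, b):\n    return abs(a - b)"),
}

readable = show_definition(info)
```

The real methods Ben mentions are much more involved (reconstructing signatures, types, and docstrings, not just unwrapping one field), which is why they "took a while to build"; this sketch only shows the shape of the problem.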
Michael Berk (30:56.494)
Got it. Heard. Yeah. Interesting. Okay. Cool. Heard on the agile development and basically only building it if it's really needed. But the basis of that question was more around getting clear signal on the ROI calculation, because it seems like deciding when to bail is all about that calculation. It's like 99% of everything is about ROI calculations.
So as a TL, or a leader in a project, or even just someone working on a complex component, it sounds like you would sketch out the architecture, and that gives you a mental heuristic: less stuff in your brain and more on paper, so you can think more dynamically within the framework.
Ben (31:48.037)
Yeah, so I can visually look at it, and then I scope out work for each of those bits. Okay, how long is it going to take to do this? This is like an hour. What's this next component? Do I even know how to build that? And I start thinking through how complex it's going to be, mentally blocking code out in my head.
And then I come up with an estimate: okay, it's probably two days of work to figure this out. And if that initial design, what I'm thinking in my head, has some component in it that I've actually never done... It's not that nobody's done it before. People have figured this out, but I've never been exposed to it, and I don't know
what I don't know. So there's a big red box near that part: danger, I might be too stupid to figure this out. So that would be the first thing I try to do. Even if it's later on down in the component workflow of what needs to happen, I would spend a spike just researching that. And I should have something super quick and dirty that works in an afternoon.
And if I don't, and all I'm getting is roadblock after roadblock, like, I don't understand this, or, I don't even know if this is possible with standard public APIs in this language, like I might need to go into protected dev methods within the core language to solve this problem... If you're at that point, that's when you should think: do I need to bail from this?
Should we even be trying to solve this? Or is there a much simpler thing that I'm just not thinking of? Now I need to go talk to some people or do some searching online. Has anybody solved this and written about it?
Michael Berk (33:44.558)
Hmm.
Michael Berk (33:59.254)
Right. How do you deliver the we-need-to-bail message? Because for me, specifically in the field, we're given a statement of work, an SOW, with success criteria, and a customer pays, I think upfront, for it. I don't know, actually. But there's a legally binding contract that we're generally going to deliver what we say we will. And sometimes it's not that well scoped for a lot of the harder projects.
They're like, yeah, we'll figure it out on the fly. And it turns out you get into the problem and you're like, this is freaking impossible. So for me, I think it's a bit different. But in general, how do you think about framing that message of, this idea is cool, but let's not do it?
Ben (34:44.945)
I mean, we get those all the time. We come up with some of those ourselves. But the answer to anybody that comes up with an idea like that is: hey, do you have a free couple of hours to whip up a prototype? We want to see what this would look like. That forces the person who came up with the idea to actually think through how it would be implemented. Not what the final implementation would be, but: can I write some crappy Python script,
or a really quick Scala module, or whatever language you're writing in? Can I just demonstrate that this is even possible? And that code is entirely throwaway. You're never going to use it again. It's just showing the possibility. There could be some very rough edges, but you'll expose those rough edges and have an idea of what it's going to take to fix them once you go through that. But if you get to the point where you're going through that process and you're like,
hang on, this is really complex, and I don't know how long this is going to take, that's when you involve your tech lead, who hopefully has enough experience to realize, yeah, there's a different way we can handle this, and that one little feature is not going to be part of this implementation. Or, if it's the full feature you're talking about and the whole thing is way too complex, then
your tech lead should be the one who says, yeah, we're not doing that.
Michael Berk (36:16.718)
How do you navigate the politics? Especially in my world, I am the tech lead, right? And I have to go tell the stakeholder that this is a cool idea, but we don't have 10 million person-hours to solve the problem, or whatever it might be. So what's your one-liner, or your template, for phrasing that?
Ben (36:39.281)
We never accept work or agree to do work that we haven't scoped. You should never have any scoping work done by anybody but the technical person who's going to be executing it, or the leader of that work.
Michael Berk (36:55.33)
What if the scope is wrong? Which happens. Projects are hard. I'm not saying people don't scope well, but it's sometimes hard to do. So what if the scope is wrong?
Ben (36:58.8)
Hmm
Ben (37:09.017)
I mean, we make mistakes with that all the time, but it's more like we're in the weeds implementing something and we all of a sudden realize that one component of it is much more complex than we thought. Then you talk about trade-offs. Like, okay, since this is taking a week longer than we thought because we didn't know how complex this was going to be, what do we have to trade off now in our roadmap?
You drop other things. If it's like, this is super important what we're working on, so we're not going to get to these other things, those will be pushed later on. But we never get to the point where we have this work item on our list that we have to do for the quarter, and we get blindsided with something that's so complex we just can't deliver it. That's never happened while I've been working at Databricks in engineering.
We always know, like, is this even possible? At least in my team. But there's been plenty of things that we've done a prototype of, or tried to do a prototype of, before planning is done, where we're like, yeah, this is six months of work to get this done, and there's no way that's getting included in the roadmap. Unless it's such an important feature
that we'll then fund it with the appropriate resources.
Michael Berk (38:40.94)
Got it. Okay. We're coming up on a nice little summary, but before I wrap us, just, are there any other biases that you can think of that you see specifically junior people or even senior people be subject to that makes them stay in a project or a design path or whatever it might be for too long?
Ben (39:04.187)
I don't see it with senior people at Databricks, but I have seen it with senior people elsewhere: just hubris. They're so confident that this thing can be done, and they want to delegate the actual implementation of that to much more junior people. And they get very frustrated. Why isn't this delivered? I know how this is done. And then you ask that super senior person, where's your prototype? And
there is no prototype. You're like, were you just that confident in your elite skills that you would figure this out? You haven't proven this to anybody. I've seen behavior like that in the past at previous companies. And with junior people, I've seen...
Ben (39:55.535)
Depending on the organization, I see one of two behaviors. One is the overconfidence in ability. So just biting off way more than they can chew and promising stuff on a delivery schedule that they're just not capable of delivering. That usually happens with poor leadership in the team. Like their tech lead is not properly supervising them and giving them guidance about, here's what you should be doing.
Like don't try to do all of that. That's a mountain of data or a mountain of work that you're trying to tackle with basically a breakfast spoon. Like let's not do that. Let's work on this little molehill over here first and then we'll work on the next molehill. And eventually that mountain comes down. So I see that behavior sometimes like that sort of overconfidence. Maybe they've been great at everything they've done up until that point and they think that they're invincible.
Maybe not. Like, don't stake an entire project's success on one person's overconfidence in their own ability. It might not be the best learning environment for them to learn humility. And then the flip side for very junior people is the imposter syndrome, where they think that they're completely worthless and can't do anything, and they're kind of scared
to take on anything that's kind of ambiguous and complex. And those people need mentoring. They need somebody to believe in them. They need to be shown that they can figure stuff out. But don't give them that mountain to figure out. Give them things that are gradual. And you'll build up confidence.
Michael Berk (41:45.93)
It's funny, I do none of those. I'm like, I have my own biases that completely prevent me from being effective as a human, and it's none of those. Okay, cool. I'm ready to wrap. I'm feeling good about this one. You have any last things to say? Cool. Today we talked about when to bail. Often you're in a project that involves design decisions and design choices.
Ben (41:48.197)
You what?
Ben (42:03.141)
Yeah? Yeah?
Michael Berk (42:14.298)
And you as an implementer often have to choose which is the best route. And if you're the tech lead, you're responsible for the entire design as well. So choosing the right path can be really challenging, and also choosing what to do and how to spend your time. It all comes down to this ROI calculation. So before we get into that, just at a very high level, there are a bunch of biases. Humans are unfortunately fallible and don't always think in the most logical and critical way. First one is hubris:
try to be humble, try to understand that problems can be very hard, and you're probably pretty smart, but you might not be the smartest person in the world. And even if you are, it typically takes more than one person to get something done. From a junior perspective, being either really overconfident or having imposter syndrome and being very underconfident, both of these can lead to incorrect decision-making when deciding whether to bail or not. And then for me personally, the things that I have seen myself
be subject to and usually cause me to stay in for too long. The first one is being attached to my code and specifically caring about the implementation. So if I spent a lot of time getting it to work, making it pretty and cool and fast, then I don't want to throw it out. Second one is just generally the sunk cost fallacy. You're not getting that time back, so suck it up. There's no reason for you to continue on a path that will not work.
just because you spent a week, a month, a year on something. And then finally, be aware if you're stubborn. I'm tremendously stubborn, and that has made me work on things that I probably shouldn't.
So back to the ROI calculation. Again, the return is going to be subject to whatever you're working on, but the investment, you've got to think about it holistically. So in the Unity Catalog example, return is downloads and usage. Investment is developer time, CI cost, package size. All these things are really, really important to think about. So, general tips to properly assess this ROI calculation. If you're prototyping, mentally prepare. Be ready to drop it.
Michael Berk (44:24.626)
And also make sure that you're actually being thorough with your prototyping. You don't want to get surprised by an actual production implementation. As we alluded to, the North Star of your decision making should be, do people use it and does it solve their problem? A quick tip is if you are really heads down on a complex implementation, go take a walk. For whatever reason, context switching to something that distracts you lets the subconscious work and you come back refreshed.
usually with a fresh perspective and better ideas. And then finally, if you are delivering the "we have to bail" news, frame it as a trade-off. You have 40 hours a week, or however long you work; you can only do so much in that amount of time. So tell your executive or your stakeholder: what do you want to remove? If you want this thing to be delivered end to end, we have to remove a bunch of other things. It's just a trade-off.
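The ROI framing above can be sketched as a quick back-of-the-envelope calculation. All the numbers, weights, and the bail threshold below are made up purely for illustration; the point is just to force return and investment into comparable units before deciding whether to keep going:

```python
# Hypothetical back-of-the-envelope ROI check for a prototype or feature.
# Every number and threshold here is invented for illustration only.

def roi(expected_return: float, investment: float) -> float:
    """Return per unit of investment; higher is better."""
    return expected_return / investment

# Return: e.g. projected downloads/usage over the next quarter.
expected_return = 5000  # projected downloads

# Investment: developer time, CI cost, ongoing maintenance, etc.,
# normalized into rough "cost units" so they can be summed.
dev_hours = 120
ci_cost_units = 30
maintenance_units = 50
investment = dev_hours + ci_cost_units + maintenance_units  # 200 units

if roi(expected_return, investment) < 10:  # arbitrary bail threshold
    print("Consider bailing: return doesn't justify the investment.")
else:
    print("Keep going, but re-check after the prototype.")
```

The useful part isn't the arithmetic; it's being forced to write down what the investment actually includes (CI cost and maintenance, not just coding time) before committing.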
Anything else?
Ben (45:23.365)
So great summary.
Michael Berk (45:24.728)
Cool. Until next time, it's Michael Berk and my co-host. And have a good day, everyone.
Ben (45:28.059)
Ben Wilson, we'll catch you next time.