5 - How to Lead Engineers with Ben Johnson
Michael Berk (00:00.947)
Welcome back to another episode of Freeform AI. My name is Michael and I do data engineering and machine learning at Databricks and I'm joined by my lovely, wonderful and amazing and stylish co-host.
Ben (00:12.111)
Stylish, that's funny. Ben Wilson, I fix bugs in open source packages at Databricks.
Michael Berk (00:20.637)
Damn straight. So today we're speaking with Ben Johnson. Ben started his career at Experian and then moved to two other travel companies acting as director and VP of engineering. Since then, he has held a variety of engineering leadership roles, including director of software at LegalZoom and most recently founder and CEO of Particle 41, a software services firm. So Ben, you've worked in travel, investing, transportation, law and more.
Ben Johnson (00:20.791)
Awesome.
Michael Berk (00:48.244)
And my first question to you is, what are the consistencies in good software practices between all of these industries?
Ben Johnson (00:54.922)
Yeah, the consistencies, there's two. There's the customer and the team. You know, the customer is going to be the constant in any business we work with, so the more you know them, the more you talk to them, the better your products will be. And then, of course, the team, the team that's going to be shoulder to shoulder with you in the trenches, getting it done. And so I think the focus on customer and the focus on team has definitely been there in every situation.
Michael Berk (01:23.507)
What does that mean?
Ben Johnson (01:25.952)
So, man, you're really gonna press, I love it. So when it comes to customer focus, it just means being a good listener. Not building... you know, many times we look at it as, these are some pretty cool toys we have access to, how can we make those into cool offerings?

But the customer is like, I just need to get to the next destination. So for us, focusing on the customer is about stage-appropriate decisions. I'm sure as you guys are interacting with customers at Databricks, there's customers at all different stages. You don't want to be building a spaceship when they just need some BI reporting. They just need visibility around their data. You don't want to be talking about crazy data warehouses and data lakes if they just need some data visibility. That's just one example. And then on team, it's really focusing on our culture. And I know that's going to be a, what do you mean by culture? People like us do things like this. People like us don't do things like that. And then letting the team fill in some of those blanks.
And then of course, there's one thing I do micromanage and it's the culture. I want to make sure we're all aligned on what are those things that we say we do? And then how do we make sure we're being who we say we are?
Michael Berk (02:46.919)
Got it. OK, 50 questions are coming. First one, you mentioned you like to build what a customer needs. And we see this all the time in the field where executive XYZ says, use GenAI to, I don't know, do something. And we're like, OK, we can. We can create a statement of work that uses GenAI to do something. But what are we trying to do? And then there's this back and forth where we're often trying to, especially with GenAI, fit a solution to
some unknown problem. How often do you get that and what's your response?
Ben Johnson (03:20.536)
So, all the time. In some contexts, it's become the easy button. Like, I want to use AI to do X, and then that's supposed to make it not complicated and super easy, and we're just supposed to integrate an LLM. And the LLMs really can't do anything on their own. They don't have any action steps.

So we're seeing that what they really mean is, I want to speak or type to a bot and I want it to do stuff. And so I think everybody is scrambling for the text-based wrappers for different workflows. And yes, we're doing this whole AI workflow thing. But I also see... and we've got to come up with a better acronym than RAG. Like, we just have to think better about that, you know?
Because this idea of taking your proprietary data and vectorizing it and then leveraging it through text, that's hard. That's hard to see: well, how are people going to use this? What are they going to want to do? We worked with an e-commerce customer, and I'll be a little bit vulnerable here. We put their product catalog in a RAG so we could start playing with it. And of course we hit style, brand, the category, we hit all the obvious things. But in our initial version, we missed dimension and price. And this was for a furniture commerce company. And dimension and price were what the users wanted to use the most. So we just had to do another iteration and get those things integrated into the vector database.
But I think that's what CEOs and executives are missing is that we're going to come back to you and we're going to say, okay, well, where's the secret sauce? Where's your data?
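The furniture-catalog iteration Ben describes, folding dimension and price into the index after users asked for them, can be sketched roughly like this. Everything here is hypothetical: the `embed` stub stands in for a real embedding model, and the product fields are invented.

```python
# Sketch of the catalog-indexing lesson: the first pass embedded only the
# obvious attributes (style, brand, category); the fix was to fold dimension
# and price into both the embedded text and the filterable metadata.

def embed(text: str) -> list[float]:
    # Placeholder embedding; a real system would call an embedding model here.
    return [float(sum(map(ord, text)) % 997)]

def index_product(product: dict) -> dict:
    # First iteration embedded only this text:
    base_text = f"{product['name']} {product['brand']} {product['category']} {product['style']}"
    # Second iteration: include what users actually query on.
    full_text = f"{base_text} {product['dimensions']} ${product['price']}"
    return {
        "text": full_text,
        "vector": embed(full_text),
        # Structured metadata so dimension and price can also be hard filters,
        # not just fuzzy-matched inside the embedding.
        "metadata": {"price": product["price"], "dimensions": product["dimensions"]},
    }

entry = index_product({
    "name": "Oslo sofa", "brand": "Acme", "category": "sofas",
    "style": "mid-century", "dimensions": "84x35x31 in", "price": 1299,
})
```

The point is not the embedding itself but the iteration loop: watch what users ask for, then widen what gets indexed.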
Ben Johnson (05:15.84)
And then if your users want to speak to that data, what language are they going to use? We have to know those things before we're really going to be able to integrate an AI with that and make it useful for you. So that's what I'm running across: at the base of these solutions is, I want to do stuff with sentences, and I have some proprietary data that I need to unlock into the vector-embedded solution. I don't think it's more complicated than that. I mean, otherwise, we're still in this exploratory phase of what AI means to everybody.
Ben (05:57.933)
How much have you seen with the genesis of, like, starting effectively a prototype? Like, for example, for the RAG agent, you're like, okay, I'm going to expose a public API for my users on my website to chat and find products based on, you know, attributes associated with them. What are you seeing a company like that do?
Michael Berk (05:57.949)
Got it.
Ben (06:27.479)
actually getting ready for production, deploying it, then basically use their monitoring data to say like, what questions are people asking and how effective are these responses? Is this actually engaging a sales funnel? And is that better than our standard browsing experience?
Ben Johnson (06:34.657)
Yeah.
Ben Johnson (06:46.454)
Yeah, if I could be so bold as to kind of spitball what I think the general progression should be: put that chatbot on the website for your frequently asked questions. Just get it out there. People are talking to it. People are going to ask it questions that it's not prepared to answer. I think that's okay.

But you could solve a really easy problem, which is speed to lead, with just a simple bot that answers questions before people schedule a meeting with you. So even if it's just speed to lead, it's just a meeting setter that answers some of their frequently asked questions and gives them a Calendly link. That in and of itself, I think, would benefit a lot of small businesses, and then even larger businesses, because then they're going to be able to capture... Let me go a little bit off the rails here too. The medallion architecture that you guys are probably really familiar with, the bronze, silver and gold, I've been completely taking that and applying it to the AI data set,
or the AI problem set, which is to say: your internal people and your customers, they're asking questions over email, or they're giving you questions, and you're just orchestrating it all yourself. I call that bronze. You're just using ChatGPT to articulate an answer back, but you're doing all the orchestration. As a business, you're just kind of giving ChatGPT to your crew and you're saying, go figure out workflows, go figure out stuff. So now your people have the LLM as a tool, but they're putting stuff in and they're taking stuff out and giving it to the customer. You have no monitoring. It's just kind of the wild west. And so I call that bronze. And then what people are doing is, man, even for us as we...
Ben Johnson (08:49.582)
In sales, maybe you'll grab the LinkedIn profile, you'll put it into ChatGPT, you'll talk about your last couple of conversations, maybe you take your transcripts from recordings, and you're just using it to help you draft communication and send messages. This is very much a bronze-layer kind of construct, where you're pulling in what data you think is relevant, you're kind of manually doing the retrieval, putting that in there and getting messages out. But your business has no idea. It's not capturing that. It's just going to require you to tell them what you're doing.
And so I think that's the bronze world that we're all living in. Silver starts to become where I'm wrapping the LLM. I'm retrieving some data automatically based on a set of use cases or a set of functions that I want to do. And the beauty of at least moving to silver is that now I can start to log inputs and outputs, and I can start to learn something from this silver-level idea.
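Ben's "silver" step, wrapping the LLM so retrieval is automated and every input/output pair is logged, might look something like this minimal sketch. The `call_llm` stub and the log structure are assumptions, not any particular vendor's API.

```python
import time

LOG: list[dict] = []  # in production: a database or tracing system, not a list

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; no specific provider's API is implied.
    return f"(model response to: {prompt[:40]}...)"

def silver_answer(question: str, retrieved_context: str) -> str:
    """'Silver' per the discussion: retrieval happens inside the wrapper,
    and every question/context/answer triple is logged for later analysis."""
    prompt = f"Answer using only this context:\n{retrieved_context}\n\nQ: {question}"
    answer = call_llm(prompt)
    LOG.append({
        "ts": time.time(),
        "question": question,
        "context": retrieved_context,
        "answer": answer,
    })
    return answer

silver_answer("What are your hours?", "Open 9-5, Mon-Fri.")
```

The log is the whole point of silver: once questions and answers are captured, you can see what people actually ask and how well the bot answers, which is what bronze never gives you.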
Gold to me would represent where I've actually delegated a certain amount of work to the agent. Think first-tier customer support, where I've loaded up a bunch of data, and as people ask questions, they're getting some level of tier-one answers that go out to the customer, and you start to trust it to do that. And you know that the AI agent is going to escalate to you for tier two. And then as you get the tier-two inputs, you're adding more to the agent and you're expanding the definition of tier one into tier two. So I think that's the process most people are going for. The problem is, for many businesses, the bronze is increasing productivity and helping people. It's kind of like the crack that gets you hooked. But if you don't start to invest in it a little bit more,
Ben Johnson (10:53.368)
then you'll really never know. It'll stay in that wild west. And that's when you get customer support people sending out emails that say "Dear [Name]," and have really long, verbose responses to a transactional question, right? And then the customer is going, what is this? I didn't want to read a one-pager when I asked for your operating hours, or, you know, things like that. So I hope that answers your question, but that's how I'm articulating in the market what people are going through, and trying to get a little deeper into: what questions are your customers asking? How do you want to answer those? What data do you have that needs to be taken into account when we're answering those questions? And how can we create something that's a gold-layer tool for you that's really integrated with the LLM, but where you're adding a lot of proprietary data to that workflow?
Ben (11:55.523)
Yeah, it was largely a self-serving question. Some of the things that I work on are in support of that gold layer. Stuff like secure tool function calling: create an agent that has the ability to fetch from a vector database as one instruction set. And then another one is, well, I need to actually pull data from LinkedIn. Well, I'm going to write a Python function that scrapes that and then puts it into some sort of structured text format that...
Michael Berk (11:55.549)
Howdy.
Ben (12:25.455)
an LLM can really digest and draw insights from. And then I have another tool that's like, I want to pull Slack, and I want to get my conversation history with this customer because they're in our Slack channel. So pretty much, you name it: anything that you can interface with through a REST API, or even internal functions where you just want to say, I have all this raw data coming from my actual sales data for this customer. I want to fetch that data, and it's just a massive amount of data, perhaps. Maybe it's at a per-order level, per SKU that I have. Well, I want some sort of intelligent inference about, what should I recommend that we discuss on our next call? Maybe I'm a scientific tooling company, I'm selling, you know, SEM machines. And I want to know, should we...
Ben Johnson (13:11.789)
Yeah.
Ben (13:21.539)
double down on talking about this thing, or is there an adjacent product that other people are discussing? What is the topic of our next meeting? And building those... I see the same thing there. I really do like your allusion to the bronze, silver, gold. In that bronze layer, yeah, most people are opening up ChatGPT or Anthropic's Claude and interfacing manually. Then there's building something that's deployable:
You can trace all of it. You can collect that data and evaluate, what's the performance of this basic framework that I'm creating here? But then moving to that final phase, what we're seeing is that
the people that are doing the bronze aren't the people who are capable of building the gold. So then you need a software engineer, or a team of software engineers, to get involved to do that. Do you see the companies that are building this infrastructure and tooling moving more towards, you know, a common man's ability to create that gold layer? Do you think that's valuable, from that part of the consulting business that you run?
Ben Johnson (14:11.31)
Correct.
Ben (14:34.967)
Or do you see it as, no, we're still going to have the nerds be involved here to make sure that this is robust?
Ben Johnson (14:40.33)
No, I definitely think the nerds still need to be involved, because what you're doing is increasing the scope of your data pipelines, like, exponentially. You used to just be able to say, hey, I want some data pipelining from my different operational tools, my different marketing tools. And those were kind of separate: hey, I want to be really good at the data around how I acquire, and then the data around how I serve customers. Now you're talking about all the internal tools that you use to communicate with the customer. It's not just about a 360 view in the CRM. It's about exposing all that data. And then the idea of even thinking about Slack chats and emails, those are all new data pipelines. And then when we add on top of that that the AI agent can do stuff,
that has workflow steps, man, somebody needs to be focused on: what are those controls? Because if an AI bot is told to do some stupid stuff, it's going to do the stupid stuff forever, until you turn it off. And so I imagine that we're going to have, you know, primetime news about, oh, this AI agent went and did a million things, and those million things happened in five minutes, and it wasn't supposed to do that, and that caused a lot of second- and third-order issues. So if we don't get some nerds in there to think, well, what could go wrong, we're going to start making a lot of mistakes. And those mistakes are going to really impact
the downstream businesses from whatever we're putting together. For our company,
Ben Johnson (16:32.822)
Recruiting is something that we've internalized. We're really dialed in on our selection criteria for the teammates that we add. And so, yeah, if I have an AI-based resume analyzer, it's not that complicated. It's just comparing the resume to the job description, scoring it, and collecting some information that helps searches in the future. And yeah, I put two threads on that thing, and now I'm getting all kinds of rate limiting from my applicant tracking system, and I'm running up the bill on Anthropic. I need to have controls. I need to really think through, how aggressive do I want some of these agents to be? And if I don't have those limits in there, then yeah, there will definitely be commercial impact.
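The controls Ben mentions, a cap on request rate plus a hard spend limit, can be sketched as a small budget guard an agent must consult before every call. The class name and the numbers are illustrative, not from any real system.

```python
import time

class AgentBudget:
    """Guardrails for the resume-analyzer scenario: cap requests per minute
    and total spend, so a runaway agent can't hammer an API or run up the
    model provider's bill."""

    def __init__(self, max_calls_per_minute: int, max_dollars: float):
        self.max_calls = max_calls_per_minute
        self.max_dollars = max_dollars
        self.window_start = time.monotonic()
        self.calls_in_window = 0
        self.spent = 0.0

    def allow(self, estimated_cost: float) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 60:
            # New one-minute window: reset the call counter.
            self.window_start, self.calls_in_window = now, 0
        if self.calls_in_window >= self.max_calls:
            return False  # rate limit hit: back off instead of hammering
        if self.spent + estimated_cost > self.max_dollars:
            return False  # hard spend cap reached
        self.calls_in_window += 1
        self.spent += estimated_cost
        return True

budget = AgentBudget(max_calls_per_minute=2, max_dollars=1.00)
results = [budget.allow(0.40) for _ in range(4)]  # [True, True, False, False]
```

The agent calls `allow()` before each model or API request; when it returns False, the agent waits or stops rather than doing "a million things in five minutes."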
Michael Berk (17:24.467)
Both of you talk to executives a lot. How do you accelerate the trust of a public-facing AI agent?
Ben Johnson (17:35.865)
By putting those limits in place, talking through what is the worst that could happen, and then having clear human-in-the-loop rules. I want to know, at what point... The same thing happened in RPA. Most RPA bots were 60 to 80 percent successful, and then you had to have a human in the loop to resolve exceptions. Automation is still automation in the sense that it needs to have the error handling and the guardrails. I don't know that executives are really thinking about that when they're pressing the easy button. And so that's part of the education process.
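The RPA-style human-in-the-loop rule Ben describes, automate the confident cases and escalate the exceptions, reduces to a confidence threshold. A rough sketch, with a stubbed classifier standing in for the model:

```python
def classify(ticket: str) -> tuple[str, float]:
    # Stand-in for a model call returning (proposed action, confidence).
    if "refund" in ticket:
        return ("Route to billing", 0.95)
    return ("Unsure", 0.40)

human_queue: list[str] = []  # exceptions a person will resolve

def handle(ticket: str, threshold: float = 0.8) -> str:
    """Automate only what the model is confident about; everything else
    goes to a human instead of being answered by a guess."""
    action, confidence = classify(ticket)
    if confidence < threshold:
        human_queue.append(ticket)
        return "escalated to human"
    return action
```

Like the 60-to-80-percent RPA bots, the value is not full automation but a clear, auditable rule for when the bot is allowed to act alone.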
Michael Berk (18:15.751)
Yeah.
Michael Berk (18:19.431)
But more so for the fact that everything can happen in five minutes, as you said. And it's fundamentally a non-deterministic implementation. So if it's a Ford chatbot, it could go recommend a Chevrolet. How do you guard against those concerns? Because executives, I'm sure, read those types of articles.
Ben Johnson (18:39.648)
Yeah.
I mean, I don't have a magic answer there, other than the prompt engineering and thinking about, what do I not want this bot to do? How do I put some constraints on it? But that's a great question, in the sense that what many people are doing with AI is really a narrow problem. They're trying to solve a very narrow problem, but the LLM is a very wide solution. And so we saw that, especially in legal, with the hallucinations around case law. This was an early story in AI where, like, a lawyer got disbarred, and I think we use it as the hallucination use case. Really, they needed to have a RAG with case law, and the prompt needed to say: you don't pull anything from anywhere else, you don't
Ben (19:21.358)
Yeah.
Ben Johnson (19:39.06)
justify your arguments with anything other than what I've given you. And that needed to be part of the prompt, and the law firm needed to have a retrieval system for case law. They didn't have that. They just had ChatGPT try to argue a case, and it made up case law. And the judge was basically like, no, you don't, you are out of here, I never want to see you again. And so what we also see is that
the AI is causing folks to really think about automation, and to think about being able to do things with text, with sentences. And then we go, well, we can actually do a much more accurate job with just traditional programming here: looking for keywords, doing taxonomies, and keeping it in the world of traditional programming, not integrating with an LLM, and your accuracy will be there. Sure, it'll be a narrow use case, but the moment we include the LLM, now we have to protect against kind of the whole world of information that could go into these conversations. Is that really what you want? And so, yeah, I love that it's getting people to be creative. But we still find that a lot of the problems are just traditional programming problems. The creativity was just reached in a different way.
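The "just traditional programming" alternative Ben describes, keywords and taxonomies instead of an LLM, can be as small as this deterministic intent router. The intents and keyword lists are made up for illustration:

```python
import re

# A narrow intent taxonomy: fully deterministic, trivially auditable,
# and immune to hallucination, because there is no model in the loop.
TAXONOMY = {
    "hours": {"hours", "open", "close", "closing"},
    "returns": {"return", "refund", "exchange"},
    "shipping": {"ship", "shipping", "delivery", "tracking"},
}

def route(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    for intent, keywords in TAXONOMY.items():
        if words & keywords:
            return intent
    return "unknown"  # only here might you fall back to a human, or an LLM
```

For a narrow, well-understood problem this is both more accurate and cheaper than an LLM; the model earns its place only where the input genuinely can't be enumerated.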
Ben (21:05.901)
Yeah, we see the exact same thing, that level of fear not being regarded, when people are like, I have this amazing project that I want to ship, and I'm going to put it on my app or my website, but I need accurate data in order to answer these questions. And I'll always be that white-hat hacker in the room saying, okay, you give the LLM access to your data set.

What's in your data? Oh, it's all of the bookings for our travel service. Do you have PII data in that? How do you identify who the actual user is who's interfacing with the bot, and link that up only to their data? Is that part of the query? Like, have you written a tool interface to your data for fetching data, such that it'll return null if the user doesn't have any data in there? And you look at the prototype, and you're like, no, it's a select star from this data set. So, can I test this out real quick? And I just start asking questions. I'm like, who's the top booker of these travel packages in the last six months? And it gives an ID. And I'm like, okay, I don't know who this is, but next question: where did ID number whatever go in the last six months? It'll list out all of their bookings. Did they fly first class or did they fly coach? It answers with some data. And then look at what the next tool is, which is basically figuring out what the departure and arrival airports are for the travel. Take that ID and say, for customer ID 123456,
Ben Johnson (22:40.983)
Right.
Ben (23:04.707)
where were their departures most common from? Like, okay, now I can see where they live. You can start basically hacking the system to identify characteristics about the data that this thing has available, and then start having it do what it does best, which is hallucinate and make stuff up. But it can infer things. It's very good at that, particularly if you give it some rich data. And you start having that conversation: do you want to provide some sort of invoker associated with this tool, and make sure that you're not giving it the ability to just execute arbitrary SQL against your table? And then furthermore, that conversation goes into: do you have protection so this is only allowed to run read-only queries? When it generates that query, what are you passing through to it? I saw somebody do it in a call a couple of months ago, where they wrote a tool that was basically like, hey, generate SQL to query this database.
Ben Johnson (23:41.454)
Right.
Ben (24:04.713)
And you could very easily get it to do stuff like drop data, or, hey, add a thousand rows of data. You have no idea what some user of this interface is going to do to your system. So that public-facing stuff is scary if you're not thinking it through, bringing in traditional software engineers to come in and have their say: here's all the stuff we've learned in 50 years of information security.
Michael Berk (24:20.036)
man.
Ben (24:33.999)
Let's make sure that's applied to this as well. Like you said earlier, it's not just the easy button. There's a lot of complexity that goes into this.
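One way to apply that information-security thinking to the travel-bot example: never let the model generate SQL at all. Give it a fixed, parameterized tool scoped to the authenticated user, so "who's the top booker?" has nothing to leak. This is a sketch with an invented schema, using an in-memory SQLite database for demonstration.

```python
import sqlite3

def safe_booking_lookup(conn, authenticated_user_id: int, limit: int = 10):
    """Guarded data tool: the LLM can only invoke this function. The query
    text is fixed, inputs are bound as parameters (no string concatenation),
    and rows are always scoped to the authenticated caller."""
    query = (
        "SELECT destination, fare_class FROM bookings "
        "WHERE user_id = ? ORDER BY booked_at DESC LIMIT ?"
    )
    return conn.execute(query, (authenticated_user_id, limit)).fetchall()

# Illustrative schema and data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bookings (user_id INT, destination TEXT, "
    "fare_class TEXT, booked_at TEXT)"
)
conn.execute(
    "INSERT INTO bookings VALUES "
    "(1, 'LHR', 'coach', '2024-05-01'), (2, 'JFK', 'first', '2024-05-02')"
)
rows = safe_booking_lookup(conn, authenticated_user_id=1)
```

Because the user ID comes from the session rather than the prompt, the model can't ask about someone else's bookings, and because there is no generated SQL, there is nothing to inject a DROP into.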
Ben Johnson (24:36.536)
Yeah.
Ben Johnson (24:41.61)
Yeah, my concern too, just being in the market and partnering with folks, is that there's, I'll say, a new breed of consultants, these AI consultants, that have not been part of the internet conversation for the past 30 years. They've been very adjacent to it, and they're the ones advising many of those executives. So that's maybe a little bit of, arguably, an ego thing for me. Like, my first company, we racked and stacked our own hardware. We didn't have the cloud. And so folks that were around pre-cloud, and had to really be part of how the internet works, how data works, you know, have gone through some of these evolutions of SQL and NoSQL and just kind of been part of the story. And then we have some young AI consultants that are just kind of like, I'm not a programmer, I'm a "flow-grammer," these kinds of things. I get really concerned about the havoc that they're going to wreak on businesses. And maybe there's a little bit of ego there, but there's just...
Michael Berk (25:46.372)
The hell?
Ben Johnson (25:55.694)
There's a lot of tribal knowledge that we've accumulated over the years. And I get a little bit concerned about amplifying the "AI is easy" or "AI makes everything easier" message. It's just really going to elevate the risk involved in some of the solutioning.
Michael Berk (26:13.947)
I have a question for both of you, because both of you have a wealth of software engineering knowledge. Do you think it's more efficient as a society, specifically with AI, to move fast and break things? Or should we be really cautious? Or is there some sort of middle ground?
Ben Johnson (26:34.126)
I think when people talk about move fast and break things, also implied in that thought pattern is a small, iterative thing. It's not move fast and break things and then we do a lot of big things. It's move fast and break things, which means I'm going to take smaller steps. My stride will be shorter so that I don't trip.
Michael Berk (26:47.187)
And ship out nuclear codes, right?
Ben Johnson (27:03.104)
And so, you know, we try to keep customers away from big bangs all the time, because they're just fraught with peril. So I'm fine with move fast and break things as long as we're taking small steps, not move fast and break things with a lot of larger initiatives. You know, a few years ago, five years ago, I had helped a client move to NoSQL, and then I helped them move back. Why? Because of the effort that turned into on the reporting side and the data intelligence side; the juice wasn't worth the squeeze. The application developers loved having less schema. But then when they went to the data side, they were like, shoot, this isn't quite what we wanted. And so I've seen a lot of those, where we rushed to use a new thing, and then we learned what the pros and cons were, and then we had to switch back to something more traditional. So I think a lot of that's going to happen here too, as the AI hype curve diminishes a little bit.
And I kind of think of it like other big leaps. You know, the cloud coming online was a huge leap. People were like, you don't hire a DBA anymore; somebody who's managing storage and, you know, messing with physical infrastructure, those IT roles have turned into more of the cloud engineering roles. So there's been some natural evolution. But I do think brands are probably thinking about their AI space kind of like the mobile space. Kind of like, okay, well, now I have to go build things in a certain way so that I can reach people
Ben Johnson (28:59.586)
through mobile. So I think you'll see, okay, what is my web presence? What is my mobile presence? What is my AI presence? And they'll kind of think of that as almost like an additional client or an additional medium for exposing their brand. Yeah, that's my two cents anyways.
Ben (29:19.619)
Yeah, when I think about move fast and break things, that means something very special to me as, like, a framework developer. We try to embrace that in a lot of different ways internally. So before we release anything to the wild, we're doing prototypes, figuring things out. Like, we might have a hackathon idea: hey, it'd be super great if we built this thing that does all this cool stuff. Nobody's going to tell anybody, don't do that. It's like, take two days, see what you can come up with. That'll help us learn: is this a stupid idea, or is this insane, or is it super insecure, or what should we actually release first? And we'll break things along the way while doing that. But it helps us avoid the YAGNI paradox of a great idea developed in a vacuum, where you're gonna build a whole bunch of stuff that nobody cares about. So how do you trim that down and really religiously apply, you know, agile principles? Build this minimum viable product. We know it's not gonna solve all the problems that people have. We know there are features that we're leaving off the table; we have to, in order to get to market, figure out product-market fit, and then learn from that. And that breaking is, like, don't release broken junk. It's more like, release something that's kind of half-baked and then get feedback. And if what we had on our design doc, the features we said would be really good to have but weren't part of the MVP, if we're getting comments about those features, we're like, yep, we were aligned, we were on the right path. But sometimes that doesn't happen. It's like every request that you get is for this thing that we said we won't have, or this is way out of scope, or we just don't think people care about this. Then you pivot and be like, all right, we need to focus on this thing and maybe adjust what we're releasing. I think that approach to move fast and break things is kind of agile software development, in my opinion. But the risk with GenAI is that you can get from zero to what you think is hero way faster, without thinking through, like,
Ben (31:44.995)
Are we building an abomination or something we're going to regret?
Ben Johnson (31:49.421)
Right.
Michael Berk (31:51.815)
Hmm. OK. Do you think GenAI is fundamentally different from prior technologies that accelerated the speed to result? Or is it just another iteration?
And I can elaborate if that doesn't make sense. Okay.
Ben Johnson (32:12.088)
Well, it'll definitely improve speed to result. I don't think there's any doubt on that. I still think, at least for our practice, it still needs to be very cybernetic. We're not talking about apps being built without human interaction. So I don't know if it's another iteration, but I think it's another medium. Doing stuff with sentences and speech, I think that is probably around for a while. And, you know, there's kind of a race to that part: how does my brand collaborate with customers over text?
Michael Berk (32:59.924)
Okay, cool. So pivoting a little bit. Ben, you have worked with a bunch of engineering leaders, and I'm curious: what are the tenets and traits that you have noticed make up an excellent engineering leader? Not a good one, but the top 1% of leaders. What are they like?
Ben Johnson (33:19.086)
So it's interesting. In engineering leadership this is actually more rare, and it'll sound cliche, but it's the care for people: that people are not just tools of performance, they're human beings. And an engineering leader that can say, this is what I expect of you, and be really clear about it. I call it conversation one and conversation two. Conversation one is: hey, this is what I expect of you, this is who we are as a team. It's the conversation every leader needs to have with the people that they're leading. And then conversation two is: hey, what's going on? I care about you. You're not an object of performance, you're a human being.
And sometimes it's really hard for human beings to hit this set of expectations. I know we expect a lot. So, what's going on with you? And I think it's somebody who can figure out what's occurring to people, because people will perform based on what occurs to them. And so as you're caring for people, you're discovering how they think and what occurs to them. And then you can redirect them toward increased capacity and greater capabilities. But if you don't care about them, that'll never work. And so you won't be successful in conversation two, which is where you're guiding people towards better performance. And what many leaders will do is they'll say, okay, conversation one, I've set the expectations; okay, people are falling short of the expectations. And they just won't get into conversation two in a meaningful way, in a caring way. And then imposter syndrome increases. And that's my largest enemy: imposter syndrome within my teammates. I want to challenge them, have them overcome the challenge,
Ben Johnson (35:17.506)
and therefore realize that they're capable of more things. And that feels really good. But oftentimes we just go, okay, well, I'm gonna tolerate it. And I don't mean tolerate as in let bad behavior go; I mean tolerate, which means I'm gonna lower the standard. And so the unwillingness, or the lack of capacity, to have conversation two and care for people turns into just a lowering of the standard over time. And then the leader will inevitably not be successful.
Michael Berk (35:52.305)
How would you advise an engineering leader who potentially isn't exhibiting caring behavior to start caring more, or at least seeming like they care more?
Ben Johnson (36:02.446)
That's a tough one because there's a self-deception in leadership. There's a self-deception that, no, I'm telling everybody what's expected of them, therefore they should do what is expected of them. So one of the things we always say is it's my responsibility to be understood. As a leader, it's my responsibility to be understood.
But many leaders are deceived that explaining it one time is good enough, or that it's the people who work for them, work with them, who just don't understand. And they kind of land there and end there. It's a difficult question because all of us have it: there's this natural self-deception that you're a good leader. It's like alcoholism, the disease that tells you you don't have a disease. And that's kind of the same with poor leadership. Poor leadership is the disease that tells you you're already a good leader, that you don't have anything to work on. And so I think maybe the real enemy here is your own self-deception.
Michael Berk (37:18.321)
Yeah, it's interesting you bring up communication. One strategy I use really often with my teams, and it's super effective, is to just ask them to summarize in their own words what you have told them. It sometimes sounds a little bit trite, so you preface it with, I know this might seem simple, but please just do it so that I can sleep at night knowing that you understand. And it works really, really well, because if they can articulate it back to you,
Ben Johnson (37:40.43)
Sure.
Michael Berk (37:46.099)
then they've understood, and that's the check you need to know that you're being understood. And if not, it'll immediately show their gaps, and it's usually a 60-second exercise that checks that off. So yeah, just food for thought there. But Ben Wilson, I'm curious, how do you approach this?
Ben (38:03.791)
I 100% agree with everything you said, Ben. I could not agree more with that take. That identifies good technical leadership versus, let's just say, substandard technical leadership. And it really depends on the team that you're on, what culture you've built. And if you do it right and foster that, you can get into a situation where all of the people who are, air quote, under you as a tech leader, they all have agency. I think that is the number one responsibility of a manager: to determine what ignites the passion in people. Is each individual on my team working on the things that they want to work on?
'Cause if it's just top-down direction of, here's a list of OKRs that we need to hit, and I'm going to assign tasks to people, you're going to burn your team out. And probably not due to pure stress. It could be, if you have a high operational velocity, but it could just be from boredom. There are certain tasks in software development that some people absolutely hate. Like, I don't want to build another DAO layer to the backend database. I've built 47 REST APIs in the last quarter. I hate this. That person might not say that if you're not asking, hey, did you really like these projects you were working on? And if you have a good enough rapport with your people, they'll be honest. They might be like, actually, yeah, I really liked this. Do you want to do more of it? Yeah, this is super fun, I love the structure of this. Then you ask somebody else who was doing a similar project, and they'll be like,
honestly, there's this other thing that I was thinking of. And once I hear something like that in a one-on-one with somebody, I'm like, all right, let's open up a notes doc, and I want to hear what you think. Just tell me, stream of consciousness, let's talk about it. And I don't care about the project that they're talking about. It could be really cool, and that's definitely filed away for, hey, maybe we should think about this and do a proper prototype. It's more listening between the lines.
Ben (40:26.551)
Like, okay, this person's super excited about this area of something that we could do. What other projects that I know we have to do in the next six months would this person be perfect for? They might not be fully qualified to do it right now, but they'll grow into it if I start them thinking through it now. And when you give people that agency, you're just like, hey, go play jazz. Be creative, do the thing that you really enjoy.
You'll see team operational velocity go through the roof, and people just don't want to leave. They love the team, and they all start getting along a lot better. Like, this is the person who's the expert on this, and I'm going to ask for their advice on this thing. And you get a lot better team cohesion, and people that are, I wouldn't say cult behavior, but I have seen stuff like that, where everybody really loves the product that they're working on.
Even though to an outsider they might be like, yeah, that's cool, but I don't know, why are they so excited about doing this thing? It's because they're just passionate about the type of work that they're doing. That's my take on that.
Ben Johnson (41:38.442)
Yeah, what comes up for me there is this idea of working a certain percentage in the profession versus on the profession. So I think what we see is this idea of the abstraction: we're engineers, but what does it mean to be a good engineer? And having that come from the team itself, so that they're not just working on the work, but they're working on how the work is done,
and they have a say in that. I think that's also inspiring: okay, we're not just doing the project, we're organizing the project, we're creative through it. And I like what you said about
different types of work and having the team kind of decide how they want to organize that. I think that's really essential. As a professional services firm, my critical principle is that the team makes the commitment to the work. There's not some architect and some project manager deciding that this is all going to be done tomorrow and the team didn't make that commitment.
So the commitment coming from the team, I think, is also critical.
Michael Berk (43:02.333)
Cool. That makes a lot of sense.
Ben (43:05.251)
Yeah, it's super important. If you don't have buy-in from the people who are hands on keyboard, good luck. The project's going to get delayed, or people will just go along with it. I've seen it, not at my current company, but at previous companies. It's usually a product person who's like, hey, we have this idea, it's already been approved by the CEO, go build it. How much time do you think it's going to take? And you're just looking at this Heilmeier diagram of
Ben Johnson (43:17.87)
crispy.
Ben (43:35.085)
what this is, and there's no scoping or design on any of this. And I'm pretty sure bullet point seven violates the laws of physics. We can't actually do that. Nobody can do that. So we might need to pivot away from that. And they're like, just get it done. All right, we'll build to the product spec, but all of this nonsense that you said about
Ben Johnson (43:47.394)
Yeah.
Ben Johnson (43:56.834)
Yeah.
Ben (44:03.343)
how this thing is supposed to work, this wireframe diagram and architecture that some architect came up with, you're like, I don't think they understand how any of this stuff actually interrelates. With data science work, we can't do this latency that you're targeting here. We can't retrain this random forest model in less than a second. And you wouldn't retrain it for every request, so that doesn't quite make sense.
I see fundamental breakdowns when a design is generated without engineering involved. I've seen that so many times: this grand idea of a project, and then somebody has to be the bearer of bad news, if there are adults in the room, saying, this isn't possible, let's go back to the drawing board and involve the team next time when you're talking about this stuff. But I've also...
Ben Johnson (44:41.815)
Yes.
Ben (45:00.419)
been party to, and witnessed at many companies, situations where nobody wants to step up, and they just meekly say, okay, we'll do it. And then the project overruns by six months, and people ask, why is engineering so incompetent? Nobody spoke up.
Ben Johnson (45:09.891)
Yeah.
Ben Johnson (45:19.466)
Yeah, that speaking up is super important. Many of my teammates are offshore, and so that speaking up thing is something that we really have to support and help with. Because otherwise, to KPI viewers, it just shows up as poor engineering performance.
As a leader, I've started to talk about the default future versus the created future. I realized that the product spec paints a picture of a future that you want to create. But if we keep operating like this, this is how it's going to go down; that's the default path you're on, doing things in this way. So how can we pull everybody together to talk about the future that we want to create? And those teammates have really cool possibilities. They can say, well, if we did it this way, then I could get this. And okay, the product guy is like, oh, that's even better than what I thought. So getting people to talk about the future they want to create together, rather than contriving some default future in a silo, that's really the key.
Michael Berk (46:33.191)
Going back to the topic from two seconds ago, how do you guys create a safe environment for people to speak up?
Ben Johnson (46:43.558)
Well, you model it with vulnerability. I think you tell them you don't really have all the answers: I don't have all the answers, I need your input. So there's a vulnerability component that's essential to that. If you're modeling, this is a safe environment, I'm going to tell you what I think and what I don't know, and being vulnerable, that generally helps. But if you're
command and control, then of course it's not going to be a safe environment. But I do think it's that simple. I don't think it's some psychological trick to convey safety. It just starts with vulnerability.
Ben (47:26.275)
Yeah, that was almost exactly what I was going to say. Just start with honesty. If you're a tech leader, if you're in management or first-line leadership, the first-line lead might know how to do something and be like, yeah, here's how I would do it. But that hubris is not beneficial to the team. It's better to do what I do when a new project comes up: I do a private design doc just so I can
grok the problem and think through how I would do it. And I use that as a reference. I share it with no one, but I use it to evaluate somebody else's design that they come up with, to see, did we hit all the same points? Did they think of things that I didn't? If so, I put that in red text in mine as a reminder: Ben, in the future, you totally didn't think of this. But I don't say anything. I'll just comment on their design: this is really well thought out, it's great that you brought this point up.
But for anything they're missing that I think is really important, it's not a directive of, you missed this, this is super important, why didn't you think of this? I phrase it as a question: do you think maybe we might need something like X, Y, Z? And that builds a rapport with that person, where they're almost like, hey, thanks for catching this. They learn from me, I learn from them. And it's humility. If you present it
in that way, in a very honest manner of, I don't know everything that's going on, you can learn from people and also give them knowledge in a way that is not dictatorial.
Ben Johnson (49:09.516)
Yeah, Ben, that reminds me of the difference between a consultant, a coach, and a teacher. What you're modeling there is coaching: asking questions to arrive at a solution. Coaches probably also remind people of standards and certain principles, but
with consultants and teachers, if you don't do that patient holding back you were just talking about, then you become a dependency.
Then they need you to think. They need you to know what's next. They need you to approve the tasks. All of a sudden you're going from leadership to being part of the team, and they can't move without you. And I think that's huge for leaders: to say, hey, this is where I want to go, I don't know how to get there, I'm looking to you guys to help me with the path. I have a couple of ideas, but that holding back that you did, to just not
say, here's my idea, this is how we're going to do it, really allows you to be scalable in helping multiple teams arrive at good outcomes. But yeah, I've made that mistake many times, where I started to consult and advise the team, and then I'm a dependency. I'm needed throughout the life of the project, and in great detail.
And the team just kind of shuts down their creativity and they just want to know, what's the next step? And yeah, that turns into a scalability problem for the leader in a hurry.
Michael Berk (50:51.763)
the next four lessons.
Ben (50:51.767)
And it also adds stress to the leader, because now you're actually responsible for this thing executing. Instead of being supportive of the project, you're now like, how many hours a week do I have to be in meetings to discuss this stuff? Because the team thinks that I don't trust them. That's not good. I already do trust them. I want them to.
Ben Johnson (50:54.284)
Yeah, for sure.
Ben (51:17.391)
And I've always done that as part of my career. Michael and I have talked about this: I always approach any position I'm in as, how do I train two or three people to replace me, so they could just take over my job? And the best way I've found to do that is to empower people: you've got this. Come up with your own way of thinking through how to do this, and I'll be here in case you have questions.
Ben Johnson (51:45.26)
Yeah, well, and I think there's a natural inclination from some team members to resist. And so if you're that kind of consultative, advice giving person that just wants to tell them how to do it, you'll get sucked in by just normal, ordinary resistance.
And so they'll start asking questions: well, how do we do this? How do we do that? A good leader will say, okay, yep, that's what you're here to figure out. Let's see some ideas. What you resist persists. So for them, the resistance should be met with persistent ownership and responsibility. Yep, you're the guy.
So now the problem isn't being solved because they're resisting, and you kind of want that. You want them to say, okay, all right, this is my problem to solve, here are some ideas that I have. Because they're the executors; they're the ones that need to work through it. But if you start giving those answers, you're getting sucked in. And I'm not saying the resistance is bad; it's just a natural reaction to a context switch.
And then you're getting sucked in, whereas what you're talking about is just so good. Yeah, these are the questions of the project that the team should answer. Let's see the ideas and talk it through, rather than you becoming the task giver. Being goal-oriented as a leader versus task-oriented, and expecting the same from your team. You guys figure out your tasks.
I'm gonna keep shaping the goal and providing that direction. But the moment you start delegating tasks, now you're the ringleader. You've got to be there all the time.
Michael Berk (53:40.071)
That really resonates. There's a threshold where, as soon as you start defining the tasks to do, you're the bottleneck. But when you define the requirements, you're the arbiter between the stakeholder and the implementation. And that scales really well, because typically the requirements work is small relative to the actual implementation. And as I'm saying that, some product research is very detailed and very deep, so it's case specific. But typically it's a lot more of a well-defined and small task relative to a software implementation. So that makes a ton of sense.
Ben Johnson (54:21.89)
Yeah, coaches ask questions and provide standards. In my coach, consultant, or teacher framing, the coach's questions precede clarity. So if you're the one asking more questions, as opposed to answering them, you're in the right place.
Michael Berk (54:47.357)
Cool. So Ben, I would love to conclude with one question that's specifically AI focused. We've been talking a lot about engineering leadership and how managers can be excellent. Are there any angles or modifications to your advice that are AI specific?
Ben Johnson (55:06.878)
I love using AI to help me jog some of those questions. Many times I'm like, hey, here's a bunch of artifacts I've been given. Put those in Claude and ask, what kind of questions should I be asking? What's not being answered here? That's kind of helpful. It doesn't always do a perfect job, but it jogs the questions. Where's the discovery? Where are the gaps?
So I think that's useful, super useful, so that you can speed up some of your preparedness time and get to good questions quicker. I think that's AI-specific advice. The other one would be a cautionary tale. I was in a Bible study, and it was like a 13-week series, and in the second week the assignment was: write a personal psalm. And so I put a bunch of personal details into ChatGPT, thinking I'm just going to knock this psalm out. I put a bunch of vulnerable, transparent information into the prompt, and it pumps out an idiomatic psalm. Just, man, maybe it should have been in the Bible. I don't know.
And so I print that out and I take it to my second class, and I think I ended up being about the sixth person in line to go. But the second guy goes, and he's having emotional trouble getting through the psalm that he had written. And I get to mine and I read it, and I have to reflect back later, going, man, I...
I accomplished the task, but I missed the point. And so I think your authenticity is always going to decrease by how much you use a GPT or an LLM. And there are just some tasks where, yeah, you could use it to accomplish the task. But should you is maybe a better question, one that I think we're going to start asking a lot in culture.
Ben Johnson (57:22.218)
And I just think back to Neal Stephenson's sci-fi, where the upper class were Victorians. They had shunned technology, and they used it very, very discreetly. They all used it, but it was the tablet inside their lapel that they would reference once in a while. And then the common class in society was getting cybernetic implants and really replacing parts of their humanity with
the cyborg, right? I think that metaphor is something we should consider. Do we want our kids to be trained by iPads? Do we want the AI training them? And I'll go one point deeper: young teen men, I believe, will struggle in the future with completely artificial relationships. The OnlyFans model won't be real anymore.
It'll be completely artificial. And if you ask it to be more affirming, it will just do that. And that is not what I experience with my wife. If I asked my wife to be more affirming, that conversation is probably not going to go well. And I think we're entering a really interesting time, where maybe not all technology is good.
Michael Berk (58:44.627)
Damn. Yeah.
Ben (58:44.771)
That is an amazing point. Something that I hadn't even considered before. But wow. I wonder how far people are into building stuff like that.
Ben Johnson (58:57.934)
I think probably closer than we'd like to think, right? If you're an eight-year-old or a ten-year-old now, you're growing up with sexualization being digital and all that. It'll talk back, it'll be able to change its appearance.
Ben (59:01.508)
Yeah.
Ben Johnson (59:21.386)
I think there's some real concern with just how we're going to have these digital relationships with AIs as they get smarter. And yeah, I don't think much good can come from that.
Ben (59:35.755)
Nor do I. I don't think any good can come from that.
Ben Johnson (59:41.762)
Right.
Michael Berk (59:44.353)
I thought the movie Her was pretty cool.
Ben Johnson (59:48.202)
Even the lesson at the end of that movie, if I remember it correctly, was that he had kind of eroded the relationship with his sister because he had the artificial relationship with the AI girlfriend. And that AI girlfriend wasn't visual, if I remember. Imagine if that now comes with some pretty solid immersive visuals. That's a real issue.
Ben (01:00:06.639)
Mm-hmm.
Ben (01:00:17.869)
Yeah, I'm now really seriously considering sticking with my Luddite approach to child rearing. I did so with my teenage kids, and now I have two young ones. We don't really allow them screen time. They don't have iPads, they don't have phones. Maybe just sticking with that. I always felt like I'm this old-fashioned person,
Ben Johnson (01:00:25.218)
Yeah, you should.
Ben Johnson (01:00:33.528)
Mm-hmm.
Ben Johnson (01:00:37.23)
That's great.
Ben (01:00:47.747)
keeping my kids away from all this stuff. I have nanny blockers on phones for teenagers, like there's 90% of the internet you cannot see, for good reason. But now it's, yeah, app controls too on anything that interfaces with GenAI. Who knows what people are going to be launching.
Ben Johnson (01:01:08.598)
Yeah, I mean, from me showing up a little less authentically in a Bible study, which you could just say, well, that's minor, though I think it's significant that I didn't write that and wasn't emotionally tied to my words, to, I just don't leave the house anymore because I'm being affirmed by my artificial girlfriend at home. That seems like a big gap, but it's really not. It's the same issue,
Ben Johnson (01:01:40.276)
just in two different illustrations. And when any person gets to their deathbed, they don't say, man, I wish I had watched some more Netflix. Not a single person would say, I wish I'd spent more time on an iPad. They will always say, wait a minute, what about my relationships? I wish I had more time. And it'll always be in the context of relationship.
Ben (01:02:01.935)
Mm-hmm.
Michael Berk (01:02:07.005)
Hmm. God, we just started a whole new episode with like... Yeah.
Ben (01:02:11.299)
Yeah, what a profound series of statements you just made. That was fantastic.
Michael Berk (01:02:18.003)
All right, well, I would love to continue, but we're at time, so we'll use this as a breaking point. Today we talked about a lot of really interesting things and ended on a profound note. Some things that stood out to me: first, when you're looking to build stuff, build what the customer needs, not what the customer is requesting. Sometimes they're the same, sometimes they're different. When you're looking to get buy-in and trust for AI, make sure you include limits.
For instance, for retrieval augmented generation, or RAG, limit the data provided. Make sure that people can't do what Ben does, which is go and track people's locations via destinations and airports. For AI that executes commands, make sure you add a translation layer that parses the request so you're not dropping tables willy-nilly. And then from a manager perspective, really good managers have the responsibility to be understood,
to give their people agency, and specifically tasks that their direct reports like. It's also important that you create a space where people can speak up, and you can do this by leading with vulnerability. And finally, if you're going to write psalms, don't use ChatGPT. So Ben, if people want to learn more about you or your work, where should they go?
Ben Johnson (01:03:28.824)
There you go.
Ben Johnson (01:03:34.218)
Yeah, so I'm extremely easy to reach. I just love having good digital face-to-face meetings. So if you go to my website, particle41.com, there's a book-a-meeting link, and that goes directly to me.
And also on my LinkedIn, I have that same link available as well. I don't have the most uncommon name, as marked by having two Bens on the same podcast, but Benjamin R. Johnson is my handle on LinkedIn.
And if you find me through Particle 41, that's probably the easiest way. So, really easy. I would love to speak with anyone who has any kind of questions about software development, data science, or DevOps.
Michael Berk (01:04:20.403)
Amazing. All right, well, until next time, it's been Michael Berk and my co-host. And have a good day, everyone.
Ben (01:04:25.167)
Wilson. We'll catch you next time.
Ben Johnson (01:04:28.665)
Thanks guys.