Libraries, Law and LLMs: How Unconventional Paths Are Shaping the Future of AI (E.15)
Michael Berk (00:00)
Welcome to another episode of Freeform AI. My name is Michael and I do data engineering at Databricks. I'm joined by my wonderful co-host. His name is...
Ben (00:08)
Ben Wilson, I build backend database tables for REST APIs at Databricks.
Michael Berk (00:14)
Damn straight. Today we're speaking with Lara and Maria. Lara started her AI career in data science as a consultant at FTI. Then she joined Databricks as a customer success engineer and eventually moved into the solution architect role three years later. And she currently works at OpenAI. So we'll be chatting about that for sure. Maria started her career as a research data scientist and five years ago joined Databricks as a solution architect,
and is currently an ML and AI product specialist for EMEA. So we don't really know what that means. We'll probably chat about that as well. But starting off with you, Maria, what is data science like at the British Library?
Maria (00:53)
It was the most beautiful job ever, to be honest. Imagine in a beautiful building full of books where you have so much data that no one knows about. Like who is reading what? What are the oldest newspapers you can find across the world? Scanning data for the building and how things are positioned because there are exhibitions, people going and taking books, paid exhibitions, or manuscripts. So all of these are data that British Library had.
and they didn't know how to take advantage of. And that was my role really, actually bringing the data in one place, like the lake house we were talking about in Databricks, and then start unlocking it with analytics. And we started with market models, as you can imagine, but then it went beyond.
Michael Berk (01:36)
Okay, so lots of scanning books, it sounds like.
Maria (01:40)
scanning books,
and digitalizing these assets, but also a lot around what kind of exhibitions we need to build, what people are interested in learning about, how we bring these books to more people who are not familiar with them.
Michael Berk (01:45)
Mm-hmm.
Got it. So it's like using the digital information to reach new people. That makes sense. Did you guys ever try to do any meta-analyses? Because I feel like the British Library probably has a couple of books at least. A lot of information in there. Anything about the insights those books provided?
Maria (02:00)
Yes, yes, yes.
You know, there were a lot of things we could have done at that point when I was there, but we were not even in the cloud, and there was a lot of data transformation that had to happen first before anything else could be done.
Michael Berk (02:13)
Hahaha
Okay.
Got it. OK, cool. Yeah, that sounds like a really fun use case. And I'm just curious.
Maria (02:27)
because one of
the most beautiful buildings, a lot of cultural people, people interested in all of this kind of stuff, a lot of ideas. But then we had to get practical about it, and there are a lot of steps before you go meta.
Michael Berk (02:42)
Yeah, yeah, I'm Googling it right now. Holy crap. This is a nice building. Anyway, how did you guys meet?
Maria (02:45)
Hmm. Hmm.
Okay. So we met... so, me and Lara, we call each other kind of husband and wife, maybe like wife and wife. And we met at Databricks when we reached out to each other to do some recruiting videos, or to help, actually, in the recruiting process, right? So this is how it started. We connected around how can we help our recruiting, kind of hiring process, get more
Lara (03:08)
True, yeah.
Maria (03:16)
visibility, be more open around how people can prep to get a better chance in the interviews. And this is how it started. And then I think you reached out to Youssef or something.
Lara (03:27)
Yeah. And then it just slowly morphed into, how could we... like, you know, we were sometimes doing the same demos for lots of different customers. And so we just started thinking about a way that we could build reusable demos, like recording them. And that slowly morphed into us collaborating on this.
Michael Berk (03:47)
How do you walk the line between being a Databricks-branded video tutorial service versus your own thing?
Maria (03:48)
And this is.
It was a conscious decision, right? So before we started, when we were building the brand and thinking about the name, we were brainstorming a lot in the office on names, you know, and whether it should be with Databricks or not. But then we decided to not be with the Databricks brand, because we didn't want to only have Databricks-technology-focused videos. It is a little bit more biased on that side for now, but we're planning to bring in more technologies. So it was a conscious decision, I'd say.
Lara (04:22)
I would say a GenAI and ML focus was our goal. And yeah, I'm happy that it was like this, because then it meant we were not limited to that technology.
Maria (04:24)
Yeah.
Michael Berk (04:32)
Yeah. And also you can have a lot more velocity. Like, I know Databricks does in theory have a podcast run by Brooke and Denny. Denny is on DevRel, Brooke is an ML practice leader. And they have really great content, but it is audited and scripted. Because it's the Databricks brand, it has to go through layers of red tape. So we did the same thing, where it was like, we just want to talk about cool shit and be able to curse. And yeah, true. Because you guys are nice people, unlike us.
Maria (04:49)
Right? Yes.
We don't curse in ours though.
Michael Berk (05:01)
So yeah, that's super cool. Are you guys enjoying making content? Like, what's that experience like?
Lara (05:07)
Yeah, I think to be honest, I mean, I'd be curious to know how it is for you. I don't think you can do it long-term if you're not enjoying it. For me, this is like my Friday joy. We do it on Friday afternoons, UK time. And it's like so nice to be able to interview like super smart people building cool stuff and learn every week. And also like just getting to know people within your company and outside your company.
And so, like, if we didn't have this, I don't think you could keep up with the work that, you know, it represents. Then of course you start having some shortcuts on how to process videos fast, et cetera. But I think it would be difficult to keep it up in the long run if it was not bringing joy, generally.
Maria (05:54)
For me, I'm enjoying it even more now that you're at OpenAI, because I learn about a completely different tech that, on a day-to-day basis, I am not on. I see a completely different perspective on the sales side, on the product side, on the competition side. It's really interesting. We learn a lot from each other and from all the interviews we bring, because we try to bring product people, engineering people,
people on the field, so very use case focused as well. So what are the real things we can solve? Yeah, so it's fun.
Michael Berk (06:25)
Ben, are you having a good time?
Maria (06:27)
Yeah.
Ben (06:27)
I think if we had scripted stuff, or if we had a marketing person involved, I would have stopped this years ago. The fact that we can have an episode where we can vent about stuff in the tech space, or we can then have another episode that's like, here's tips that we've learned about all the stupid things that we've done in our career, so that the audience can be like, wow, that dude was an idiot.
But he learned from it. I like talking about stuff like that. It's kind of nice to adopt a self-deprecating tone sometimes.
Maria (07:02)
I have to ask,
what did you learn and what did you do?
Ben (07:06)
Michael knows. I've talked at length about stupid shit that I've done in my career.
Maria (07:08)
haha
Lara (07:13)
But was it like this at the beginning? I'd be curious, because at the beginning we were more scripted. We're going to say this, then we're going to say that. And it's just with time we realized, like, no, there's no need for a script. And it's actually so much better if you have a genuine reaction. Like, I don't know, we did a recording today where I was like, can somebody remind me what DSPy does exactly? I know it's, you know,
automatically optimizing the prompt, but actually I don't really understand how it compares to other things. And at the beginning we would do cuts when we had a question or something. And now we've started to be a bit more chilled about this. Was it the same for you guys? Or was it from the beginning, like, free-form?
Ben (07:56)
It's always been like riffing off the cuff. For this show, definitely, we started it with that intention. And then our previous show... when I started the show with the panelists that were on there at the time, before Michael came on, there was kind of a script that they would go through 15 minutes before recording. And I was just like, I hate this.
Maria (08:19)
No.
Ben (08:19)
Because I never wanted to stick to it because somebody would say something interesting and like, no, the episode is now about what you just said. Like, I don't care about these other questions. So I found that I would have this script of like 30 questions and I never got past the third one. And I told the other panelists, I'm like, I'm not doing this anymore. They're like, yeah, it's your show. Now you do whatever you want. And then we had Michael on a show and reached out to him, I think the same day, like, dude, do you want to just come on this podcast and like run it with me?
Maria (08:33)
Hehehe.
Ben (08:47)
And that's how we met. It worked. I do have a question for you, though. DSPy, who did you interview?
Lara (08:54)
Oh, it was like some colleagues. So Anton and Sam, in pre-sales. How did we come to talk about it? Oh, it's because we were talking about MLflow 3.0. And then I think we were talking about the fact that you could integrate that in the prompt versioning in MLflow 3.0. And I was like, somebody tell me, is DSPy like an orchestrator? Like, is it a competitor
Maria (09:01)
Okay.
Lara (09:19)
to, you know, LangGraph or, like, the Agents SDK? I was just a bit curious.
Michael Berk (09:24)
Yeah, what the hell even is DSPy?
Also for the people who don't know what it is, it's average- Yeah, Ben, I heard you know about it. What is it?
Maria (09:29)
Let's ask Ben about this, right? ⁓
Ben (09:34)
I'm not a committer to it, but the two maintainers, like the two primary maintainers, work on my team. So if you want to interview them, I can arrange that. Tomu is awesome. And so is Chen. Basically, you take the principles of traditional ML hyperparameter tuning, of, say, Optuna or Hyperopt, where you have an objective function, like, I'm trying to minimize loss.
Right? My target optimization metric is, say, RMSE. And I test all of these different hyperparameters in my model that will effectively allow me to explore the search space, and not just through pure brute force where I'm coming in and doing random sampling, but it's sort of an intelligent search, and it uses Bayesian optimization to do that, for, say, Optuna.
So you take that principle of automated optimization and apply that to text. So in the case of GenAI, it would be like your prompt coming in to set the system prompt of how you want this LLM or agentic system to behave, as a set of base rules. And when you're getting started, you write that out as a human. You just type it up and you're like, I think this is good, hopefully.
And then you interact with the thing and you're like, yeah, this sucks. Like, it's hallucinating with this one thing, or its tone is off. So imagine you're building some sort of agent system, or not even an agent, just a chatbot. Like, hey, I want a chatbot that can help me write emails really well. And if you just flat out talk to an LLM and tell it to do that, if it doesn't have state for a conversation history of who you are or what that
session is like, and there's no history to draw on of what the style is going to be, it's going to do whatever it's trained to be, right? Like, whatever that final RLHF step was that was done while finalizing that model before it becomes a service, it'll just create this sort of generic output. And you could interact with it in a chat session and say, well, I want the tone to be professional, or I want the tone to be, you know,
approachable, I want you to use second-person references or something, however you would want that to be. That's effectively the prompt that you would inject into that thing. The process of going through and adjusting that and iterating on that for a production service that you'd be deploying is, in my professional opinion, about as enjoyable as having your fingernails removed by a pair of pliers. It's just as fun as manually tuning an ML model.
For those of us who did data science back in the day, which I think is all four of us: have you ever tuned a model by hand? Like, straight up by hand, not even using cross-validators or random search, just guessing? I used to do it all the time. Anybody else?
Michael Berk (12:31)
Yeah.
Maria (12:31)
Yeah.
Ben (12:32)
Yeah, it sucks,
Like, it's days or weeks of time going through and brutally doing that. So you use Optuna or Hyperopt or any other optimizer, and that becomes a couple of hours until you get something that's pretty good. That's what DSPy is doing. It's evaluating the results of that prompt being connected to an LLM, and it's actually rewriting that prompt to hit a target that you configure.
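The analogy Ben draws here, replacing hand-guessing with an automated search against an objective function, can be sketched in a few lines of plain Python. This is a hypothetical toy, not Optuna or DSPy code: we tune one knob (a slope) by random search, scoring each candidate with RMSE on synthetic data.

```python
import math
import random

random.seed(0)

# Toy data: y = 3x + noise. The "hyperparameter" we tune is the slope.
xs = [i / 10 for i in range(100)]
ys = [3 * x + random.gauss(0, 0.1) for x in xs]

def rmse(slope):
    """Objective function: root-mean-squared error of a candidate slope."""
    return math.sqrt(sum((y - slope * x) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Random search: sample the space instead of hand-guessing one value at a time.
best_slope, best_score = None, float("inf")
for _ in range(200):
    candidate = random.uniform(0, 6)
    score = rmse(candidate)
    if score < best_score:
        best_slope, best_score = candidate, score

print(best_slope, best_score)
```

Optuna's Bayesian (TPE) sampler replaces the `random.uniform` line with an informed proposal based on previous trials, but the loop shape, propose, score, keep the best, is the same principle DSPy applies to prompts.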
Maria (12:57)
But is this then a case where it can be used against LangChain and orchestrator frameworks?
Michael Berk (13:01)
Yeah, that was the whole point. So there's a prompt optimization step where it will optimize the system prompt. It will also add demonstrations. And it will do a couple of other things within that system prompt template. But it also advertises that it can orchestrate agentic frameworks and theoretically even fine-tune stuff. And there are also different optimization algorithms. So Ben mentioned Bayesian optimization, but it also has random search, which I typically use. It's a bit cheaper. And so, yeah.
Ben (13:02)
Yeah.
Hmm.
Michael Berk (13:30)
Is it positioned as a competitor because it does orchestrate agentic steps, right? There are signatures, there are modules, and other abstract classes.
Ben (13:39)
It's an authoring library, but it does a lot more than just that. The intention, because of the power that it provides and the things that it does for you for making a better, you know, production agent or chatbot or whatever you're going to create, is that they're thinking that,
Maria (13:44)
Mm.
Ben (14:04)
or the hope is that, most people sort of gravitate towards that. The interface is a little bit more software-engineering-focused. Like, you're creating, you know...
it's not a script that you're writing, it's more object-oriented code that you're creating, if you want to leverage all of the features in it. So it is kind of a competitor, but it plugs into those things. So it's a meta-library.
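The prompt-optimization loop Michael and Ben describe, score candidate system prompts against an eval set and keep the best, can be mimicked with a stdlib-only toy. Everything here is hypothetical: the eval set, the candidate fragments, and `mock_llm` (a stand-in for a real LLM call) are invented for illustration, and this is not DSPy's actual API.

```python
import random

random.seed(1)

# Hypothetical eval set: (user request, keyword the reply must contain).
EVALS = [("decline a meeting", "unfortunately"), ("ask for a raise", "compensation")]

# Candidate system-prompt pieces to search over: a tone instruction plus
# optional demonstrations (the kind of thing an optimizer would propose).
TONES = ["Be terse.", "Be professional and warm.", "Use bullet points."]
DEMOS = ["", "Example: 'Unfortunately, I must decline pending the compensation review.'"]

def mock_llm(system_prompt, request):
    """Stand-in for an LLM call: a real system would return a generated reply."""
    return f"{system_prompt} -> reply to '{request}'"

def score(system_prompt):
    """Fraction of evals whose reply contains the required keyword."""
    hits = sum(
        1 for request, keyword in EVALS
        if keyword in mock_llm(system_prompt, request).lower()
    )
    return hits / len(EVALS)

# Random search over (tone, demonstrations) pairs, keeping the best prompt.
best, best_score = None, -1.0
for _ in range(20):
    prompt = f"{random.choice(TONES)} {random.choice(DEMOS)}"
    s = score(prompt)
    if s > best_score:
        best, best_score = prompt, s

print(best_score)
```

The search discovers that prompts carrying the demonstration score better, which mirrors what Michael says DSPy does: it adds demonstrations and rewrites the system prompt template to hit the metric you configure.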
Lara (14:30)
But if we have that, why isn't it used more widely to automatically suggest, hey, you should use this prompt instead? Like, right now I can't think of technologies that you can use, like GenAI technologies, where it's going to automatically tell you, no, rewrite the prompt. Where do you think that is?
Michael Berk (14:49)
Well, Databricks KBQA, knowledge-based question and answer, where you basically take a set of documents, put them in a vector search index, and add a retriever on top of that. That uses DSPy. And I think there are a few other, like, internal tools that use DSPy as well. But Ben, you know the answer.
Ben (15:03)
I
Maria (15:03)
I think most of
the bricks in Agent Bricks will use it, if not already. Yeah.
Ben (15:07)
Yep. And
Tomu is working on the PR this week for, I think, KIE. It's going to be more like porting it over too. Yeah. And there are other projects that I can't disclose yet that we're working on, that are more about embedding DSPy functionality into sort of autonomous agentic systems. And that's the end goal for our midterm view...
Maria (15:13)
Hmm.
Michael Berk (15:14)
Thank God.
Ben (15:34)
It's like, these things are becoming so powerful with the architecture that you can set up around, say, an OpenAI model.
You build state, cases where you can connect to a tool-usage server and create tools that can do some pretty crazy things, like optimize itself, or edit, you know, your source code live. Like, look at, you know, Anthropic's Claude Code implementation. We had a whole episode a couple of weeks ago on just that, about what it can do and what happens when you connect an MCP server to it, with tools that can do more than just what Claude Code can do. You can do the same thing with OpenAI: hook it up through LiteLLM,
connect it to any of these agentic systems that do real-time feedback, and it's crazy what you can do with it. So that's the midterm future: building systems that can improve themselves without needing human intervention.
Maria (16:19)
See
Michael Berk (16:24)
Lara, are there really cool proprietary non-discussable features at OpenAI?
Maria (16:29)
I guess a lot of them.
Lara (16:30)
Even if there are, I guarantee you, they don't keep me in the loop. But yeah, actually, you'd laugh, because very often... like, it's funny...
Michael Berk (16:34)
Okay, good answer.
Maria (16:36)
Thank
Lara (16:40)
that's, you know... like, you can have a tweet in the morning. Like, you wake up and there's been a tweet releasing something that, you know, you don't know yourself, and you've been working at the company. And then the customers would be like, Sam Altman tweeted this. And I'm like, I just woke up. You know, I'm not aware yet. Yeah.
Maria (16:56)
Thank you.
Michael Berk (16:58)
You
Yeah, I'm
curious about your perspective on that. So obviously, like, these are our own internal opinions. I'm speaking freely outside of Databricks, and I assume Lara is speaking freely outside of, like, the OpenAI brand. But one thing that I really like about Databricks is how transparent it is. Relative to my prior job, which was working for a streaming service, where we didn't really know the game plan for a lot of executive decisions. And I think
the all-hands at Databricks and the ask-me-anythings at Databricks... like, you truly can ask the sauciest question. The common one is, when's the IPO? And they don't tell you, obviously. But every... yeah, in two years; every single day it moves back two years. But yeah, it's been really cool. So I'm curious, Lara, what is the culture around that at OpenAI? Are they super transparent? Because there's a lot of secret sauce to protect. So how do they communicate while still, I don't know, protecting the IP?
Maria (17:37)
In every two years.
Ben (17:39)
hehe
Lara (17:57)
Yeah, there are definitely Chinese walls. I know that at Databricks there was almost too much information, which was great, because if I wanted to get some information, I knew I could find it; even in the field, we were looking in every PRD that exists, et cetera. So our job was more to try and filter through the noise to get to the information that you actually need. I think at OpenAI now, obviously, there are lots of guardrails
set up around that. Everything is very tented. That said, on the culture, from the perspective that you share, I find it very similar, in the sense that we have, very regularly, ask-me-anythings with the leadership team, all of the leadership team, and people are asking very, very open, strong, unfiltered questions, and those questions are addressed. For me,
It does feel very similar in that sense.
Ben (18:54)
So are you able to actually... like, when you were at Databricks, and Maria, you can do this right now and you do it all the time, just straight up talk to one of the R&D engineers to be like, hey, how does this thing work? Or, when is this getting released? Is it like that? Or are they like, yeah, we don't even allow that communication channel?
Lara (19:10)
I think people are very friendly, and because the team is much smaller than Databricks now... like, we're just under 3,000 people. It's actually easier to just go and talk to people, especially being from the field. We're just, I don't know,
I think, like, just 500 people all together, like pre-sales, et cetera. And so I would say the collaboration with product is very, very tight, compared to Databricks, where there are just so many more people, so people specialize much more. You have people like Maria who are more the people talking to product, whereas in pre-sales, you would very rarely, if ever, talk with engineers. So I would say I'm more in contact, but then obviously there's a lot that they
are not allowed to share that they're working on.
Michael Berk (19:57)
Who do you think, in the next two-ish... let's call it two years... is gonna be the best LLM?
Ben (20:04)
That's a hard question.
Maria (20:04)
No.
Michael Berk (20:06)
You can also totally block it. I will say my unfiltered opinion. I understand that you might be associated with one of the providers. Maria, I'm curious as well.
Lara (20:13)
My very honest opinion,
OpenAI 100 % checking. Go for it, Maria.
Michael Berk (20:17)
Definitely.
Maria (20:19)
I would say I would give two, okay? I think one is either OpenAI, some model of OpenAI's, or I would say Google. I really like the results of Google, specifically on the workspace side of things. So I believe Google. Google.
Lara (20:34)
For me, I don't know about you guys, but I do not dare ever to say, like, this LLM is the best, because I got burned. When was it? Like, when we first released DBRX, and I had a customer... we had a customer workshop the week after. And so I went in with all my stats: this is the best open-source model ever released, blah, blah, blah. And then that lasted one week. And then...
Michael Berk (20:34)
Nice.
Lara (20:58)
That's when I decided, like, I'm not making assumptions. I think it can be true at a certain point in time, but things are just changing so quickly. And it really depends on the question that you're looking to answer. And people can always bring this benchmark and that benchmark. And so, since that day, I decided I'm not ever again making a claim that, like, whatever company I represent, we have the best model. And I think
Sometimes we over-focus on the benchmark when in fact you should really focus on the use case and what you're trying to achieve. And if you spend like more energy on this rather than benchmarking everything, maybe like you'd have better outcomes, but I don't know why you guys.
Maria (21:41)
Yes,
this DBRX burned a lot of people.
Michael Berk (21:45)
Well yeah, what happened?
Why did it only last a week?
Maria (21:47)
Because then Llama came out, right? Yeah. But, like, when we built DBRX, we took an interview with Jonathan Frankle on how they built it, and the model architecture and all of these things. And I remember he introduced DBRX at that point as Rex, like Rex the dinosaur. And then that became kind of the puppy, like a dinosaur puppy, for DBRX. It was fine, the DBRX model.
Michael Berk (21:49)
Right, Llama 4.
Maria (22:16)
And now we are not serving it anymore.
Michael Berk (22:20)
Yeah.
Ben (22:21)
And we are not making a part two of that, for good reason. I actually can't.
Maria (22:25)
Yeah.
I didn't know, but we are
never going to focus on model training ever.
Ben (22:35)
I think we're going down the route of empowering people to do fine-tuning of an open-source model, because of capitalism reasons. It just makes more sense to be a platform that enables that, rather than a research organization where that's not a primary monetization path as a company. But my take on your question, Michael...
Maria (22:42)
Okay.
Ben (23:00)
I would never be able to answer that, but I also don't know if it's going to be relevant two years from now.
Maria (23:06)
through.
Ben (23:07)
So my take on it is: the base models that are being served, they're definitely going to improve over time. But I think you have sufficient sort of momentum and velocity at all of the major research orgs that are out there. So you're talking about all of big tech. So you've got Meta, Google, AWS with Bedrock, and most certainly OpenAI.
You have all of this, like, brain trust that's at these companies, and they've built street cred with being able to iteratively improve these things. So I don't think those companies, from a profitability standpoint, are going to be competing on just how good their base model is. Because they're all going to be pretty darn good. I mean, they're all pretty good right now; some of them are fantastic. But I think that those companies are...
And you can see it with the agentic revolution that's happening now. Like, logging into ChatGPT this morning, two new pop-ups came up introducing agent mode. So that's, you know, OpenAI's response to Anthropic's, you know, agentic CLI tools. Like, hey, we now have this, the same as they do. That's the race that I think is going to be going down: these companies building not just a service, because you don't make a lot of money off of a service.
Maria (24:05)
mmm yes
Ben (24:26)
You make a lot of money off of providing useful utility usage of that service. So I think that's where it's going. And that's going to be the race. My top two contenders for that are definitely Anthropic and OpenAI right now. I think that's going to be like the war that's going to go on in the next 10 years. And I think they're both going to be successful.
Lara (24:43)
And do you think,
do you think the goal is also going to be to all follow... I don't know. Like, it seems like in the industry there's this trend right now that we're all going towards the FDE model, like the Palantir FDE model. Do you think that's a thing, the forward deployed engineer?
Michael Berk (25:00)
What's an FDE?
Lara (25:02)
So sending like equivalent to an RSA, like sending an engineer deeply, deeply integrated into a customer's like, you know, workflows so that they can build really cutting edge, you know, AI systems that can also be replicated for similar customers. So for example, you could go to a pharma customer and revolutionize, you know, some of their, I don't know, supply chain, for example, and then go and replicate that.
I don't know, I feel like every day when I'm looking at like job postings, it seems like we have that more and more, especially in big tech companies. I was wondering what you guys think.
Ben (25:38)
Nobody's going to like my answer to this one. Lara, I think that your company is going to make that job obsolete.
Maria (25:40)
Mm-hmm.
Ben (25:48)
Largely. I think five to ten years from now, you will have agentic systems that are capable enough that they'll be competing with human capital at organizations that can't hire somebody to, like, build something. I think that agentic systems will be advanced enough. I mean, they're not there yet, but they're slowly approaching the point where you can have, like, a senior engineer
sit down with one of these code-prompting and, like, optimization tools and build a moderately complex feature from scratch. Like, we use it every single day, everybody in Databricks R&D; we can't not use it these days. So you need that sort of supervised assistance to meet your velocity targets
of just, like, shipping code. We're shipping 10x the number of lines of code that we did three years ago, each individual engineer. So once these systems get enough testing out in the real world, and people have come up with clever ways of doing stuff like, hey, I need to add these additional features to this tool set, or, this deployment needs not just the ability to look at individual files
in a source code repository, but also be able to pull in data, like: let me have references to the design doc. Let me have references to the Heilmeier or the PRD that is about this feature. And let me also be able to look at code bases that this might interface with, not just within this one repo. When you start being able to have an agent that can look at the actual infrastructure of your deployment... like, hey, where am I getting the data from that this thing is using? Let me
go check the source code for the ETL pipeline, and let me go sample the data. And it'll get to a point where, in a few weeks, just using human language, you can have it build an entire full stack for you. That'll be production grade.
Maria (27:44)
And who do you think is gonna go first? Is it gonna be the senior engineer, the lead engineer, or the junior engineer?
Lara (27:44)
So do you think?
the junior?
Ben (27:52)
I
think you'll be able to take a...
You'd be able to take, like, a junior engineer, somebody with two or three years of experience, and have them build something that today would take a staff engineer and two senior engineers to build. And it'll be of sufficient quality. I don't think people are going to go away from an engineering perspective, because you still need somebody to tell it, like, this isn't exactly what we need, or somebody to audit it. You need a PR reviewer. But the consultant stuff, I think that is at risk.
I think people that are currently doing consulting might just shift to a different sort of task. They'll still be interacting with customers. They'll still be helping people. But they'll be more like teaching people how to use these tools, instead of going in and just writing code for people.
Lara (28:41)
So either you're an engineer and you have your fleet of agents doing work for you and you just review and merge your PRs at the end of the day while you're working on more interesting stuff. Or if you're customer facing, you go from my role where I'm currently like building demos for customers to being more of a sales role where I'm just saying, Hey, build that demo for me. And then I'm focused more like on the...
communication aspect of the job, change management. Is that what you're saying basically?
Ben (29:10)
I think you'd be teaching people how to fish. You'd be teaching them how to use that tool properly,
instead of going in... not so much in a pre-sales role, but, like, that post-sales role that is currently, like, a coder for hire. Like what Michael does: he goes into a company, they pay Databricks an obscene amount of money for his time to go and build an implementation. A couple of years from now, I think that he would be more of an advisor for them.
Maria (29:38)
So Michael, what I'm thinking is not gonna go away, though, is people who go into a customer or a place or a system and see new ideas or different ways, or design the system before they go and implement it... pair-code it, vibe-code it, or whatever, right? So, a person who can see the challenges and design quick... like, frame a problem, frame the challenge that we need to solve, and then go and code it out.
Lara (30:06)
You could delegate the strategy to an agent when it's powerful enough to do it. Right now they're not there, but.
Maria (30:12)
Yeah, yeah.
Michael Berk (30:13)
Yeah,
Maria, I'm super curious for your take, but yeah, that's exactly... Like, even with my job now: we were just assigned, like, an ML migration with 150 ML pipelines. And of course there's an abomination of, like, a proprietary repository that's horribly managed. It's bloated... like, the classic story. And they want to move it onto a leaner and more open stack, using MLflow and, like, industry-standard tools instead of proprietary ones.
And if you give me Claude Code and a week, I will have everything done. Of course, you can't, because of their LLM policy. They're scared of data leakage from their super fancy bloated repository. But that's where we're at. And the things that need discussing are basically: all right, how does this team upskill to the new stack? Are we building the right process for this customer? Are the ML models optimized? Are they actually doing what we need them to be doing?
Maria (30:50)
I'm so sorry. I'm so sorry.
Michael Berk (31:07)
And so I think that's where my role will shift, because you can sort of look over the edge, and currently legal blocks a lot of these types of features. So yeah, and I think a rising tide will lift all boats. I don't think engineers are going to go away. I think the jobs will change, and as long as you are dynamic, you'll just use the new tools in a different way. Look at all the revolutions in history, like when electricity came, or the classic one, when the car was invented. All right, if you're a horse breeder...
Michael Berk (31:35)
Sorry, yeah. You probably have to go take a Udemy course on how to build wheels or whatever it might be. But yeah, I think that if you're just dynamic, you'll be really, really fine. But Maria, what's your take?
Maria (31:40)
Yeah.
Yeah, the one thing I believe is that the lead engineer, the staff engineer, won't go away, because someone needs to validate the system and come up with different design ideas. Or at least that is my hope: that all of this knowledge, whatever differentiates you, you'll be able to at least utilize it. Not code it yourself; you vibe-code it or the LLM codes it for you. But you need to guide it, give it a different direction, and see where it's going wrong.
And which takes me to a very controversial question. How do you feel about interviews where they say you should not use LLMs to do your coding test?
Michael Berk (32:26)
We were talking about that two days ago on my team.
Maria (32:27)
yeah? ⁓
Michael Berk (32:30)
I'll start. It's really simple. You want to interview people in the way that they're going to be doing the job, but you also want to know if they're bullshitting. With take-homes, for instance, we're currently discussing how to handle that, because an LLM can ace our take-home. One option is basically to make the questions harder or abstract enough that an LLM won't pass, but then we're just kicking the can down the road a year until an LLM can do it. I think where we're going to land is: you can use an LLM, and then during the onsite we'll ask you very deep questions about what is going on and why you made these decisions, and that you can't fake. But over to the group. What do you guys think?
Maria (33:09)
Not good!
Ben (33:10)
We don't do take home tests.
We most certainly don't do take-home tests on our side. Not because of GenAI, but because it's not a good adjudication of somebody's skill. It doesn't apply. You give a software engineer who has 15, 20 years of experience a prompt like, build a system that'll allow you to store state on financial transactions. If you've been employed for 15 years as a software engineer at certain companies, it's almost insulting to ask somebody to do that. It's like, why did you just give me an entire week-long project that I've done 50 times before? Most people can knock that out. Or do you give them some really complex thing that's going to be incredibly mentally taxing but is not related to your core job? What did you just validate? That they're a highly intelligent human being? There are more efficient ways to do that that don't waste their time and your interviewers' review time. So the live coding questions are way more effective, and you just make sure that you're pulling from a bank that is frequently regenerated by humans and you're rotating questions all the time.
So there are always novel things people can't share. It used to be that there were loads of places on the internet you could go for any tech company's question bank as a software engineer, and just study the patterns and be like, okay, I can solve all these. Nothing's going to be a surprise.
But that live, face-to-face, you can't. There's no way to gain a tactical advantage there. You either know it or you don't, or you can think through it.
Lara (34:50)
So you're saying face-to-face, like not remote? Like it cannot be a call, it has to be face-to-face? Or are you happy with remote?
Ben (34:58)
I mean, we do remote for those, but if somebody's pausing for an inordinate amount of time on something that, if they knew it, should take a three-second pause before they start explaining the solution, you know.
Lara (35:12)
When you see their eyes
going like this. I've seen that. Yeah. It's just so cool.
Maria (35:14)
Yeah? Interesting.
Ben (35:21)
We ask them to
turn up the volume on their mic so you can hear if the keyboard is going.
Lara (35:25)
I think it's nice to have... like, live coding interviews can be hard. I find them hard just because I'm a bit more stressed; I feel like somebody's watching, which I don't like when I code. At the same time, I agree that it would be better to see what a person is capable of. I'm really pro using LLMs in an interview, provided that maybe they share their screen and you can even see how they are using the LLMs. I think it's a bit unrealistic to be like, you can never use AI, it has to come from your own intelligence, because we're all using it every day now. And so I'm more interested in seeing how that person is thinking, what they are actually using AI for, whether they actually reflect on what they're doing, and what their thinking is on how to solve a problem. But yeah, I don't know.
Ben (36:20)
Yeah, I'm.
Maria (36:20)
Yes, let them use AI in the interview, in my opinion, as well as give them code to go through and read and see what they understood. I think that cannot go wrong, right? Give them some script and let them explain how they understand the code and what's happening.
Ben (36:37)
Yeah, that was my favorite interview question that I used in the field for people. I don't know if I interviewed you or not, Michael. I don't think we did. But when I would do the live technical interview at Databricks for the field, for RSAs, the post-sales people, I would write up some code in an IDE
Michael Berk (36:48)
You did my take home.
Ben (37:06)
and intentionally put at least 10 bugs in it. And then I would tell them: there are seven issues with this code, like seven bugs in it. Find all of them live, right now. You have 45 minutes. Just tell me what you're thinking, stream of consciousness. Tell me where you're looking, and you're not allowed to run the code. And it was sizable, like an entire screen's worth of
Maria (37:16)
Hahaha
Hmm.
Ben (37:31)
a really crappy function. There were only a couple of people in the history of those interviews who actually found all 10. And that's also another way of measuring who they are as a person. Are they aggressively telling me, I found 10 issues, and do they assume that I'm an idiot? Or are they very kind about it? They're like, well, I think there might be more than seven. And then there are other people, the ones who ace that interview, who are like, I see what you did there. Very clever. And I'm like, man, okay, now we have somebody who's really special, because they understood the meta-psychology of why I did that. Those really expose things, because to me that test measures both your understanding of code as well as
Maria (37:59)
Aww. Okay, go there. ⁓
like.
Ben (38:24)
how well you can work with GenAI. Because that is the GenAI workflow model: it generates code. Can you tell it where it screwed up or how to adjust it?
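As a purely illustrative sketch (not Ben's actual interview code), the exercise he describes might look like this: a short function with deliberately seeded bugs, annotated here for the reader, which the candidate must find by reading rather than running. The function, data, and bug choices are all invented.

```python
def running_max(xs):
    """Intended: return the running maximum of xs. Three seeded bugs below."""
    best = 0                      # bug 1: wrong init -- fails for all-negative input
    out = []
    for i in range(1, len(xs)):   # bug 2: starts at 1, so xs[0] is never considered
        if xs[i] > best:
            best = xs[i]
        out.append(best)
    return out                    # bug 3: returns len(xs) - 1 values, not len(xs)

def running_max_fixed(xs):
    """Corrected version a candidate might propose."""
    out, best = [], float("-inf")
    for x in xs:
        best = max(best, x)       # consider every element, including the first
        out.append(best)
    return out

print(running_max([-2, -1]))        # buggy: [0] -- wrong length and wrong value
print(running_max_fixed([-2, -1]))  # correct: [-2, -1]
```

The point of the format, as Ben notes, is that it mirrors the GenAI workflow: the code already exists, and the skill being tested is spotting where it's wrong.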
Maria (38:26)
Yes.
Yeah.
Yeah, totally agree. Not a take-home test, but give them some script and let them explain how they think about it. And this idea with the bugs, I hadn't thought about it. Yeah.
Michael Berk (38:45)
Yes.
So I would love to pivot a little bit to hear more about your backgrounds. You're both objectively very successful. You work at two of the best companies in the world in prestigious roles. And I'm sure you had to work hard to get there, made some mistakes, learned from them. Can you think of any key moments or key learnings that you could pass along to our listeners?
Maria (39:10)
I'll go. Okay, so some background, I guess, which is applicable for both of us, is that first of all, we are not in the country we were born in, right? So we had to go to different countries, study, learn the language. So there were a lot of hard moments, I believe, in our journey. And one of the hardest was when we started in data science. I think for me, it was a really, really male-dominated world, right? So honestly, my team has always been majority guys. And I love that; I have grown into getting used to this, you know, being more open. For me, the challenge usually came from the fact that I was a little bit more introverted and I had to speak up more in a very male-dominated world. And that required a lot of self-understanding, acceptance, and growing into it, you know, taking on more leadership, because I was perceived as not very vocal, and that didn't help me grow, right? So if I could go back and do something differently, I would try to be much more vocal and push myself a little bit more on taking action, being more responsible, more accountable for projects, taking more ownership, putting myself out there more in order to be seen more, because people just perceive you as an introvert while you are not, really. And I think I grew from this. Lara doesn't think I'm an introvert anymore. So I think I grew on that front. But it was...
Ben (40:45)
No.
Michael Berk (40:45)
I don't think you're an introvert.
Maria (40:53)
Mm.
Lara (40:54)
For me, well, I don't have a typical background, I'd say. When I grew up, I was always a bit nerdy, but I really hesitated. And then I chose to study law and economics. And when I started working, I was on those huge investigations, like suing Google, and we had to review millions of documents, and it was just so manual, and I was like, gosh, there must be a better way. So that was the very beginning of my career, and that's when I realized I really didn't want to work in economics or law anymore. And that's when I started geeking out on NLP at the time, because that was what everybody was doing. And so that's how I started moving to a data role and ultimately joined this data science team. I think the beginning was also really hard, just because everybody around me, they were all PhDs and super, super smart. Very, very kind though. So very patient and explaining a lot, which honestly was a complete shift from the industry I was used to, which was a bit more cutthroat and, you know, not the same. And so that's something I loved about the computer science environment and the data world. Yes, I agree it's very male-dominated, but I felt the overall atmosphere was much more welcoming, at least. Yeah, my biggest problem was that I had to become as technical as possible as quickly as possible, and
Maria (42:20)
No, everything is baseline. We have to compare it again.
Michael Berk (42:27)
Hmm.
Lara (42:35)
People talk a lot about, you know, imposter syndrome; everybody has it. But here it was not imposter syndrome. I was literally changing careers, and it was just hard not understanding anything that was going on, while everyone took everything for granted. I think what saved me, and I don't know if it's my background that helped, is that I love documenting everything, at a time when you didn't have note-takers or anything. I quickly became the person who was not the most technical, but who was documenting everything. And so when there were questions about the projects we worked on, I knew the answers. So I think the advice I would have for somebody breaking into this field would be: just find the one thing that you do well that nobody else does. Because for me, as silly as it sounds, I was the person who was really good at documenting and doing project management and orchestrating. And at the same time, I love geeking out on the technical stuff, which meant that I could go into details. So that's how I managed to start getting into data science. And for me, that was the most difficult part of my career, I think, more than afterwards joining fancy companies like Databricks or OpenAI, I would say.
Maria (43:59)
Yeah, on the changing of careers, yours was a big change. For me, it was going from more of an ML engineering background to field engineering. And while it seems subtle, it's actually a big difference, because in engineering you are building a product; you are in your team, you work together on solving a problem. But then suddenly you go into a customer-facing role, if not a massively customer-facing role. You have to be accountable to so many people in your team: your sales, your post-sales, your manager, all of these people around you, each with different responsibilities, right? You are accountable to the customer, so you have to respond, always be there, know everything, right? You are the first person everyone's going to reach out to if there's some bug or anything. And when I joined Databricks, the platform was at that point mainly focused on landing the platform, landing data, building data ingestion, mainly ETL stuff. Well, I was an ML engineer, right? So I had to upskill a lot: what do we mean, how do we bring the data in? I don't know. What are the connectors? We don't have any. And so, yeah, there was a lot of learning on that side, which then helped me, I guess, in building a bigger picture: how do we think about data? How do we think about a data platform? And then build the ML and data science stuff on top.
Michael Berk (45:22)
Got it. So a consistency between your two experiences is that you had to learn a lot and upskill, going one level deeper. How do you guys approach learning?
Maria (45:26)
Yeah.
Good question. For me, I will say this, Lara, because I love you. For getting up to speed with whatever happened at Databricks, I was always going to Lara: Lara, tell me, what's the TL;DR? So I guess the first thing is network. Have people around you that you can trust, where you can share your opinions and thoughts and get up to speed, because some people are better at some things than you. So build a network of people you can trust. And then, I guess, do activities that help you learn. For example, for us it was the YouTube channel, the newsletter that you came up with. Things that keep you up to speed in any case, as part of your workload.
Lara (46:14)
Yeah, I agree on the network. Especially at Databricks, the product was so wide you couldn't be an expert on engineering and warehousing and data science and ML and GenAI, et cetera. So having a network of people that you can trust and rely on, when you're like, I have to do a two-hour presentation on Terraform tomorrow, never used Terraform in my life, please save me... that was really so important. And at the same time, it allowed me to grow so much in three years at Databricks; the amount of things that I learned just from my peers was really amazing. I think also what's interesting in our industry is, you know how journalists always try to get the latest scoop on what's happening in the world? I think in order to be a good engineer, you need to have a bit of this mindset: what's being talked about, what the latest feature is, especially when you're customer-facing and you need to be the first person talking to customers and representing your company. So there's also this almost investigative side; you need to always be aware of what's happening. So yeah, for me, that would be the main aspect. What about you guys?
Ben (47:26)
I mean, Michael knows, for me... I've had six career changes from when I graduated high school to where I am today. And it's not like I just went to different companies; these were completely different industries. From nuclear engineering to process engineering to MOCVD R&D, then to semiconductor manufacturing, and then to the fashion industry. And that's
Maria (47:57)
Wow.
Ben (47:58)
That's where I did data science work for the first time, as a job title, which landed me the job at Databricks in the field, and then going to engineering. So I like that, because it constantly reminds me how stupid I am. Every time that I make one of those moves, you know nothing, but you have all this work experience beforehand. And that's why I love what you said, Laura, about relying on what you learned in economics and law to apply to the next journey you took. That's how I've handled my career transitions. You get into a new place, you don't know anything, and you naturally assume that everybody around you has everything figured out. But you kind of look at things through a different lens than the people who have stayed in that microcosm. And you start to come up with ideas: is there a different way to do this that maybe is a little bit more efficient? I'd remember not just how a previous company did something, but how that profession approaches, say, experimentation. And it helps you over time. Once you get a little bit more street cred and knowledge, you can apply some of those concepts. One of the things that made that very apparent to me was actually at that fashion company, where a project I was given was something Michael is very familiar with: A/B testing. What they were doing at that company was launching these marketing campaigns: here are some coupons, use them for your next purchase, $50 off or whatever. And somebody came to me and they're like, could you automate this? I'm like, automate what, the coupon creation? I don't know how to do that. And they're like, no, no, not the graphics or anything. Although you could do that today. They're like, we want to know what we should do next. Who should we send these things to? I'm like, okay, so you want a probability model for whether somebody will click through. And they're like, yeah, just optimize that. I'm like, okay. I know nothing about this.
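As a hedged illustration of what "a probability model for whether somebody will click through" could look like, here is a tiny logistic regression trained with stochastic gradient descent. Everything here (the features, the data, the learning rate) is invented for the sketch; it is not the model Ben actually built.

```python
import math

def sigmoid(z):
    if z < -60:
        return 0.0            # guard against math.exp overflow for large negative z
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain SGD on the logistic loss; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi      # gradient of log-loss with respect to the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Made-up customer features: [past purchases, emails opened last month]
X = [[0, 1], [1, 0], [5, 8], [6, 7], [0, 0], [7, 9]]
y = [0, 0, 1, 1, 0, 1]        # 1 = clicked the coupon
w, b = train_logistic(X, y)
print(round(predict(w, b, [4, 6]), 2), round(predict(w, b, [0, 0]), 2))
```

In practice you'd rank customers by predicted click probability and send the coupons to the top of the list, which is the "who should we send these things to" question restated.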
Ben (50:07)
I'm just going to go talk to like 50 people, take notes, and interview people to understand what this problem space is. But the thing that derailed the project for me, that broke my brain, was when I started asking people, well, how do you know that your marketing tests worked? And they're like, our sales numbers. I'm like, well, the data's not in the database. There's no way for me to join the data; I tried to do it. They're in two separate technology stacks. And they're like, well, no, no, no, we just look at sales numbers day by day. But you're testing three different hypotheses at the same time, and you want to know which one's the best one to go with. And they're like, well, it's just the one that we know works the best. So: no data analysis, just gut check. So I was like, we need to do some sort of A/B testing for everything that we do here. And
Michael Berk (50:55)
Classic.
Ben (51:01)
people are like, A/B testing? I think the website people do that. I know the website people do that, and I'm going to go check with them on how they do it. And what I ended up seeing was that they're also doing gut checks. Lick your finger, hold it up, and be like, yeah, I think the wind's coming from this way. So one of the first big projects I did was: we're going to use statistical modeling and basically continuous improvement monitoring,
Maria (51:15)
Aww. ⁓
Ben (51:27)
stuff that I learned in factories. Like, hey, we have these things called statistical process control charts. We collect data over time. We need to understand degrees of freedom to know how much difference we can actually declare between these distributed datasets. I built an entire system for that for people to use. And I was like, I never would have known this if I had worked in the fashion industry my entire career.
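To make the "degrees of freedom" remark concrete: comparing two campaign variants often comes down to something like Welch's two-sample t-statistic with the Welch-Satterthwaite degrees-of-freedom approximation. This is an editorial sketch with invented daily conversion counts, not the system Ben describes building.

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb          # squared standard error of the mean difference
    t = (ma - mb) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented daily conversion counts for campaign A vs campaign B
camp_a = [102, 98, 110, 105, 99, 104]
camp_b = [120, 118, 125, 117, 122, 119]
t, df = welch_t(camp_a, camp_b)
print(f"t = {t:.2f}, df = {df:.1f}")  # |t| far above ~2 suggests a real difference
```

The degrees-of-freedom term is exactly the piece the gut-check approach skips: it tells you how much of an observed gap you can actually declare significant given the sample sizes and variances.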
I never would have come up with this. So that transition thing, even though it's scary, kind of opens your eyes to looking at problems in a different light than you normally would. And I like that process of coming in, feeling super dumb, and asking all the dumb questions. And another thing, to your point, Laura: it's amazing how different people are in different industries
in how nice they are when you're dumb or just ignorant about things. I was actually curious, as a follow-up: why do you think in the tech space people are nicer?
Lara (52:29)
I don't know. It's so hard to explain. I think what I loved in the tech space... well, first I was annoyed at having to review so many contracts manually. But then I went to a meetup, and it's stupid, but I just remember people were in jeans and t-shirts, and I was like, I'm going to change careers. So then I started to learn how to code, et cetera. And I don't know, I just loved the nerdy atmosphere. I loved the fact that intellectual achievement, not academic from a diploma perspective, but intellectual thinking, was really valued. I think that contributes to people being kind, because that definitely felt like a change. Not that people look at diplomas too much; more that people are interested in intellectual discussions and reasoning. That's the only explanation I can find for it, really. But for me, that's what attracted me. I can wear jeans at the office. People want to optimize reviewing millions of documents using natural language processing. I just found that fascinating. And then, I don't know, I can't really explain it. Why do you guys think it's like that? Because I really don't want to generalize, but in this case I have to generalize. I really think there's a massive difference between different industries.
Michael Berk (53:52)
Nerds are nicer, usually. They've been bullied and so they know how it feels.
Maria (53:55)
You think that's it? It's nerds that grew up.
Michael Berk (54:00)
That is my take.
Maria (54:01)
Sorry, Lara, I want to make a joke: when you said people wear jeans in the technology space... and yet we never wear jeans at the office. You and me, we never wear jeans in the office.
Lara (54:10)
Yeah, true.
Yeah, that's true. I think we were not... Yeah, that's a whole different discussion that we've had at some points, which is a bit... It's a bit...
Maria (54:20)
Mm.
Michael Berk (54:23)
What's the story?
Maria (54:25)
The story, or at least my take on this, is that when I go to the office, and I'm not going very often, probably once a week, I'm gonna wear my nice clothes. I'm gonna be in a full-on outfit out there. And I think that people pay attention.
Lara (54:29)
No, but we were just talking about the fact that, especially when I started my career, I really wanted to have a dev look. It was more jeans and t-shirts. And I felt, especially as a woman, you know, you're the odd one out, and that look makes you seem smarter and more technical. I usually wear lots of dresses and skirts and whatever, and a lot of women in the tech world talk about this sometimes, that we feel it's not really what you're supposed to wear in a technical environment. But then, I don't know, I just decided to stop doing that a couple of years ago. I was like, I don't care. I'm just going to wear a red dress at the office, you know, French style. I don't care. People can judge. And I've never talked about it with men before. It's only between women that we talk about this.
Maria (55:36)
Yeah, that's what came to my mind when you said that.
Lara (55:38)
I don't know
if you want to keep that in the podcast, but it's... ⁓
Michael Berk (55:41)
No, definitely.
Ben (55:42)
yeah, for sure.
Michael Berk (55:43)
Yeah. No, it's funny you say that. For the listeners, I'm wearing a really loud tie-dye t-shirt. It says "Save the Earth so we have a place to boogie," and there are skeletons dancing on it. I wore that and basically basketball shorts on a Friday once, and I'm rounding the corner and come face to face with a sales executive dressed to the nines. And they stopped and audibly gasped.
Michael Berk (56:09)
And I was like, damn right, this is the tech industry, I can wear what I want. But you are judged by it. And I feel like, especially as a woman, it's a lot more loaded. But like our chief AI officer, whatever his title is, Jonathan Frankel, he has green hair. Well, he used to; then it was blonde, and now it's natural again. But yeah, I think there is an interesting self-expression thing where nerds want to dress down so they look smarter,
Maria (56:10)
Ha ha ha
Used to.
Michael Berk (56:37)
but it still feels, at least at Databricks, relatively welcoming. The other story I remember: I was at Data and AI Summit for the first time, and one of the directors of the professional services org, named Paymon, was like, are you going to the happy hour? And I was in basketball shorts, going to the gym, and I was like, no, look at me. And he was like, I don't care, you can wear whatever you want. I was like, my God, what a concept. And Ben is wearing a bathrobe right now.
Maria (56:53)
So.
Ben (57:05)
I always wear a robe.
Michael Berk (57:06)
You
Maria (57:08)
That's a little... Easy.
Ben (57:10)
Yeah, I
Michael Berk (57:11)
Here's multiple.
Ben (57:12)
always wear it. I've got like 20 of them, but it's comfortable. My take on the whole thing: I've worked in industries where people have a very similar academic background to what software engineers would have at a premium tech company like where I work now. But the culture is so different. Exactly as you said earlier, Laura, it's more cutthroat, or there are just more social rules, and people dress to their job title. I remember being counseled a couple of times in my first job out of the Navy. Because after doing 12 years in the Navy, you're like, I don't want to wear a uniform anymore at work. I'm done with that, and with having that sort of professional bearing and appearance all the time. It gets very old; at least it did for me. And then I'd go in with jeans and an untucked button-down shirt into my factory role. And I'm an air-quote engineer there. There were like six engineers at the factory, and we had probably 400 technicians and maybe 2,500 operators. And all of the engineers would always wear... most of them came in in a suit without a tie, top button unbuttoned. I never wore that stuff. I would always wear a flannel shirt, you know, button-down, never tucked in, dark jeans. And I remember my director coming up to me one day. He's like, I don't appreciate the disrespect you show to our profession. And I'm like, where's this coming from? What is this in reference to? He's like, you dress like a slob. And I'm like, yeah.
Maria (58:54)
HR, straight to HR. An HR complaint.
Michael Berk (58:59)
Yeah
Ben (58:59)
He was a pretty direct guy
and that company was in a very interesting part of the United States. But he dressed me down about bearing, about how engineers need to look professional. I'm like, yeah, but I don't want to replace an expensive hundred-dollar button-down shirt every month because I got some chemicals on it. He's like, well, you shouldn't be touching chemicals. That's what technicians are for. I'm like, yeah, but it's faster if I just... how am I going to design and build these tools if I'm not getting my hands dirty? And he's like, well, you have a point, but you should dress better. I'm like, this is so stupid. But everybody was very stodgy, and all the other engineers had this air about them: we are better than the technicians.
Ben (59:49)
And all the technicians hated all of them, except me. I was buddies with a lot of the technicians, and they're like, yeah, you're one of us. You get into the tool. Sometimes we come onto the production floor and we know it's you, because we can see dark jeans hanging halfway out of a tool while you're crawling around inside the thing. I'm like, yeah, that's the best way to maintain these things. How are you going to work on something if you don't know how it's built or what's going wrong?
Maria (1:00:08)
No.
Ben (1:00:17)
And moving between industries, working in the semiconductor industry, there's a dress code for engineers there. It's very strict, and I would break it all the time because I hated wearing suits. But it's like: the more formally people are expected to dress, the less nice they are to one another, the less welcoming they are, I found. And then you get to pure tech company vibes, and there's even a shift that I saw between the field and R&D. The field is a little bit more competitive. It's... what?
Maria (1:00:53)
Presentable.
Presentable?
Ben (1:00:56)
Definitely more presentable in how they dress. But the culture among team members: people are friendly to each other and help each other out, yet there's still this undercurrent of competition, and some people kind of one-up each other: oh, this project that I'm working on, I did this amazing thing on this account. And then in R&D, it's the complete inverse. It's almost like everybody,
Maria (1:00:57)
Yeah.
Ben (1:01:24)
when you're at that level and you're surrounded by people as competent as we have in R&D, everybody feels like they're the dumbest person in the room. But everybody feels that. Everybody's like, I have no idea what's going on. This is so complex. I'm such an idiot. And if everybody feels like that, everybody's super nice to one another, because you're just like, hey, I feel for you. Or, hey, how can I help you out? Let's do this together, we're a team. It's very much like that.
Maria (1:01:55)
So when I joined this product specialist team, as I kind of welcomed the team, there was this product and engineering kickoff happening outside San Francisco in February. And so I went there and I was so proud of myself that I was there because of that. Everyone was so clever, all the deep engineers discussing about the new innovations. But also I was impressed by the teamwork, right? So how proud everyone was for each other.
and how close they were as a team and with the leadership and the founders. I was impressed by how good the team was on that side.
Michael Berk (1:02:32)
Yeah, sorry to go back real quick, just to close the loop on the fashion thing. Maria, you mentioned you dress to the nines when you go into the office because you want to wear your nice clothes. Yes. How has that impacted your reception, do you think?
Ben (1:02:32)
Yeah.
Maria (1:02:36)
No.
I do my best.
Michael Berk (1:02:48)
'Cause you straddle technical, sales, and customer-facing, but you're supposed to be an expert. So what's your take?
Maria (1:02:48)
I think it has helped me. Specifically when I have more C-level conversations, because sometimes I'm brought in to discuss strategy and direction for the account and make sure we have the funding and all of these things. And at that level, everyone is more presentable in that sense. So I think that works well.
I believe it works well because you give a first impression. I don't think it really matters at the end of the day, as long as you can back it up with some technical expertise and opinion. If I only had the clothes, it probably wouldn't have worked so well for me. But I try to back it up with some expertise or something that I can bring to the table. So I believe it works because of this.
And it's not a requirement. It's just I enjoy it. It makes me feel good.
Michael Berk (1:03:49)
Yeah.
As my high school baseball coach said: look good, feel good; feel good, play good.
Maria (1:03:54)
Exactly. Yes.
But what I've noticed is people notice, and so you get a lot of positive feedback from colleagues, usually women colleagues, which I love. I love when we bring each other up.
Michael Berk (1:04:07)
OK, cool. We're over time, so I will quickly summarize. But this was an awesome episode, already one of my favorites. All right, so on winning the LLM provider war, it seems like there are three organizations this group is excited about: Google, Anthropic, and OpenAI. Laura, of course, respectfully declined to participate. But we are bullish on all three of those. Ben's take was that integrations
Michael Berk (1:04:35)
with these LLMs are what's gonna make them win. Basically, they're here to provide value, and pretty soon LLM quality will be a commodity, so tool calling, and how well a model interacts with the infrastructure around it, is gonna be super important. A good interview question: make a code snippet with ten bugs and ask the candidate to find seven. This is both a personality check and a technical check. And then some miscellaneous tips: speak up if you are comfortable being loud.
Also try to give those people who might be more introverted the space to speak up. And I've seen that a lot at Databricks. Please.
Maria (1:05:04)
Good. Yes. I actually, sorry, can I add something on that front? It's
education on both sides, right? So you as a person need to understand where you stand and how you are perceived, but also in the room, people need to be aware of other people's spaces and give space to others to speak up. So I think it's both ways.
Michael Berk (1:05:22)
Exactly. Yeah, leaders, it's really important to know your team and know that some people won't interrupt and other people will, and you need to give space to those people so that there's basically an even playing field. Be creative and find a way to be valuable within a group's skillset. Laura mentioned that she was the note taker when she transitioned industries. So just find a way to be valuable and stay in the room. And then for learning, it's really important to build a network so that you can tap a friend when you have an issue.
And then also find extracurricular activities that help you learn. For instance, I wrote a big series of blog posts, like 50 blogs, that basically helped me learn data science, and I made a podcast. Laura and Maria are also creating content to learn. So there are many ways to do it, both inside and outside your job. So yeah, anything else, Ben, you want to say?
Ben (1:06:09)
Dress tactically.
Michael Berk (1:06:11)
Dress tactically, but beautifully. Awesome. Well, until next time, it's been Michael Berk and my co-host. And have a good day, everyone.
Maria (1:06:11)
Yes, but beautifully.
Ben (1:06:15)
Hehehe.
Ben Wilson,
we'll catch you next time.
