The AI Engineer’s Dilemma: Mastery or Versatility? | The Pair Program Ep37

Jan 9, 2024

Join us in this episode as we dive deep into the world of artificial intelligence through the eyes of two industry leaders. In this episode, we’re joined by Maggie Engler, a member of the technical staff at Inflection AI, and Andrew Gamino-Cheong, the CTO and Co-Founder of Trustible.

Maggie and Andrew share their first-hand experiences of working in the AI engineering space, offering insights into the challenges and innovations that shape their daily lives. They explore the critical question of whether AI engineers should strive to be jacks of all trades or masters of one specific skill, shedding light on the diverse skillsets demanded by this dynamic field.

But that’s not all – this episode also delves into one of the hottest topics surrounding AI: ethics. Maggie and Andrew engage in a thought-provoking discussion about how AI should be used, the ethical dilemmas faced by engineers, and the inherent limitations in AI’s application.

About the guests:

-Maggie Engler is a technologist and researcher focused on mitigating abuse in the online ecosystem. She currently works on safety for large language models at Inflection AI.

-Andrew has spent his career working at the intersection of policy and AI. Prior to founding Trustible, Andrew was a machine learning engineering tech lead at FiscalNote. Now, at Trustible, he’s flipped the script and is working to apply policy to the AI landscape.

Sign-Up for the Weekly hatchpad Newsletter: https://www.myhatchpad.com/newsletter/

Transcript
Tim Winkler:

Welcome to The Pair Program from hatchpad, the podcast that gives you a front row seat to candid conversations with tech leaders from the startup world. I'm your host, Tim Winkler, the creator of hatchpad.

Mike Gruen:

And I'm your other host, Mike Gruen. Join us

Tim Winkler:

each episode as we bring together two guests to dissect topics at the intersection of technology, startups, and career growth. And welcome back to another episode of The Pair Program. I'm your host, Tim Winkler, joined by my co-host, Mike Gruen. Mike, uh, given our episode today is centered around AI, I was inspired to think, um, back through some of the greatest AI references in Hollywood over the years, and some of the names, some of the characters that popped up were Agent Smith from The Matrix, um, Skynet's Terminators from The Terminator, Data from Star Trek, and then WALL-E. So my question for you to kick things off: if we're going to set up a battle royale between these four AI characters, who would be the, who'd be the last bot standing in

Mike Gruen:

the ring? I mean, Skynet, they time travel. I think that trumps. I think that trumps everything.

Andrew Gamino-Cheong:

Data time traveled once or twice, though. No. Yeah.

Mike Gruen:

the ring? But through, like, a wormhole, like, that, I feel like that was more discovery, not invention. But I would like to point out, and I think I've mentioned, that you left off that list WOPR, the original Skynet, from WarGames. So it's a good start. That's not really a correction. I'm not correcting. I'm just adding to the list. But just adding. Yeah. What about you? Which one do you think

Tim Winkler:

wins? Well, I wanted to kind of keep on theme, so I just plugged it into ChatGPT, and the answer was: in a, in a straight-up battle royale, my money might be on Skynet's Terminators due to their combat focus and relentlessness. Agent Smith could be a close second if he manages to take over the other bots or manipulates the environment. Data and WALL-E are strong contenders, but they might be limited by their programming and lack of aggressive intent. But then again, WALL-E might just charm everyone into laying down their arms, who knows? Interesting.

Andrew Gamino-Cheong:

That's

Mike Gruen:

a good answer. That's a much better answer. Uh, well, more, well, much more thought out,

Tim Winkler:

clearly. Yeah. But yeah, I kind of like the WALL-E just charming folks, like, let's just, let me win this one. Nice. Um, good stuff. All right, well, let's give our listeners a preview of today's episode. So today we're going to dive into a discussion around AI engineering and the debate on whether AI engineers should kind of be jacks of all trades or masters of one. Uh, and we'll tackle everything from, you know, career paths to ethical kind of quandaries and, and, uh, and a few other things. But, uh, we've got two very qualified guests joining us, uh, Andrew Gamino-Cheong and Maggie Engler. Andrew is the CTO and co-founder of Trustible, uh, an AI governance startup, and has over seven years of experience as an AI/ML engineer and architect at FiscalNote prior to starting Trustible. And he's very passionate about the intersection of AI and policy. Maggie is an experienced AI engineer, currently at Inflection AI, an AI studio based in San Francisco creating a personal AI for everyone. And prior to Inflection, Maggie spent two years at Twitter, now X, building models to detect policy violations, and has taught data science courses at UT Austin and Penn State Law. So, before we dive into the discussion, first off, thank you all for joining us today. But we will kick things off with a fun segment called Pair Me Up. This is where we pretty much go around the room and give a complementary pairing. There we go. Teed up. Um, Mike, you always kick us off. What do you got for us today? Keeping it

Mike Gruen:

nice and simple. Going back to the basics with food: hash browns and sour cream. Um, I was recently reminded that there was a restaurant in, um, well, Rockville, South Rockville, North Bethesda, but really South Rockville, uh, called Hamburger Hamlet, uh, for a long, long time when I first moved down here, that had a dish that was called Those Potatoes or something like that. And it was hash browns and sour cream, and it went excellent with the burger or whatever you were having

Tim Winkler:

for dinner. Yeah, I wasn't seeing that coming. The sour cream threw me off a little bit. The hash browns. Ketchup maybe. Uh, sour cream. And I'm

Mike Gruen:

not even a huge sour cream guy. But sour cream and hash browns, it was, it's a nice pair.

Tim Winkler:

Okay. I'll have to take your word on that. I love sour cream too. I've never tried it on hash browns, but we'll give it a shot. I'm

Mike Gruen:

talking, like, the potato, but I'm not talking, like, the round potato ones. I'm talking, like, where the hash browns are like the breakfast kind, like the skinny, shredded potato.

Tim Winkler:

Yeah, yeah, I gotcha. Yeah, I'm following you. I'm just thinking of a McDonald's hash brown for some reason. That's what's coming to my mind.

Andrew Gamino-Cheong:

I see. I'm getting hungry now.

Mike Gruen:

Good thing. It's Friday afternoon. Yeah,

Tim Winkler:

I'll, I'll, I'll jump in. So, um, my, my pairing is going to be fitness and fellowship. So earlier this year, I joined this kind of workout community in my town. It's called F3. It stands for faith, fitness, and fellowship. Um, it turns out there's actually hundreds of these kind of small outposts across the country, and actually some, some global locations as well. But essentially what it is, it's small workout groups of anywhere from like three to 20 guys that meet up a few days each week, usually pretty early in the morning, for like a 45-minute HIIT workout. And for me, it's kind of been a valuable way to combine community and exercise. And so like the fellowship piece of it, it makes the fitness part, in my opinion, more enjoyable and I guess more sustainable. You kind of get a guilt trip if you don't show up, or people, you know, kind of rag on you if you keep missing a couple of days consecutively. So, um, another perk of it for me has been with my family. We live in Northern Virginia, but we have a place in Chattanooga, Tennessee, that we try to come to, you know, a couple of months out of the year, and it's actually where I'm recording from today. Um, but, uh, there's a little F3 outpost here as well. And so it's a way to just kind of quickly plug right into the local network here, kind of get a workout in, um, but also kind of meet some, some people as well. So. That's, uh, that's my pairing, um, fitness and, and fellowship. Um,

Mike Gruen:

definitely agree. I went to the gym the most when I had a coworker, he and I, it was like this, like, he was like, all right, tomorrow at 6am. I was like, I, I have no choice. Like you've already, we've set the challenge. So yeah. I'll see you there tomorrow at

Tim Winkler:

six. Yeah. I mean, gyms have capitalized on that community style, like Orangetheory is another one. Like, people want to get those splat points. They want to, you know, post it through their, the little, the little network, the app and everything. So it's, it's, it's pretty, it's pretty genius. Um, but, um, definitely something that kind of keeps me motivated and keeps me coming back. Um, cool. So let's pass it along to our guests. Um, Maggie, I, I'll start with you if you want to give us a quick intro and

Maggie Engler:

tell us your pairing. Um, well, I'm Maggie. I'm in Austin, Texas, and y'all already introduced, uh, me a little bit, but my pairing, I'm also kind of moving into this fall mode, even though, um, it really has only just started cooling off, uh, down in Texas. Um, and so I was thinking about also, uh, food related, um, my pairing is coffee and pie because I love, uh, like Thanksgiving, obviously, and having food and family, but what I really love is like leftover pie and then having that for breakfast, uh, with like black coffee. To me, that is like the perfect breakfast. That's

Tim Winkler:

awesome. Now I'm salivating because pie is one of my favorite desserts of all

Mike Gruen:

time. Dessert for breakfast is great. This morning, I had my leftover bread pudding for breakfast, so.

Tim Winkler:

So do you, your coffee, you just go in straight black coffee or do you do like a flavor? I'm just going straight black coffee.

Maggie Engler:

Nice. Yeah, and I actually, I don't really do like iced coffee, which is kind of unusual. Uh, especially when it's hot out, but I'm pretty much just, uh, yeah, old fashioned that way.

Tim Winkler:

I love it. Coffee and pie getting geared up for Thanksgiving. Um, awesome. Well, thanks again for joining us, uh, Andrew, how about yourself? Quick, uh, intro and you're pairing.

Andrew Gamino-Cheong:

Yeah. Uh, my name is Andrew. Really excited to be here. Um, I'm actually calling in from DC, so close, uh, I know, to some of the other places you guys were talking about. For me, I'm actually going to go with, like, a really great competitive strategy board game and a craft cocktail. There were a lot of places that I loved to go to in college that had, like, you know, board games and drinks. And then during the peak pandemic, my wife and I, I think, bought every good competitive two-player board game out there. And on some of those pandemic nights where you couldn't go out and do anything, we'd make cocktails and play those for hours on end. So I think there's something about the pairing of those that just works really well, getting vibes on both sides. So

Tim Winkler:

nice. That sounds awesome. You have a favorite cocktail or board

Mike Gruen:

game, board game,

Andrew Gamino-Cheong:

board game. There's one, um, Seven Wonders Duel. It's a dedicated version for two players. Uh, my wife and I got competitive in that real fast, and that was amazing.

Mike Gruen:

Nice. Cheers. For my wife and I, uh, Starfarers of Catan was our, uh, was our go-to for a two-player. So, yeah, love that.

Tim Winkler:

Yeah, we were having a debate, uh, at Hatch, uh, not long ago about the, the GOAT board game out there. And, um, Clue came up quite a bit. Clue was, was one of, uh, uh, the group favorites, but it's not really a two-person thing, but Catan was definitely also top of the list. All right, good stuff. Uh, well, again, thanks for joining us, Andrew. And, uh, we'll, we'll transition into, uh, the heart of the, the topic here. So, uh, as I mentioned, we're going to be talking about, you know, depth versus breadth as it relates to AI engineering. Um, and we want to kind of tackle this from a few different perspectives. Um, Andrew, you know, why don't you kind of lead us off here? You know, you were a double major in computer science and government. Sounds like it played a part in your career path as an engineer. So what are your kind of thoughts on the topic of specialization versus

Andrew Gamino-Cheong:

generalization? Yeah, happy to dive into that. So as you mentioned, in undergrad, I double majored in both political science and computer science. Honestly, at the time, I was unsure whether I wanted to become a lawyer doing stuff in the tech space, or if I wanted to go into the tech space and do stuff related to the law. I ended up choosing more of the latter because I always had these dual interests, um, really partly informed by, you know, the kinds of things I did growing up. You know, I was the biggest fan of The West Wing, which really shaped my whole view, and I started to come to DC for college. Um, but I also loved watching every sci-fi show out there. You know, I watched all of Star Trek and watched Data and all these cool ideas about AI, um, and the impacts those could have on society. And so what I was always thinking about is actually, how could we apply these awesome, powerful ideas in AI to this space? You know, I always saw a lot of similarities in the kinds of logical things that actually are embedded in laws, you know, these policies, these ideas; there's logical principles, there's interpretations, and how that could actually be perhaps assisted by AI. I think, like a lot of people, I always tried making my own, like, hey, could you create an AI that could actually interpret laws, make recommendations based on that? I think now I've got a much better sense of the ethical or safety challenges around that. Um, but my advice sometimes, when I talk to former students, is actually to pick two things: pick one thing that will give you some technical skills, pick another that really piques your intellectual curiosity. And you can find a really, really great career path working at the intersection of those two, because you can basically be always understanding the latest technologies and ideas and applying them to the problems in your other space. And you can do that in both directions. And that's where I think we see innovation happen the most: you're taking ideas and solutions that have been developed in one space and applying them to another. I think my career has been really successful doing that. Um, that's definitely something I recommend to everyone else.

Tim Winkler:

Yeah, that's really sound feedback, um, and advice. You know, this is probably a good jumping-off point to also, like, you know, explain a little bit more about Trustible, because obviously it sounds like this played a big part in you building this, this business, and the problems that you all are solving.

Andrew Gamino-Cheong:

Yeah. So right before I started Trustible, I was working for a company that basically was using AI to understand the policy landscape. We'd scrape every piece of proposed regulation and legislation, initially in the U.S. and then globally, and use AI to try and identify what was relevant, you know, which legislation was more likely to pass or not based on individual voting preferences, you know, which things were more likely to be significant to different industries. One of the biggest trends we saw was on regulation for AI itself. So I remember, for example, reading the first draft of the EU's AI Act almost two and a half years ago when it was proposed and immediately starting to think through how this would impact actually my life, right? I was on a day-to-day basis proposing new ideas for AI. I was never having to go through our legal team, though, to discuss them, to understand the compliance requirements or the legal context around that. So I was literally starting to think through, actually, how could I make sure that I don't have to spend all my time dealing with just compliance and legal teams? Like, could I give them all the information they need up front to help do the kinds of risk assessments that these laws require? That was then the origin of Trustible. You know, our, our goal is to make compliance with all of these AI-focused regulations as easy as possible, whether that's understanding even what risk category a use case of AI falls into for the AI Act, conducting some of the workflows to do, like, risk or impact assessments on individual risks, and actually helping organizations adopt ethical principles and helping them document all the ways in which they are being ethical with AI in a provable way, so they can kind of build trust with their stakeholders. And we ourselves are using AI as part of this, right? We've actually done a lot of research now on AI incidents. We also have a large language model system that can help, uh, kind of teach some of our customers what the different requirements are and help them interpret and even document things in a smarter way. We won't use generative AI to write your documentation, but we will actually evaluate how good your documentation is with AI.
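
To make the documentation-evaluation idea concrete, here is a minimal sketch of the general pattern Andrew describes: using an LLM to grade documentation against a rubric rather than to write it. The rubric text, model name, and JSON scoring scheme below are illustrative assumptions, not Trustible's actual system.

# Minimal sketch: score AI documentation against a rubric with an LLM.
# The rubric, model choice, and JSON schema are illustrative assumptions,
# not Trustible's actual implementation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Score the documentation from 1-5 on each criterion:
- intended_use: is the intended use case clearly stated?
- limitations: are known limitations and failure modes disclosed?
- risk_assessment: are risks identified and categorized?
Return JSON: {"intended_use": n, "limitations": n, "risk_assessment": n}"""

def evaluate_docs(doc_text: str) -> dict:
    """Ask the model to grade (not rewrite) the supplied documentation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": doc_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

scores = evaluate_docs("Our model flags invoices for manual fraud review...")
print(scores)  # e.g. {"intended_use": 4, "limitations": 2, "risk_assessment": 3}

The design point matches the one Andrew makes: generation stays human, and the model is confined to assessment.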

Tim Winkler:

Yeah. And it's another topic that we'll, we'll get a little bit deeper into, uh, later on in the conversation, because you had, you know, some interesting perspective on, like, the, the doomers and the utopians and finding, um, common ground there as well when it comes to AI. But, um, let's, uh, let's get Maggie's perspective on this as well. So, um, Maggie, I guess, uh, your, your initial thoughts when you, when you think about AI, when it comes to, you know, engineering as a specialist or, or generalist. Yeah.

Maggie Engler:

Yeah, I think that's, um, I think there's room for both. Um, and, um, just thinking back, it's super interesting, Andrew, that you were CS and government. Um, I was, I think, pretty much throughout school, pretty much a specialist. Uh, I actually don't have a CS degree. Uh, I did a bachelor's and master's in electrical engineering, and I was focused on statistical signal processing. So kind of very, like, applied math focused, not really at that point with too many, um, kind of practical, uh, applications. Um, but in the first, uh, role that I had in industry, I was working on a, um, cybersecurity platform, so doing malware detection, um, with machine learning. And I think that, um, from that point on, I was kind of like, oh, well, first of all, um, just from a purely academic standpoint, like, the data science and machine learning world, uh, aligns really well with my skillset. Um, but then also working in that, um, field, uh, kind of by accident, really, uh, I found that cybersecurity was super interesting to me on a personal level. I became really interested in how, uh, responsive different machine learning systems are to, uh, different types of attacks, and how, um, there's kind of this, um, cat-and-mouse game, uh, where as soon as you sort of harden a model to some type of attack, like, you then start to see, um, novel things come up. And, um, for me, like, that sort of adversarial nature, um, meant that it was, it was always kind of fresh, and it felt, um, like I, uh, like I was always learning. And so, um, ultimately I think we kind of ended up at the same place, even though I was certainly not as broad as Andrew when I was in school, um, in that I kind of, uh, selected, um, my career opportunities towards, uh, first sort of explicitly cybersecurity, information security, and then, um, after that, much more towards trust and safety more generally. Um, so I worked at, um, you already mentioned, I was, I was at Twitter, and, uh, now X, uh, for, um, over two years, working in their health engineering organization on policy enforcement. And in my current role, I took a lot of that, um, background to an AI startup, uh, where a big part of my job is just trying to understand, uh, and improve, uh, the safety of, um, our large language model product. And so understanding, uh, what are the risks, um, associated with, um, the generations the model produces in these different conversations, um, how can we measure that, and how can we kind of prevent, um, unsafe generations from happening? Um, so I've also kind of somewhat narrowly focused in on a certain problem area, even though obviously data science and machine learning are super broad; you can do almost anything with that. Um, and so I really like, um, uh, this proposition: if you can find, um, uh, a broad enough skill set, but also narrow in on, like, a particular area where you're interested in applying it, um, that seems like a recipe for success.
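
As a rough illustration of the generation-side safety work Maggie describes, here is a minimal, self-contained sketch: a wrapper that scores each candidate generation and retries or refuses anything over a risk threshold. The keyword scorer is a toy stand-in for a trained safety classifier, and the threshold and refusal text are invented for the example.

# Minimal sketch of a generation-time safety filter.
# score_unsafe() is a toy stand-in for a trained safety classifier;
# a real system would use a learned model, not keyword matching.
from typing import Callable

UNSAFE_TERMS = {"build a weapon", "self-harm"}  # toy examples only

def score_unsafe(text: str) -> float:
    """Return a risk score in [0, 1]; here, crude keyword matching."""
    hits = sum(term in text.lower() for term in UNSAFE_TERMS)
    return min(1.0, float(hits))

def safe_generate(generate: Callable[[str], str], prompt: str,
                  threshold: float = 0.5, max_retries: int = 2) -> str:
    """Sample from the model, rejecting generations that score too unsafe."""
    for _ in range(max_retries + 1):
        candidate = generate(prompt)
        if score_unsafe(candidate) < threshold:
            return candidate
    return "I can't help with that."  # fall back to a refusal

# Usage with a dummy model standing in for the real LLM:
dummy_model = lambda p: f"Here is a friendly answer to: {p}"
print(safe_generate(dummy_model, "What's a good breakfast?"))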

Tim Winkler:

Yeah, it's really fascinating. Um, Mike, I'm kind of curious on your input on this too, just kind of, you know, I mean, it's a very similar, a lot

Mike Gruen:

of, yeah, it's just, it's funny. Cause, uh, as, um, as you were talking, I was thinking about my own story, my own journey, like how I went into natural language processing, and then I was on a cybersecurity product where we were using inferential statistics to try and find bad actors and stuff like that. And, like, the whole idea of, for me, I went to school for computer science. I minored in English, basically poetry. There's not a good intersection there. Um, but, um, I do agree with the idea of, like, if you can find certain areas that, like, you're really passionate about. Like, when I was doing the stuff with the natural language processing, um, it was really for intelligence communities to find, like, bad actors, uh, looking at different things. Um, I was passionate about that and being able to apply my sort of software engineering expertise to that. So I agree with that sort of, like, if you can find those niches that really interest you. And I don't know that you need just one. I think it can change over time. I've been working for a long time, and I've moved from thing to thing to thing. Um, I think that's an important part of a career: finding, you know, being able to find the next thing that you want to work on or the next area, um, once you start maybe losing interest in a particular area. I know other people that have, like, stuck with the same thing for years and years and years, and they've never lost interest in it, and I can see that also happening. But it's also, I think, having a broad skill set that you can apply to very specific problems is a, it's a good way

Tim Winkler:

to go. Are there any verticals that you would say, like, you know, being a specialist is, is preferred, right? Um, uh, healthcare, anything, anything that comes to mind, uh, that you all would say that you've heard from colleagues or friends that have been truly very dialed in and it's proven beneficial for them?

Andrew Gamino-Cheong:

I know, um, a couple of people from some grad school times who have kind of combined a lot of background in the medical space with a deep understanding of computer science. And there's so many amazing applications, actually, of, um, even taking computer science concepts that are now almost a decade old, and they're still actually quite novel when applied to computational biology problems. Um, you know, like sequencing of different things, and now looking at applying some stuff around, like, DNA sequencing, and the algorithms developed for that sequencing, to actually stuff in tumors themselves. And, you know, what you realize is that there's actually so much good work to be done there that isn't, it's not viewed as cutting edge anymore, right? Even the idea, I knew someone who was taking the concept of word embeddings, which should now feel ancient in terms of NLP technologies, and applying that actually to, like, sequences of DNA that they were collecting, right? Because if you're talking about the context of things that are next to things and how that can actually represent stuff, they were actually able to learn a lot and improve their own, um, topic-model kinds of algorithms in the computational bio space using what is practically ancient technology now in the NLP space. And I found that to be, like, a fascinating application. What about you, Maggie, anything that comes to mind?

Maggie Engler:

Um, that is fascinating. I think that, um, I have seen, uh, certainly sort of very specialist, uh, individuals, usually, like, PhDs, uh, be extremely successful at sort of the cutting edge of AI research. Um, I think that, um, uh, I've worked in the past with people who have done, for example, like, large language model research for, you know, at this point, years longer than most other people in the field. Um, and I think that does give you an edge, but it does seem to be, like, a very small slice of the sort of total opportunity. And I think that health is a great example of a vertical where all of this specific domain knowledge is really, really helpful, and, um, like Andrew said, um, uh, just the application of even a technique like word embeddings, um, could produce novel knowledge. Um, so I think there's, I think there's a mix, and I think, uh, that was what I was trying to get at at the beginning, uh, is that I think that in a lot of cases there's room for specialists and, um, folks who, who have that more broad, um, knowledge, but in particular, uh, sort of the cross-pollination of ideas, um, seems to hold a lot of value.
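
To make the word-embeddings-on-DNA idea concrete, here is a minimal sketch of the standard trick: split each sequence into overlapping k-mers and treat the k-mers as "words" for a word2vec model. The sequences and hyperparameters are toy values, gensim is assumed to be available, and this shows the general technique rather than the specific system Andrew mentions.

# Minimal sketch: word2vec over DNA k-mers ("dna2vec"-style).
# Sequences and hyperparameters are toy values for illustration.
from gensim.models import Word2Vec

def kmers(seq: str, k: int = 3) -> list[str]:
    """Split a DNA sequence into overlapping k-mers, the 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

sequences = ["ATGCGTACGTTAGC", "ATGCGTACCTTAGC", "GGGCATTACGTAAT"]
corpus = [kmers(s) for s in sequences]  # each sequence becomes a 'sentence'

# Skip-gram embeddings: k-mers that occur in similar contexts get similar
# vectors, the same intuition as words in ordinary text.
model = Word2Vec(corpus, vector_size=16, window=4, min_count=1, sg=1, epochs=50)

print(model.wv["ATG"][:4])                   # embedding for the k-mer "ATG"
print(model.wv.most_similar("ATG", topn=2))  # nearest k-mers in embedding space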

Tim Winkler:

Yeah. And I think with the, you know, the, uh, generative AI models, like the, some of these foundational models that have come out in the last year, um, these are really kind of spicing things up, I think, and, and adding to this debate. Because, and Maggie, maybe we were talking about this on our initial discovery call, about, you know, um, the power of these tools and how they can be applied to not just a computer science, you know, graduate, but, like, folks in marketing or folks in finance or something, where they can now consider themselves going down this path of an, of an AI opportunity that maybe wasn't quite as present before. So I'd love to just kind of pull on that thread a little bit, and, and, and maybe, you know, starting with, like, how these have impacted y'all's work, or, you know, how, how do these kinds of models limit or enhance career prospects, especially for those folks that are coming out of school and, um, you know, exploring that next opportunity

Andrew Gamino-Cheong:

for themselves? I'll let you go ahead, Maggie, first, since I went first on some of the last few questions.

Maggie Engler:

Uh, yeah, as we talked about, I think it's a really interesting time right now because of these foundation models. Um, one of the things that strikes me, so in my own workflow, I've started integrating coding assistants, things like that, um, that don't necessarily produce anything for me that, like, I, I wouldn't have known how to do, but can, uh, make things more efficient, um, and make it faster to do things like documentation. Um, but for me, the big question, right, is, uh, how will this change kind of future work opportunities, even in the field? I do think that the argument that I've made at this point in time is, um, that it will not necessarily, um, replace entirely a lot of, a lot of the professions that people are kind of worried about, um, losing, um, but that it is always going to be sort of an advantage, like any tool, uh, to be able to use generative AI well and, um, understand its limitations and understand its capabilities. Um, actually, my, my, um, uh, colleague from Twitter, uh, Numa Dhamani, and I, um, have a book coming out, uh, actually in a couple weeks with Manning, an introduction to generative AI. Um, but, uh, shameless plug. Um, but in that we talk a lot about, um, sort of the things that people, uh, do use it for already, and then, like, things that they really shouldn't be using it for. There was a super famous example of a lawyer who, um, submitted a legal brief, uh, written with ChatGPT, and didn't really, uh, didn't even fact-check it, um, and so it caused this whole kind of delay in the case, and I think he was penalized professionally in various ways. Um, because ultimately, uh, people are still going to have sort of the responsibility to ensure the quality of their output. But if you're able to, uh, produce, uh, writing that is, um, you know, at the quality that it needs to be, and, um, you're able to do that much faster, much cheaper, that's always going to be an edge. I

Mike Gruen:

mean, I think, just jumping in a little bit on that, like, the, I think back to the nineties when the web was starting. Um, I think what we've seen is really an enablement of certain career opportunities that didn't exist. So, like, when I first started, you had some artists and some graphic designers on staff that were sort of helping to do things. But now, like, I've worked with people who are just straight-up graphic designer artists who can now do a whole web application front end, the whole, you know, and most of the logic to it. And I think that, like, right, we, the software engineers, computer science, we, we build these tools that then enable others to use their special talents, whether they're artists or whatever, to be able to sort of take it to the next level. And I think that that's what AI is going to be able to do: sort of have an impact on other careers that we can't even think of, um, and enable them to be more efficient in their jobs.

Andrew Gamino-Cheong:

You know, one thing that I always find funny is that we call it prompt engineering, but oftentimes it feels more like prompt art, right? It's more like there are some funky things that can happen depending on what prompts you put in there. I know, Maggie, this is, like, the big problem that you're trying to actually solve for. But I think it is amazing, because I've seen some very incredibly intelligent people who don't have a technical background do some really amazing things with these algorithms and these generative AI systems. I do think, though, the limitations of them aren't well understood. You know, for some people it's like, oh yeah, I had it, like, calculating all this stuff for me. I'm like, oh, it really doesn't actually have an understanding of math. And, like, if you didn't check the math, you could get into real trouble in doing that. I think that's one of the biggest challenges people have, even on, like, a day-to-day basis, right? It's knowing, like, what are the things that it can do, what it can't do. You know, if you ask it stuff, most of its information, its core training data set for ChatGPT, only goes up to 2021, right? It has some other ways of adding in some other context about some things, but, like, that itself could be a huge deal for some use cases, and they've got a small disclosure now in, like, the left-hand corner for it. You know, our point of view is always that, you know, you need to be doing a lot more thoughtful thinking about what tasks it can and can't do. And I worry that a lot of people don't understand it well enough, and those limitations, um, that itself can bring some risks.
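
A small example of the "check the math" point: because a language model predicts text rather than computing, any arithmetic it asserts can be re-verified deterministically. This sketch extracts simple "a op b = c" claims from a model's output and recomputes them; the regex and the sample output are invented for illustration, and a production system would need far more robust parsing or a tool-use setup.

# Minimal sketch: re-verify simple arithmetic claims in LLM output.
# The regex only handles "a op b = c" with +, -, *, /; real model output
# would need a much more robust parser.
import re

CLAIM = re.compile(
    r"(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)\s*=\s*(-?\d+(?:\.\d+)?)")

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def check_math(text: str) -> list[tuple[str, bool]]:
    """Return each arithmetic claim found and whether it is actually true."""
    results = []
    for a, op, b, claimed in CLAIM.findall(text):
        actual = OPS[op](float(a), float(b))
        ok = abs(actual - float(claimed)) < 1e-9
        results.append((f"{a} {op} {b} = {claimed}", ok))
    return results

# Hypothetical model output containing one wrong claim (8 + 5 is 13):
output = "The subtotal is 17 * 12 = 204, so with 8 + 5 = 14 items..."
for claim, ok in check_math(output):
    print(("OK    " if ok else "WRONG ") + claim)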

Tim Winkler:

Yeah, it's, it's an interesting space. It's like handing, you know, somebody the keys to, like, a Lamborghini and not knowing exactly, you know, it's capable of a lot of things, but, you know, half of the bells and whistles you don't even know about. So it's still so early on, just to kind of understand, like, some of the hacks and the tips, how to best use the tools. Um, so it'll be interesting. But I think with that, you know, with, with where it's at right now, I, I, I'm curious to know, um, you know, we'll say for, like, data science bootcamps and things of that nature, right, are those already being crafted, uh, you know, with a real emphasis around AI? And have you all seen any of that, or, or, or just generally in, in, in pure academia at large, um, are you seeing these programs being built around career paths within AI, and what does that look like?

Andrew Gamino-Cheong:

I know there's a lot of focus on training programs to learn how to use them. I haven't necessarily seen that, um, in, like, academia itself. There's still very much a desire to teach how these systems work under the hood, partly because there's now so much focus on how to mitigate the risks, and you can really only do that once you do understand the underlying levels. And, like, you know, these models aren't yet explainable, and yet the necessity, potentially legally, for them to be explainable for certain use cases is so high. So I do suspect that'll be one of, like, the biggest areas of focus. Um, on that research side of things, I think one of the, the challenges in one area as well, why sometimes I recommend, like, explore a multidisciplinary approach, is that there's fewer and fewer orgs who are working out at the kind of cutting edge of things. And that's partly because these models are so large, you need so much data, so much compute, that there is kind of a concentration, right? If you want to work on a truly large language model, you need a billion dollars, or at least a hundred million dollars, in funding to be able to really support that kind of stuff, and that's only going to be accessible to a smaller and smaller number of organizations. You know, I actually knew some professors in grad school who used to be some of the world leaders in machine translation, but they no longer have access to actually the algorithms and the data and the compute to do that still-cutting-edge work without actually then just associating with a lab working in big tech. So I think that itself can pose challenges to accessibility for, like, those specialists in academia versus in big tech. Is that because

Mike Gruen:

I'm sorry, is that because the new cutting edge just requires so much more compute and the access to it? Or is it more nefarious, I guess, is it more that big tech is actually gobbling it up and preventing it from being done in academia or something like that? You'd rather not

Andrew Gamino-Cheong:

say. I don't think they're deliberate in gobbling it up. Like, they're not trying to be, I'll say, predatory in that sense. But they are the only ones who can literally be like, oh, we can spend a hundred million dollars to train GPT-5, right? No university can throw that kind of resources at that.

Tim Winkler:

Sorry, Maggie. I think you were going to say something before we rudely interrupted you.

Maggie Engler:

No, um, I was just going to add that, uh, that is absolutely true, and I do think that is a problem, um, for the field, having thought a lot around, um, sort of the AI safety space. Well, first of all, I guess point one is that it is harder to do cutting-edge work because of the resource constraints that Andrew brought up. I think that's starting to be, um, remedied a little bit through, like, the sort of open source development, um, ecosystem. Um, so it's going to, at least for a long time, cost a huge amount of money to do foundation model development. But, um, I've seen so many cool things around, like, um, replicating the performance of these huge large models in a smaller model, um, on, you know, $100 worth of hardware and things like that. So I think, um, yeah, we're starting to move in a direction where it's slightly more accessible, but not if you want to do the sort of cutting-edge research. Um, and so, like, it's going to be very accessible for, like, building applications and things like that, but not necessarily the type of work that, um, you'd like, um, sort of professors to be working on with respect to, um, some of the safety risks and things like that. Um, the other thing that I was going to mention is that I think for these big companies, there is kind of a situation where they're all racing for different resources, and that does, yeah, um, drive up the cost of development for other folks. Um, and I know that some leaders in the space have, um, proposed things like licensing: uh, if you want to have a model that's, you know, at GPT-4 level or higher, like, um, you would need to get approval, um, or a license for that, um, which is, I mean, I guess a good idea from a safety perspective, um, because you just have fewer people, um, at least legally, developing them. But even, even as a person who works in AI safety, I, I very much have, like, a reticence towards, like, any type of limitation around, like, who is allowed to, uh, develop them. And so, uh, I was just going to, uh, sort of, um, also reference that proposal, because I think it is interesting to see. And, um, Andrew, your Trustible, I know, is super involved with this, uh, in thinking around this. But, like, it will be very interesting to see, I think, how AI governance develops over the next few years.
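
For the point about replicating a big model's performance in a smaller one, here is a minimal sketch of the usual mechanism, knowledge distillation: a small "student" is trained to match the softened output distribution of a large "teacher." PyTorch is assumed, and the layer sizes, temperature, and random stand-in data are placeholder values.

# Minimal sketch of knowledge distillation (Hinton et al. style).
# Random tensors stand in for real data; model sizes are toy values.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(100):
    x = torch.randn(64, 32)            # placeholder batch of inputs
    with torch.no_grad():
        teacher_logits = teacher(x)    # the big model's "soft labels"
    student_logits = student(x)
    # KL divergence between softened distributions, scaled by T^2
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")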

Andrew Gamino-Cheong:

Yeah, there's really good questions on, like, the liability, right, who owns that. A big thing that, you know, we focus on is that it's really important to have models, like the ones that you guys are building at Inflection, you know, disclose what the risks are for something. But then there's no way you guys can really understand all the ways they can be used, right? And that itself presents a challenge. So even if you license out, actually, yeah, you're allowed to build these models, there still has to be a lot of responsibility on the groups who are actually deploying them to make sure that they're making an ethical decision, like, hey, are the benefits outweighing the risks, right? I can look at the risks declared by my model, and then I still need to decide whether those risks are appropriate or not for my use case, and how that kind of maps out. I do think, to tie it back to what we were talking about just a second ago as well, you know, academia, they love open source stuff, because then they can get access to actually do things at the edge of these models. But the danger there is actually, um, I think the worst uses of AI that we're worried about, they're not actually going to come from, like, an OpenAI system that has a trust and safety team with a ten million dollar a year budget looking at stuff. They're going to come from the open source systems. If you want to run a misinformation campaign, do some illegal shit with AI, you're going to use the open source models. And then the problem is, like, who's responsible for that? You know, what are the conditions there? And so there's been a couple of policy papers that came out earlier this week recommending that large frontier models actually not be open sourced at all, and that governments forbid that, which actually could, again, impact the ability for academia to be able to do some of their own frontier research on that. And there's a real kind of trade-off there. I mean, I think it's,

Mike Gruen:

as you guys were talking, I was sort of thinking about how, in the past, these types of big, expensive endeavors and new frontiers, space, nuclear technology, whatever, all started in the government. The government was the only one who could possibly have the budgets to do this. There wasn't an immediate commercial application for X, Y, Z. With AI, there's an immediate commercial use, and that's what's driving business to sort of be at the forefront of it. And therefore, I think government is playing catch-up, as opposed to in the past on some of these, right? Like, what stopped somebody from building a nuclear bomb in the past? Like, we, we figured it out: the government funded all that, and they put in all these regulations to make it really, really difficult for someone to do this. But for AI, that's just not the case; the forefront is commercial application. So I think it's interesting, as you guys were talking, sort of some things clicked there that I hadn't really thought

Tim Winkler:

about in the past. I think it's a good segue to, and, uh, Andrew, in our initial discovery call, we were kind of talking a little bit about, you know, the, the doomers out there, the utopians, and then you had a third one, the AI pragmatists. You want to kind of expand on that, uh, just kind of explain a little bit more of what you mean by that?

Andrew Gamino-Cheong:

Yeah. So, you know, like any media thing, media loves really, you know, uh, eye-catching headlines, like, AI is going to, you know, solve all of our problems, and you can read blogs from famous VCs about how AI is the solution to all of our problems. And you can also read, you know, we began this podcast talking about Skynet, right, AI-is-going-to-kill-us-all kinds of things. Those are great for headlines, but the danger is that that kind of distracts from actually trying to solve some of the real problems out there, right? You don't need to have military AI to still have AI harms. One of the first instances that almost set off this entire industry now of AI safety research was around the use of AI to recommend prison sentences. ProPublica did a great exposé about, like, hey, this is biased towards a certain group. Underlying that actually was a discussion about how you measure fairness in an algorithm, and arguably an ethical debate about what definition of fairness was used to optimize things, right? AIs are trained to maximize some value; if that value arguably has an ethical aspect to it, that needs to be discussed. You know, the truth is that we're never going to be able to pause all of AI, nor should we really assume that AI can really solve all of our problems, cause there's a lot of things that frankly are beyond its realm. And so the question is really, let's assume AI is going to be everywhere pretty quickly. You know, how do you actually set up the right conditions to do that responsibly? You know, we can't really prevent it, and so what are the policies that we should adopt instead? One example of that, and, you know, this may sound a little bit cynical, is it's always going to be cheaper and faster to generate content with AI. And so trying to say we're going to watermark everything is going to be really difficult, and also, again, with any open source system, any watermarking thing can be evaded. And so instead, I also say, like, let's look at what, quote, certified human content looks like. You know, it's like the equivalent of an organic label on something. Let's define the criteria for that and actually get that set up, because there's going to be a lot of interest and demand to say, like, yeah, I will only buy journalism that's certified human content, right? Or, like, certain unions will want to enforce a certain level of that kind of stuff. Um, you know, that's just facing the reality that probably the majority of content that we'll see coming out within five years, and that's maybe even kind of conservative, will be AI generated.

Maggie Engler:

Yeah, I think it's also so important to, right, think through kind of the context in which all of this content is appearing, and, um, what we really need to do as kind of a society, um, in order to respond. Um, I guess that, that might be your definition of pragmatism, um, but it reminds me, I was recently, um, at an event, uh, organized by GIFCT, the Global Internet Forum to Counter Terrorism, and talking about a lot of this stuff, generative AI, and how, um, we've already started to see sort of deepfakes of political figures and things like that being used for various, um, purposes. And when it comes to something like watermarking, that to me strikes me as, like, um, an example of a technocratic solution where, right, like, even if you're saying, like, okay, we're setting aside open source models, all of the big AI generation, uh, models have agreed that they're going to all watermark their content, but then ultimately, like, how many people who are scrolling through X or, or, like, other social media platforms are going to be like, oh, I wonder if, like, this clip of President Biden is, is real? Like, let me go just check it against all these different watermarking systems. No one's going to do that. You know, 1 percent of people, less than that, are going to do that. And so I do think, like, what I'm most interested in is exactly what Andrew is getting at: like, how we can set ourselves up for this, um, in a way that is, um, kind of as, um, productive as possible, and, um, uh, sort of realistic around what has already happened, and not trying to stuff the genie back into the bottle, so to speak.

Andrew Gamino-Cheong:

One of my favorite, um, I'll say, like, pragmatic AI ideas I heard out there was, instead of schools and teachers trying to prevent people from using, you know, GPT to generate their stuff, which is, you know, that's a, that's a losing battle, it's going to be impossible to ever, like, truly restrict it, they said, all right, you have to turn in one copy that is generated by GPT, and you have to disclose what your prompt was and all the stuff you did, and you also have to hand in the handwritten version as well. You know, that shouldn't necessarily reflect that. I thought that was, like, really pragmatic, because actually they'll end up with a whole corpus of, like, 30-plus essays written by GPT to then compare against all the ones that weren't. And it's kind of like, you know, use this as a tool, but still have to, like, show that original and creative side of things. Those are the kinds of solutions I think we just really need to be talking about more, instead of just, like, banning this, because I think that'll be just a waste of time and effort. Yeah,

Mike Gruen:

the one that I saw that was also classroom-related was the idea of, like, we're just going to change what's homework and what's classwork. Rather than going home and writing this paper on your time, we're going to use the class: like, read the book at home, and if you want to use ChatGPT or whatever to come up with ideas or whatever, fine, but, like, we're going to actually use class time to write the paper. Which I thought was an interesting way of doing it, to make sure that people get the concepts and

Tim Winkler:

stuff like that. Yeah, I think it's all a pretty fascinating conversation at large. I mean, yeah, everybody's gone through that, that point, uh, probably at some point in the last six months or year: is my job in jeopardy? Is my role going to be one that's replaced? And, you know, I think one of the biggest things that we've always kind of preached is just, like, it should be a part of everybody's job; it's just a matter of, like, how do you use it as a tool in your tool belt to become more efficient, or what have you. But, um, yeah, it's still so early. I'm very excited to see how things play out over the years, but, um, this is a great kind of starting point to keep the conversation moving. I love the pragmatic outlook on this, too. I think it's a, it's a really fascinating, uh, addition, Andrew. But, um, yeah, why don't we, um, put a bow on it and transition over to our final segment, uh, of the show. So this is going to be the Five Second Scramble, where I'm just going to ask each of you a quick series of questions. Uh, try to give me your, your best response within five seconds. Um, some business, some personal. I'll start with you, Andrew, and then I'll, I'll jump over to you, Maggie. So, um, Andrew, you ready? Yeah, let's do it. All right. Uh, explain Trustible to me as if I was a five-year-old. Okay. Okay. Okay.

Andrew Gamino-Cheong:

We help you do all the legal paperwork for AI. How would you describe

Tim Winkler:

the culture at

Andrew Gamino-Cheong:

Trustible? I mean, we're an early-stage company, so it feels like a family of friends, a family of friends working together. I don't know if that makes sense, but

Tim Winkler:

I got it. What, uh, what kind of technologists would you say thrive at, at Trustible?

Andrew Gamino-Cheong:

One who is comfortable kind of learning stuff on their own. There's a lot of unknowns for what we're doing on the regulatory and AI front. Very cool.

Tim Winkler:

And what would you say, uh, are some exciting things that folks can gear up for, uh, heading into 2024?

Andrew Gamino-Cheong:

Yeah, I mean, be ready. Uh, the number of new applications of AI we're going to see is going to be explosive, I think.

Tim Winkler:

Nice. If you could have any superpower, what would it be and why?

Andrew Gamino-Cheong:

Ooh. I'd have the ability to, um, go back in time, even just to re-observe things that happened in the past. Nice.

Tim Winkler:

All right, kiss, marry, kill, bagel, croissant, English muffin.

Andrew Gamino-Cheong:

All right, kill English muffin, uh, kiss a bagel, marry a croissant.

Tim Winkler:

Um, what's something that you like to do, but you're not very good at?

Andrew Gamino-Cheong:

Ooh, um, probably bike rides. I, I love to go on some trails, but I'm, like, I'm not particularly fast or athletic about it. So keep, keep that helmet on. Yeah, I crash a lot. What's,

Tim Winkler:

what's a charity or corporate philanthropy that's near and dear to you? Um,

Andrew Gamino-Cheong:

my wife and I have volunteered at a dog shelter, um, here in DC. Cool. Very

Tim Winkler:

nice. What's something that you're very afraid of?

Andrew Gamino-Cheong:

Ooh, something I'm very afraid of. Uh, dairy. Definitely afraid of dairy.

Tim Winkler:

All right. I appreciate the honesty. Um, who is the greatest superhero of all

Andrew Gamino-Cheong:

time? Greatest superhero of all time. Uh, I've got a soft spot for Iron Man. Nice.

Tim Winkler:

That's the first time I've heard Iron Man on the show. That's good. I like

Andrew Gamino-Cheong:

that.

Tim Winkler:

All right, that's a wrap, Andrew. Uh, Maggie, are you, are you ready? I think so. All right, perfect. Uh, what is your favorite part about the culture at Inflection?

Maggie Engler:

Uh, I think my favorite part is that, um, Because this area is so new, like there's a lot of just openness to experimentation and, um, trying different things out. Very cool.

Tim Winkler:

What kind of technologists thrives at Inflection?

Maggie Engler:

Uh, quite a range. Um, but definitely people who are open to, um, iterating fast, but also kind of, uh, robust evaluators, um, and, and, uh, to borrow a term from, uh, cybersecurity, really like pen testing, and, and kind of relentless in terms of, um, trying to find all the chinks in the armor.

Tim Winkler:

Nice. Red, red team stuff. Um, what, uh, what can our listeners be excited about with inflection going into 2024?

Maggie Engler:

Oh, uh, I think we'll have a lot of, uh, improvements on the model side coming out. Um, so yeah, I can't say too much about it, but definitely stay tuned. Um, and, uh, the product, uh, Pi, um, we'll be, we'll be continuing to iterate on our, on our

Andrew Gamino-Cheong:

product.

Tim Winkler:

Cool. Excited for that. Uh, how would you describe your morning routine in five seconds? Um,

Maggie Engler:

I usually work out, uh, Peloton and, um, have like some kind of breakfast, like toast, simple, uh, toast, peanut butter, anything like that. What do you love

Tim Winkler:

most about living in Austin?

Maggie Engler:

Oh, I love Austin. My family's from Central Texas. Tailgating at UT, lots of sand volleyball; fun town.

Tim Winkler:

Cool. I'm going to flip it from what I asked Andrew. Um, what's something that you're good at, but you hate doing? Oh,

Maggie Engler:

um, that is interesting. Um, let's see. I'm, I'm very good at, there are certain, like, household chores that I have, like, um, kind of a systematic approach to, but don't, like, enjoy doing. So, um, like, I don't know, um, like, big loads of laundry, I guess. I

Tim Winkler:

hate laundry. Um, what, well, if you could live in a fictional world from a book or a movie, which one would you choose? Hmm.

Maggie Engler:

Wow. Um, I would love to live in, um, like, the kind of Gabriel García Márquez, like, magical realism, um, based world, so, like, kind of a South America tropical area, but, like, with magic.

Tim Winkler:

Sounds awesome. What's the worst fashion trend that you've ever followed?

Maggie Engler:

Oh gosh, um, crepe pants.

Tim Winkler:

Well played. Um, what was your dream job as a kid?

Maggie Engler:

I was actually just talking about this with someone. I really wanted to be a farmer, uh, for a long time as a kid, because, um, my grandpa was a farmer. Um, and I thought, like, pigs and sheep and all that were really cute.

Tim Winkler:

That's such a wholesome answer. Um, and we'll end with your favorite Disney character.

Maggie Engler:

Um, probably Mulan. Uh, I feel like she was early to the, like, strong female lead, uh, game. And, um, yeah, is just a badass.

Tim Winkler:

Yeah, she's a badass. And great soundtrack too. Great soundtrack also. All right, that is a wrap. Thank you both for participating and, uh, joining us, uh, on the podcast. You've been fantastic guests. Uh, we're excited to keep tracking the innovative work that you all will be doing with your companies, building in the AI space. So appreciate y'all spending time with us, uh, on the pod. Thanks for having

Andrew Gamino-Cheong:

us. Thank you.
