The AI Engineer’s Dilemma: Mastery or Versatility? | The Pair Program Ep37
Join us as we dive deep into the world of artificial intelligence through the eyes of two industry leaders. In this episode, we're joined by Maggie Engler, a member of the technical staff at Inflection AI, and Andrew Gamino-Cheong, the CTO and Co-Founder of Trustible.
Maggie and Andrew share their first-hand experiences of working in the AI engineering space, offering insights into the challenges and innovations that shape their daily lives. They explore the critical question of whether AI engineers should strive to be jacks of all trades or masters of one specific skill, shedding light on the diverse skillsets demanded by this dynamic field.
But that’s not all – this episode also delves into one of the hottest topics surrounding AI: ethics. Maggie and Andrew engage in a thought-provoking discussion about how AI should be used, the ethical dilemmas faced by engineers, and the inherent limitations in AI’s application.
About the guests:
-Maggie Engler is a technologist and researcher focused on mitigating abuse in the online ecosystem. She currently works on safety for large language models at Inflection AI.
-Andrew has spent his career working at the intersection of policy and AI. Prior to founding Trustible, Andrew was a machine learning engineering tech lead at FiscalNote. Now, at Trustible, he's flipped the script and is working to apply policy to the AI landscape.
Sign-Up for the Weekly hatchpad Newsletter: https://www.myhatchpad.com/newsletter/
Transcript
Welcome to The Pair Program from hatchpad, the podcast that gives you
2
:a front row seat to candid conversations
with tech leaders from the startup world.
3
:I'm your host, Tim Winkler,
the creator of hatchpad.
4
:Mike Gruen: And I'm your
other host, Mike Gruen.
5
:Join us
6
:Tim Winkler: each episode as we bring
together two guests to dissect topics
7
:at the intersection of technology,
startups, and career growth.
8
:And welcome back to another
episode of The Pair Program.
9
:I'm your host, Tim Winkler,
joined by my co host, Mike Gruen.
10
:Mike, uh, given our episode today is
centered around AI, I was inspired
11
:to think Um, back through some of the
greatest AI references in Hollywood
12
:over the years and some of the, some
of the names that kind of popped up,
13
:some of the characters that popped up
were Agent Smith from The Matrix, um,
14
:Skynet's Terminators from The Terminator,
Data from Star Trek, and then WALL-E.
15
:So my question for you to kick
things off, if we're going to set up
16
:a battle Royale between these four
AI characters, who would be the,
17
:who'd be the last bot standing in
18
:the ring?
19
:Mike Gruen: I mean, Skynet, they time travel.
20
:I think that trumps.
21
:I think that trumps everything.
22
:Andrew Gamino-Cheong: Data time
traveled once or twice, though.
23
:No.
24
:Yeah.
25
:Mike Gruen: But through like wormhole
like that, I feel like that was
26
:more discovery, not invention.
27
:But I would like to point out, and
I think I've mentioned that you
28
:left off that list, WOPR, the
original Skynet from WarGames.
29
:So it's good to start.
30
:That's not really a correction.
31
:I'm not correcting.
32
:I'm just adding to the list.
33
:But just adding.
34
:Yeah.
35
:What about you?
36
:Which one do you think
37
:wins?
38
:Tim Winkler: Well, I wanted to kind of keep on,
on theme, so I just plugged it into
39
:ChatGPT, and the answer was, in a, in
a straight-up battle royale, my money
40
:might be on Skynet Terminators due to
their combat focus and relentlessness.
41
:Agent Smith could be a close second
if he manages to take over the other
42
:bots or manipulates the environment.
43
:Data and WALL-E are strong contenders,
but they might be limited by their
44
:programming and lack of aggressive intent.
45
:But then again, WALL-E might
just charm everyone into laying
46
:down their arms, who knows?
47
:Interesting.
48
:Andrew Gamino-Cheong: That's
49
:Mike Gruen: a good answer.
50
:That's a much better answer.
51
:Uh, well, more, well,
much more thought out,
52
:Tim Winkler: clearly.
53
:Yeah.
54
:But yeah, I kind of like the
WALL-E just charming folks, like,
55
:let's just, let me win this one.
56
:Nice.
57
:Um, good stuff.
58
:All right, well, let's let's give our
listeners a preview of today's episode.
59
:So today we're going to dive into a
discussion around AI engineering and the
60
:debate on should AI engineers kind of be
jacks of all trades or masters of one.
61
:Uh, and we'll tackle everything
from, you know, career paths to
62
:ethical kind of quandaries and,
and, uh, and a few other things.
63
:But, uh, we've got two very qualified
guests joining us, uh, Andrew
64
:Gamino-Cheong and Maggie Engler.
65
:Andrew is the CTO and co-founder of
Trustible, uh, an AI governance startup,
66
:and has over seven years of experience
as an AI/ML engineer and architect at
67
:FiscalNote prior to starting Trustible.
68
:And he's very passionate about
the intersection of AI and policy.
69
:Maggie is an experienced AI engineer,
currently at Inflection AI, an
70
:AI studio based in San Francisco,
creating a personal AI for everyone.
71
:And prior to Inflection, Maggie
spent two years at Twitter, now X, building
72
:models to detect policy violations
and has taught data science courses
73
:at UT Austin and Penn State Law.
74
:So, before we dive into the
discussion, first off, thank
75
:you all for joining us today.
76
:But we will kick things off with
a fun segment called Pair Me Up.
77
:This is where we pretty much go around
the room and give a complimentary pairing.
78
:There we go.
79
:Teed up.
80
:Um, Mike, you always kick us off.
81
:What do you got for us today?
82
:Keeping it
83
:Mike Gruen: nice and simple.
84
:Going back to the basics with
food, hash browns and sour cream.
85
:Um, I was recently reminded that
there was a restaurant in, um,
86
:well, Rockville, South Rockville,
North Bethesda, but really South
87
:Rockville, uh, called Hamburger
Hamlet, uh, for a long, long time.
88
:When I first moved down here, that
had, that was, it was called those
89
:potatoes or something like that.
90
:And it was a hash browns
and sour cream and it was.
91
:Went excellent with the burger
or whatever you're having
92
:Tim Winkler: for dinner.
93
:Yeah, I wasn't seeing that coming.
94
:The sour cream threw me off a little bit.
95
:The hash browns.
96
:Ketchup maybe.
97
:Uh, sour cream.
98
:And I'm
99
:Mike Gruen: not even
a huge sour cream guy.
100
:But sour cream and hash browns,
it was, it's a nice pair.
101
:Tim Winkler: Okay.
102
:I'll have to take your word on that.
103
:I love sour cream too.
104
:I've never tried it on hash
browns, but we'll give it a shot.
105
:I'm
106
:Mike Gruen: talking like the potato.
107
:I'm not talking like
the round potato ones.
108
:I'm talking like the like.
109
:Where it's the the hash browns
are like the breakfast, like
110
:the skinny, shredded potato.
111
:Tim Winkler: Yeah, yeah, I gotcha.
112
:Yeah, I'm following you.
113
:I'm just thinking of a McDonald's
hash brown for some reason.
114
:That's what's coming to my mind.
115
:Andrew Gamino-Cheong: I see.
116
:I'm getting hungry now.
117
:Mike Gruen: Good thing.
118
:It's Friday afternoon.
119
:Yeah,
120
:Tim Winkler: I'll, I'll, I'll jump in.
121
:So, um, my, my pairing's going
to be fitness and fellowship.
122
:So earlier this year, I joined this
kind of workout community in my town.
123
:It's called F3.
124
:It stands for faith,
fitness, and fellowship.
125
:Um, it turns out there's actually
hundreds of these kind of small
126
:outposts across the country and actually
some, some global locations as well.
127
:But essentially what it is, it's small
workout groups of Anywhere from like
128
:three to 20 guys that meet up a few days
each week, usually pretty early in the
129
:morning for like a 45-minute HIIT workout.
130
:And for me, it's kind of been a valuable
way to combine community and exercise.
131
:And so like the fellowship piece
of it, it makes the fitness part,
132
:in my opinion, more enjoyable
and I guess more sustainable.
133
:You kind of get a guilt trip if you
don't show up or people, you know,
134
:kind of rag on if you're, if you keep
missing a couple of days consecutively.
135
:So.
136
:Um, another perk of it for
me has been to my family.
137
:We live in Northern Virginia, but we, we
have a place in Chattanooga, Tennessee
138
:that we try to come to, you know, a couple
of months out of the year, and especially
139
:where I'm recording from today, um, but,
uh, there's little F3 post here as well.
140
:And so it's a way to just kind of quickly
plug right into the local network here,
141
:kind of get a workout in, um, but also
kind of meet some, some people as well.
142
:So.
143
:That's, uh, that's my pairing,
um, fitness and, and fellowship.
144
:Um,
145
:Mike Gruen: definitely agree.
146
:I went to the gym the most when
I had a coworker, he and I,
147
:it was like this, like, he was
like, all right, tomorrow at 6am.
148
:I was like, I, I have no choice.
149
:Like you've already,
we've set the challenge.
150
:So yeah.
151
:I'll see you there tomorrow at
152
:Tim Winkler: six.
153
:Yeah.
154
:I mean, gyms have capitalized on
that community style, like
155
:Orange Theory is another one.
156
:Like people want to.
157
:Get those splat points.
158
:They want to, you know, post it
through their, the little, the little
159
:network, the app and everything.
160
:So it's, it's, it's
pretty, it's pretty genius.
161
:Um, but, um, definitely something
that kind of keeps me motivated
162
:and keeps me coming back.
163
:Um, cool.
164
:So let's pass it along to our guest.
165
:Um, Maggie, I, I'll start with you if
you want to give us a quick intro and
166
:Maggie Engler: tell us your pairing.
167
:Um, well, I'm Maggie.
168
:I'm in Austin, Texas, and y'all
already introduced, uh, me a little
169
:bit, but my pairing, I'm also kind
of moving into this fall mode, even
170
:though, um, it really has only just
started cooling off, uh, down in Texas.
171
:Um, and so I was thinking about also,
uh, food related, um, my pairing is
172
:coffee and pie because I love, uh, like
Thanksgiving, obviously, and having food
173
:and family, but what I really love is
like leftover pie and then having that
174
:for breakfast, uh, with like black coffee.
175
:To me, that is like the perfect breakfast.
176
:That's
177
:Tim Winkler: awesome.
178
:Now I'm salivating because pie is
one of my favorite desserts of all
179
:Mike Gruen: time.
180
:Dessert for breakfast is great.
181
:This morning, I had my leftover
bread pudding for breakfast, so.
182
:Tim Winkler: So do you, your coffee,
you just go in straight black
183
:coffee or do you do like a flavor?
184
:I'm just going straight black coffee.
185
:Maggie Engler: Nice.
186
:Yeah, and I actually, I don't
really do like iced coffee,
187
:which is kind of unusual.
188
:Uh, especially when it's hot out,
but I'm pretty much just, uh,
189
:yeah, old fashioned that way.
190
:Tim Winkler: I love it.
191
:Coffee and pie getting
geared up for Thanksgiving.
192
:Um, awesome.
193
:Well, thanks again for joining us,
uh, Andrew, how about yourself?
194
:Quick, uh, intro and you're pairing.
195
:Andrew Gamino-Cheong: Yeah.
196
:Uh, my name is Andrew.
197
:Really excited to be here.
198
:Um, I'm actually calling in from DC.
199
:So a close, uh, I know for some
of the other places you guys were
200
:talking about for me, I'm actually
going to go with like a really great,
201
:a competitive strategy board
game and a craft cocktail.
202
:There's a lot of places that I love
to go to in college that had like,
203
:you know, board games and drinks.
204
:And then during the peak pandemic,
my wife and I, I think bought
205
:every good competitive two
player board game out there.
206
:And then some of those pandemic
nights where you couldn't go out
207
:and do anything, we'd make cocktails
and play those for hours on end.
208
:So I think something about the
pairing of those that just work really
209
:well, getting vibes on both sides.
210
:So
211
:Tim Winkler: nice.
212
:That sounds awesome.
213
:You have a favorite cocktail or board
214
:Mike Gruen: game, board game,
215
:Andrew Gamino-Cheong: board game.
216
:There's one, um, 7 Wonders Duel.
217
:It's a dedicated version for two players.
218
:Uh, my wife and I got competitive in
that real fast and that was amazing.
219
:Mike Gruen: Nice.
220
:Cheers.
221
:My wife and I was, uh, Starfarers
of Catan is our, uh, was
222
:our go to for a two player.
223
:So, yeah, love that.
224
:Tim Winkler: Yeah, we were having a
debate, uh, at Hatch, uh, not long ago
225
:about the, the GOAT board game out there.
226
:And, um, Clue came up quite a bit.
227
:Clue was, was one of, uh, uh, group
favorite, but it's not really a
228
:two person thing, but Catan was
definitely also top of the list.
229
:All right, good stuff.
230
:Uh, well, again, thanks
for joining us, Andrew.
231
:And, uh, we'll, we'll transition into,
uh, the heart of the, the topic here.
232
:So, uh, as I mentioned, we're going to
be talking about, you know, depth versus
233
:breadth as it relates to AI engineering.
234
:Um, and we want to kind of tackle this
from a few different perspectives.
235
:Um, Andrew, you know, why don't
you kind of lead us off here?
236
:You know, you were a double major
on computer science and government.
237
:Sounds like it played a part in
your career path as an engineer.
238
:So what are your kind of thoughts on
the topic of specialization versus
239
:Andrew Gamino-Cheong: generalization?
240
:Yeah, happy to dive into that.
241
:So as you mentioned, in undergrad,
I double majored in both political
242
:science and computer science.
243
:Honestly, at the time, I was unsure
whether I wanted to become a lawyer
244
:doing stuff in the tech space, or
if I want to go into the tech space
245
:and do stuff related to the law.
246
:I ended up choosing more of the latter
because I always had these dual interests.
247
:Um, yeah.
248
:Really partly informed by, you know,
the kinds of things I did growing up.
249
:You know, I was the biggest fan of The
West Wing, which really shaped my whole view,
250
:and started to come to DC for college.
251
:Um, but I also loved watching
every sci fi show out there.
252
:You know, I watched all of Star Trek
and watched Data and all these cool
253
:ideas about AI, um, and the impacts
those could have on society.
254
:And so what I was always thinking about is
actually how could we apply these awesome,
255
:powerful ideas in AI to this space?
256
:You know, I always saw a lot of.
257
:Similarities in the kinds of logical
things that actually are embedded in
258
:laws, you know, these policies, these
ideas, there's logical principles,
259
:there's interpretations, and how that
could actually be perhaps assisted by AI.
260
:I think like a lot of people,
I always tried making my own
261
:like, hey, could you create an AI
that could actually interpret laws,
:I that could actually interpret laws,
make recommendations based on that.
263
:I think now I've got a much
better sense of the ethical or
264
:safety challenges around that.
265
:Um, but my advice sometimes, actually,
when I talk to former students,
266
:you know, pick two things, pick one
thing that will give you some technical
267
:skills, pick another that really
piques your intellectual curiosity.
268
:And you can find a really, really great
career path working at the intersection
269
:of those two, because you can basically
be always understanding the latest
270
:technologies and ideas and applying it
to the problems in your other space.
271
:And you can do that in both directions.
272
:And that's where I think.
273
:We see innovation happen the most
and you're taking ideas and solutions
274
:that have been developed in one
space and applying them to another.
275
:I think my career has been
really successful doing that.
276
:Um, that's definitely something
I recommend to everyone else.
277
:Tim Winkler: Yeah, that's
really sound feedback.
278
:Um, and advice, you know, this is
probably a good jump off to also like,
279
:you know, explain a little bit more about
trustable because obviously it sounds like
280
:this played a big part in you building
this, this business and what you all,
281
:the problems that you all are solving.
282
:Andrew Gamino-Cheong: Yeah.
283
:So right before I started Trustible,
I was working for a company
284
:that basically was using AI to
understand the policy landscape.
285
:We'd scrape every piece of proposed
regulation, legislation, initially in the
286
:U.S.
287
:and then globally.
288
:Use AI to try and identify what was
relevant, you know, which legislation
289
:was more likely to pass or not based
on individual voting preferences.
290
:You know, which things were more likely
to be significant to different industries.
291
:One of the biggest trends we saw
was on regulation for AI itself.
292
:So I remember, for example, reading
the first draft of the EU's AI Act
293
:almost two and a half years ago
when it was proposed and immediately
294
:starting to think through how would
this impact actually my life, right?
295
:I was on a day to day basis
proposing new ideas for AI.
296
:I was never having to go through our
legal team, though, to discuss them, to
297
:understand the compliance requirements
or the legal context around that.
298
:So I was literally starting to
think through, actually, how could
299
:I make sure that I don't have to
spend all my time dealing with
300
:just compliance and legal teams?
301
:Like, could I give them all the
information they need up front
302
:to help do the kinds of risk
assessments that these laws require?
303
:That was then the origin of Trustable.
304
:You know, our, our goal is to make
compliance with all of these AI focused
305
:regulations as easy as possible.
306
:Whether that's understanding even what
risk category a use case of AI falls into
307
:for the AI Act, conducting some of the
workflows to do like risk or impact
308
:assessments on individual risks, and
actually helping to just, um, helping
309
:organizations adopt ethical principles
and helping them actually document all
310
:the ways in which they are being ethical
with AI in a provable way so they can kind
311
:of build trust with their stakeholders.
312
:So this is actually, and we ourselves
are using AI as part of this, right?
313
:We've actually done a lot of
research now on AI incidents.
314
:We also have a large language model
system that can help, uh, kind of teach
315
:some of our customers what the different
requirements are and help them interpret
316
:and even document things in a smarter way.
317
:We won't use generative AI to
write your documentation, but we
318
:will actually evaluate how good
your documentation is with AI.
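To make that last point concrete, here is a minimal sketch, not Trustible's actual system, of the "evaluate documentation with an LLM" pattern Andrew describes; the rubric, prompt, model name, and function names below are illustrative assumptions.

```python
# Illustrative sketch only: score AI documentation against a fixed rubric with an LLM.
# The rubric, model choice, and prompt are assumptions, not Trustible's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Score the documentation from 1-5 on each criterion:
1. Intended use and limitations are stated.
2. Training data sources are described.
3. Known risks and mitigations are listed.
Return one line per criterion as 'criterion: score - reason'."""

def evaluate_documentation(doc_text: str) -> str:
    """Ask a general-purpose chat model to grade documentation quality, not to write it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system", "content": "You are a strict reviewer of AI governance documentation."},
            {"role": "user", "content": f"{RUBRIC}\n\nDocumentation:\n{doc_text}"},
        ],
        temperature=0,  # deterministic grading runs are easier to compare over time
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(evaluate_documentation("Model X predicts loan defaults. No other details provided."))
```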
319
:Tim Winkler: Yeah.
320
:And it's a, another topic that we'll,
we'll get a little bit deeper into, uh,
321
:later on in the conversation because
you had a, you know, some interesting
322
:perspective on like the, the doomers
and the utopians and finding, um, middle
323
:ground there as well when it comes
to AI, but, um, let's, uh, let's get
324
:Maggie's perspective on this as well.
325
:So, um, Maggie, I guess, uh, your,
your initial thoughts when you,
326
:when you think about AI, when it
comes to, you know, engineering
327
:as a specialist or, or generalist.
328
:Yeah.
329
:Maggie Engler: Yeah, I think that's,
um, I think there's room for both.
330
:Um, and, um, just thinking back,
I, it's super interesting, Andrew,
331
:that you did CS and government.
332
:Um, I was, I think, pretty much throughout
school, pretty much a specialist.
333
:Uh, I actually don't have a CS degree.
334
:Uh, I was, uh, did a bachelor's
and master's in electrical
335
:engineering, and I was focused on
statistical signal processing.
336
:So kind of very like, Applied math,
um, focused, not really at that point
337
:with too many, um, kind of practical,
uh, applications, um, but the first,
338
:uh, role that I had in industry, I was
working on a, um, cyber security platform.
339
:So doing malware detection,
um, with machine learning and.
340
:I think that, um, from that point
on, I was kind of like, Oh, well,
341
:first of all, um, just from a purely
academic standpoint, like the data
342
:science and machine learning world, uh,
aligns really well with my skillset.
343
:Um, but then also working in that, um,
field, uh, kind of by accident, really,
344
:uh, I found that cybersecurity was super.
345
:Interesting to me on a personal level,
I became really interested in how, uh,
346
:responsive different machine learning
systems are to, uh, different types of
347
:attacks and how, um, there's kind of this,
um, cat and mouse game, uh, where as soon
348
:as you sort of harden a model to some type
of attack, like you then start to see,
349
:um, novel things come up and, um, um, For
me, like that sort of adversarial nature,
350
:um, meant that it was, it was always
kind of fresh and felt, um, um, like
351
:that I, uh, that I was always learning.
352
:And so, um, ultimately I think we
kind of ended up at the same place,
353
:even though I was certainly not as
broad as Andrew when I was in school.
354
:Um, in that I kind of
355
:selected, um, my career opportunities
towards, uh, first sort of explicitly
356
:cybersecurity information security.
357
:And then, um, after that, much more
towards trust and safety more generally.
358
:Um, so I worked at, um, you already
mentioned I was, I was at Twitter and, uh,
359
:now X, uh, for, um, over two years working
on their, in their health engineering
360
:organization on policy enforcement.
361
:And in my current role, I took a
lot of that, um, background to an
362
:AI startup, uh, where a big part of
my job is just trying to understand,
363
:uh, and improve, uh, the safety of,
364
:um, our large language model product.
365
:And so understanding, uh, what are
the risks, um, associated with, um,
366
:what generations the model produces
in these different conversations.
367
:Um, how can we measure that?
368
:And how can we kind of prevent, um,
unsafe generations from happening?
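As a rough illustration of the "measure the risk of each generation, then gate it" workflow Maggie outlines, here is a minimal sketch; the risk categories, keyword scorer, and threshold are stand-ins for the trained classifiers a real safety team would use, and none of this is Inflection's actual pipeline.

```python
# Minimal sketch of post-generation safety gating: score each candidate response per
# risk category, then block anything over a threshold. All categories, phrases, and the
# threshold below are illustrative assumptions, not a production safety system.
from dataclasses import dataclass

RISK_THRESHOLD = 0.5  # assumed cutoff; real systems tune this per category

@dataclass
class SafetyVerdict:
    allowed: bool
    scores: dict

def score_generation(text: str) -> dict:
    """Stand-in risk scorer; in practice this would be one or more trained classifiers."""
    categories = {
        "violence": ["attack someone", "hurt someone"],
        "self_harm": ["harm myself"],
    }
    lowered = text.lower()
    return {
        name: (1.0 if any(phrase in lowered for phrase in phrases) else 0.0)
        for name, phrases in categories.items()
    }

def gate(text: str) -> SafetyVerdict:
    """Measure risk per category, then decide whether the generation can be shown."""
    scores = score_generation(text)
    return SafetyVerdict(allowed=all(s < RISK_THRESHOLD for s in scores.values()), scores=scores)

# A blocked generation would typically be regenerated or replaced with a safe refusal.
print(gate("Here is a recipe for banana bread."))
```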
369
:Um, so I've also kind of somewhat narrowly
focused in on a certain problem area,
370
:even though obviously, data science
and machine learning is super broad.
371
:You can do almost anything with that.
372
:Um, and so I really like, um, uh, this
proposition around like, If you can find,
373
:um, uh, a broad enough skill set, but
also narrow in on like a particular area
374
:where you're interested in applying it,
um, that seems like a recipe for success.
375
:Tim Winkler: Yeah, it's
really fascinating.
376
:Um, Mike, I'm kind of curious on your
input on this too, just kind of, you
377
:know, I mean, it's a very similar, a lot
378
:Mike Gruen: of, yeah,
it's just, it's funny.
379
:Cause, uh, as.
380
:Um, as you were talking, I was thinking
about my own story, my own journey
381
:on like how to, I went into natural
language processing and then I was on
382
:a cybersecurity product where we're
using inferential statistics to try and
383
:find bad actors and stuff like that.
384
:And I, and like the whole idea of, for
me, I went to school for computer science.
385
:I minored in English.
386
:Basically poetry.
387
:There's not a good intersection there.
388
:Um, but, um, I do agree with the idea of
like if you can find certain areas that
389
:like you're really passionate about.
390
:Like when I was doing the stuff
with the natural language processing
391
:and we were, um, it was it.
392
:It was really for intelligence
communities to find like bad actors,
393
:uh, looking at different things.
394
:Um, passionate about that and being
able to apply my sort of software
395
:engineering expertise to that.
396
:So I agree with that sort of
like, if you can find that, those
397
:niches that really interest you.
398
:And I don't know that you need.
399
:I think it can change over time.
400
:I'm, I've been working for a long time and
I've moved from thing to thing to thing.
401
:Um, I think that's an important part
of a career is find, you know, being
402
:able to find the next thing that you
want to work on or the next area.
403
:Um, once you start maybe losing interest
in a particular area, I know other
404
:people that have like stuck in the same
thing for years and years and years,
405
:and they've never lost interest in it.
406
:And I can see that also happening,
but it's also, I think having a broad
407
:skill set that you can apply to very
specific problems is a, it's a good way
408
:Tim Winkler: to go.
409
:Are there any verticals that you
would say, like, you know, being a
410
:specialist is, is preferred, right?
411
:Um, uh, healthcare, anything, anything
that comes to mind, uh, that you all would
412
:say that you've heard from colleagues or
friends that have been truly very dialed
413
:in and it's proven beneficial for them?
414
:Andrew Gamino-Cheong: I know, um, a
couple of people from some grad school
415
:times who have kind of combined a lot of
background in the medical space with that
416
:deep understanding of computer science.
417
:And there's so many amazing applications
actually of, um, even taking computer
418
:science concepts that are now almost
a decade old and they're still
419
:actually quite novel when applied
to computational biology problems.
420
:Um, you know, like sequencing of different
things and now looking at applying some
421
:stuff around like DNA sequencing and the
algorithms developed to that sequencing
422
:actually stuff in tumors themselves.
423
:And, you know, what you realize is
that there's actually so much good work
424
:to be done there that isn't, it's not
viewed as cutting edge anymore, right?
425
:Even the idea, I knew someone
who was taking the concept of word
426
:embeddings, which should now feel ancient
in terms of NLP technologies,
427
:and then applying that actually
to, like, sequences of DNA that
428
:they were collecting, right?
429
:Because if you're talking about the
context of things that are next to things
430
:and how that can actually represent stuff.
431
:They're actually able to learn a lot
and improve their own, um, topic model
432
:kinds of algorithms in the computational
biospace using what is practically
433
:ancient technology now in the NLP space.
434
:And I found that to be like
a fascinating application.
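For readers who want to see the idea in code, here is a rough sketch of applying word2vec-style embeddings to DNA, where overlapping k-mers play the role of words and nearby k-mers provide the context. The toy sequences, k-mer length, and hyperparameters are illustrative assumptions, not the actual project Andrew mentions.

```python
# Sketch: learn word2vec-style embeddings over DNA k-mers, treating each k-mer as a "word".
# Toy data and hyperparameters are assumptions for illustration only.
from gensim.models import Word2Vec

def kmers(sequence: str, k: int = 3) -> list[str]:
    """Slide a window of length k across the sequence to produce overlapping tokens."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# Toy corpus standing in for real collected sequences.
sequences = [
    "ATGCGTACGTTAGC",
    "ATGCGTACGTAGCT",
    "TTAGCGATCGATGC",
]
corpus = [kmers(seq) for seq in sequences]

# Skip-gram word2vec over k-mer "sentences": k-mers that appear in similar local
# contexts end up with similar vectors, capturing local sequence structure.
model = Word2Vec(corpus, vector_size=32, window=4, min_count=1, sg=1, epochs=50)

print(model.wv.most_similar("ATG", topn=3))
```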
435
:What about you, Maggie, anything
436
:Tim Winkler: that comes to
437
:Andrew Gamino-Cheong: mind?
438
:Maggie Engler: Um, that is fascinating.
439
:I think that, um, I have seen, uh,
certainly sort of very specialist,
440
:uh, individuals, usually like PhDs,
uh, be extremely successful at sort
441
:of the cutting edge of AI research.
442
:Um, I think that, um, uh, I worked with
in the past people who have done, for
443
:example, like large language model
research for, you know, at this
444
:point, many, many years longer
than most other people in the field.
445
:Um, and I think that does give
you an edge, but it does seem to
446
:be like a very small slice of the
sort of total opportunity. I
447
:think health is a great example
of a vertical where all of this
448
:specific domain knowledge is
really, really helpful, and you
449
:can use, um, like Andrew said, um,
450
:uh, just the application of even
a technique like word embeddings,
451
:um, could produce novel knowledge.
452
:Um, so I think there's, I think there's
a mix and I think, uh, that was what I
453
:was trying to get at at the beginning,
uh, is that I think that in a lot of
454
:cases there's room for specialists and,
um, folks who are, are, have that more
455
:broad, um, knowledge, but in particular,
uh, sort of the cross pollination of
456
:ideas, um, seems to hold a lot of value.
457
:Tim Winkler: Yeah.
458
:And I think with the, you know, the,
uh, generative AI models, like the, some
459
:of these foundational models that have
come out of the last year, um, these are
460
:really kind of spicing things up, I think,
and, and adding, adding to this debate,
461
:because, and Maggie, maybe we were talking
about this on our initial discovery call
462
:about, you know, um, the power of these
tools and how they can be applied to
463
:not just a computer science, you know,
graduate, but like folks in marketing or
464
:folks in finance or something there where
they can now consider themselves going
465
:down this path of a, of an AI opportunity
that maybe wasn't quite as present before.
466
:So I'd love to just kind of pull on
that thread a little bit and, and, and
467
:maybe, you know, starting with like how.
468
:How these have impacted your all's work,
or, you know, how, how do these kinds
469
:of models like limit or enhance career
prospects, especially for those folks
470
:that are coming out of school and, um,
you know, exploring that next opportunity
471
:Andrew Gamino-Cheong: for themselves?
472
:I'll let you go ahead, Maggie,
first, since I went first on
473
:some of the last few questions.
474
:Maggie Engler: Uh, yeah, I think it's,
as we talked about, I think it's a
475
:really interesting time right now
because of these foundation models.
476
:Um, one of the things that strikes
me, so in my own workflow, I've
477
:started integrating coding assistants,
things like that, um, that
478
:don't necessarily produce anything
for me that like I, I wouldn't have
479
:known how to do, but can, uh, make
things more efficient, um, and make it
480
:faster to do things like documentation.
481
:Um, but for me, the big question,
right, is, uh, how will this
482
:change kind of future work
opportunities and even in field?
483
:I do think that the argument that I
think I've made at that point in time is
484
:um, that it will not necessarily,
um, replace entirely a lot of a lot
485
:of the professions that people are
kind of worried about, um, losing,
486
:um, but that it is always going to be
sort of an advantage, like any tool.
487
:Uh, to be able to use generative AI
well and, um, understand its limitations
488
:and understand its capabilities.
489
:Um, actually my, my, um, uh,
colleague from Twitter, uh, Numa Dhamani,
490
:and I, um, have a book coming out.
491
:Uh, actually in a couple weeks with
Manning on introduction to generative AI.
492
:Um, but, uh, shameless plug.
493
:Um, but in that where we talk a lot
about, um, sort of the things that
494
:people, uh, do use it for already.
495
:And then like things that they
really shouldn't be using it for.
496
:There was a super famous example of
497
:a lawyer who, um, submitted a legal brief,
uh, written with ChatGPT,
498
:and didn't really, uh, or didn't
even fact-check it, um, and so it caused
499
:this whole kind of delay in the case.
500
:And I think he was penalized
professionally in various ways.
501
:Um, because ultimately, uh, people
are still going to have sort of the
502
:responsibility to ensure quality of
their output, but if you're able to, uh,
503
:produce, uh, writing that is, um, You
know, to the quality that it needs to be.
504
:And, um, you're able to do that
much faster, much cheaper, and
505
:that's always going to be an edge.
506
:I
507
:Mike Gruen: mean, I think just
jumping in a little bit on that,
508
:like the, I think back to the
nineties when the web was starting.
509
:Um, I think what we've seen is really
an enablement of certain career
510
:opportunities that didn't exist.
511
:So like when I first started.
512
:You had some artists and some graphic
designers on staff that were sort of
513
:helping to do things, but now you,
like, I've worked with people who
514
:are just straight up graphic designer
artists who can now do a whole web
515
:application front end the whole, you
know, and most of the logic to it.
516
:And I think that like, right, we, the
software engineers, computer science,
517
:we, we build these tools that then
enable others to use their special
518
:talents, their, whether they're
artists or whatever, to be able to
519
:sort of take it to the next level.
520
:And I think that that's what AI is
going to be able to do is sort of
521
:make some of these like have impact
to other careers that we can't even
522
:think of, um, and enable them to
be more efficient in their jobs.
523
:Andrew Gamino-Cheong: You know, one thing
that I always find funny is that we call
524
:it prompt engineering, but so oftentimes
it feels more like prompt art, right?
525
:It's more like there are some funky
things that can happen depending
526
:on what prompts you put in there.
527
:I know Maggie, this is like
the big problem that you're
528
:trying to actually solve for.
529
:But I think it is amazing because
I've seen some very incredibly
530
:intelligent people who don't have a
technical background do some really
531
:amazing things with these algorithms
and these generative AI systems.
532
:I do think though, the limitations
of them aren't well understood.
533
:You know, some people it's
like, Oh yeah, I had it like
534
:calculating all this stuff for me.
535
:I'm like, Oh, it really doesn't have
actually an understanding of math.
536
:And like, if you didn't check
the math, you could get into
537
:real trouble in doing that.
538
:I think that's one of the biggest
challenges people have, even on
539
:like their day to day basis, right?
540
:It's knowing like, what are
the things that it can do?
541
:What it can't do?
542
:You know, if you ask it stuff,
most of its information, its
543
:core training data set for ChatGPT,
only goes up to
544
:It has some other ways of adding in some
other context about some things, but like
545
:that itself could be a huge deal for some
546
:use cases, and they've got a small
disclosure now in like the left-hand
547
:corner for it, you know, our point of
view is always that, you know, you
548
:need to be doing a lot more thoughtfulness
about what tasks it can, can't do.
549
:And I worry that a lot of people,
they don't understand it well
550
:enough in those limitations, um,
that itself can bring some risks.
551
:Tim Winkler: Yeah, it's,
it's an interesting space.
552
:It's like handing, you know, somebody
the keys to like a Lamborghini and
553
:not knowing exactly, you know, it's
capable of a lot of things, but you
554
:know, half of half of the bells and
whistles you don't even know about.
555
:So it was still so early on just to kind
of understand like some of the hacks
556
:and the tips, how to best use the tools.
557
:Um, so it'll be interesting, but I
think with that, You know, with, with
558
:where it's at right now, I, I, I'm
curious to know, um, you know, we'll
559
:say for like data science bootcamps
and things of that nature, right?
560
:Are those already being crafted, uh, you
know, in terms of like really emphasis
561
:around AI and have you all seen any of
that or, or, or just generally in, in,
562
:in pure academia at large, um, are you
seeing these programs being built around?
563
:Career paths within AI and
what does that look like?
564
:Andrew Gamino-Cheong: I know
there's a lot of focus on training
565
:programs to learn how to use them.
566
:I haven't necessarily seen that,
um, in like academia itself.
567
:There's still very much desire to teach
how these systems work under the hood,
568
:partly because there's now so much
focus on how to mitigate the risks.
569
:And you really only do that once you do
understand the underlying levels, and like,
570
:you know, these models aren't yet
explainable, and yet the necessity,
571
:potentially legally, for them to be
explainable for certain use cases, is
572
:so high.
573
:So I do suspect that'll be one of
like the biggest areas of focus.
574
:Um, on that research side of things, I
think one of the, the challenges in one
575
:area as well, why sometimes I recommend
like explore a multidisciplinary
576
:approach is that there's fewer and
fewer orgs who are working out at
577
:the kind of cutting edge of things.
578
:And that's partly because these
models are so large, you need so much
579
:data, so much compute, that there
is kind of a concentration, right?
580
:If you want to work on a truly large
language model, you need a billion
581
:dollars or at least a hundred million
dollars in funding to be able to
582
:really support that kind of
stuff, and that's only going to
583
:be accessible to a smaller and
smaller number of organizations.
584
:You know, I actually knew some professors
in grad school who they used to be some of
585
:the world leaders in machine translation,
but they no longer have access to actually
586
:the algorithms and the data and the
compute to do that still cutting edge work
587
:without actually then just associating
with a lab working in big tech.
588
:So I think that itself can pose
challenges to accessibility
589
:for like those specialists in
academia versus in big tech.
590
:Is that because
591
:Mike Gruen: I'm sorry, is that because
the, what was cutting edge now, just,
592
:the new cutting edge just requires so
much more compute and the access to it?
593
:Or is it more nefarious is I guess is it
more that big tech is actually gobbling
594
:it up and preventing it from being done
at academia or something like that?
595
:You'd rather not say?
596
:Andrew Gamino-Cheong: I don't think
they're deliberate in gobbling it up.
597
:Like they're not trying to be, I'll
say predatory in that sense, but they
598
:are the only ones who can literally
have like, Oh, we can spend a hundred
599
:million dollars to train GPT-5.
600
:Right.
601
:Right.
602
:No university
603
:can throw that kind of resources at that.
604
:Tim Winkler: Sorry, Maggie.
605
:I think you were going to say something
like rudely interrupting them.
606
:Maggie Engler: No, um, I was just going
to add that, uh, that is absolutely true.
607
:And I do think that is a problem,
um, for the field. Having
608
:thought a lot around, um,
sort of the AI safety space,
609
:there is a lot.
610
:Well, first of all, I guess the point
one is that it is harder to do cutting
611
:edge work because of the resource
constraints that Andrew brought up.
612
:I think that's starting to be, um,
remedied a little bit through like
613
:the sort of open source development,
614
:um, ecosystem.
615
:Um, so you're not going to be able
to, or it's going to, at least for a
616
:long time, cost a huge amount of money
to do foundation model development.
617
:But, um, I've seen so many cool things
around like, um, um, replicating
618
:performance, the performance of
these huge large models in a smaller
619
:model, um, on, you know, $100 worth of
hardware and things like that.
620
:So I think, um, it's, yeah.
621
:We're starting to move in a
direction where it's slightly
622
:more accessible, but not,
not if you want to do the
623
:sort of cutting edge research.
624
:Um, and so, like, it's going to
be very accessible for, like, building
625
:applications and things like that, but
not necessarily the type of work that, um,
626
:that you'd like, um, sort of professors
to be working on with respect to, um, some
627
:of the safety risks and things like that.
628
:Um, the other thing that I was going to
mention is that I think for these big
629
:companies, it becomes sort of a,
630
:there is kind of a situation where they're
all racing for different resources.
631
:And that does, yeah, um, drive up the
cost of development for other folks.
632
:Um, and I know that some leaders in the
space have, um, proposed things like
633
:licensing, uh, for the, if you want to
have a model that's, you know, at GPT-4
634
:level or higher, like, um, having some,
uh, you would need to get approval,
635
:um, or a license for that, um, which
is, I mean, I guess a good idea from a
636
:safety perspective, um, because you just
have fewer people, um, at least legally
637
:developing them, but I, even, even as
a person who works in AI safety, I, I
638
:very much have like a reticence towards
like any type of limitation around like
639
:who is allowed to, uh, develop them.
640
:And so, uh, that I was just going to, uh,
sort of, um, also reference that proposal
641
:because I think it is interesting to
see and, um, Andrew, your Trustible,
641
:I know, is super involved with this,
but, uh, in thinking around this, but
643
:like, it will be very interesting to
see, I think, how the AI governance
644
:develops over the next few years.
645
:Andrew Gamino-Cheong: Yeah, there's really
good questions on like the liability,
646
:right, who owns that. A big thing that, you
know, we focus on is it's really important
647
:to have models like the ones that you
guys are building at Inflection, you know,
648
:disclose what the risks are for something.
649
:But then you can't, there's no
way you guys can really understand
650
:all the ways that can be used.
651
:Right.
652
:And that itself presents a challenge.
653
:So even if you license out
actually that, yeah, you're
654
:allowed to build these models.
655
:There still has to be a lot
of responsibility on the groups who
656
:are actually deploying it to make
sure that they're doing an ethical
657
:decision like, Hey, are the benefits
outweighing the risks, right?
658
:I can look at the risks declared by
:my model, and then I still need
to decide whether those risks are
:to decide whether those risks are
appropriate or not for my use case.
660
:And how that kind of maps out.
661
:I do think one of, to tie it back to
just what we were talking about just a
662
:second ago as well, you know, academia,
they love open source stuff because
663
:then they can get access to actually
do things at the edge of this model.
664
:But the danger there is actually,
um, all of the, I think the worst
665
:uses of AI that we're worried about.
666
:They're not actually going to come
from like an OpenAI system that has a
667
:trust and safety team with a 10 million
dollar a year budget looking at stuff.
668
:It's going to come from
the open source systems.
669
:If you want to run a misinformation
campaign, do some illegal
670
:shit with AI, you're going to
use the open source models.
671
:And then the problem is like,
who's responsible for that?
672
:You know, what are the conditions there?
673
:And so there's been a couple of policy
papers that came out earlier this week,
674
:recommending that large frontier models
actually not be open sourced at all.
675
:and that governments forbid that, which
actually could, again, impact the ability
676
:for academia to be able to do some of
their own frontier research on that.
677
:And there's a good kind
of trade off there.
678
:I mean, I think it's,
679
:Mike Gruen: as you guys were talking, I
was sort of thinking about how it's so.
680
:In the past, these types of big,
expensive types of endeavors and new
681
:frontiers, space, nuclear technology,
whatever, all started in the government.
682
:The government was the only ones who could
possibly have the budgets to do this.
683
:There wasn't an immediate commercial
application for X, Y, Z with AI.
684
:There's an immediate commercial
685
:use, and that's what's driving business
to sort of be at the forefront of it.
687
:And therefore I think government is
playing catch up as opposed to in the
688
:past on some of these like, right?
689
:Like what stops somebody from
building a nuclear bomb in the past?
690
:Like we, we figured it out, the government
funded all that, they put in all these
691
:regulations to make it really, really
difficult for someone to do this, but
692
:for, on AI, that's just not the case; the,
the forefront is commercial application.
693
:So I think it's interesting, as you
guys were talking, sort of some things
694
:click there that I hadn't really thought
695
:about in the past.
Tim Winkler: I think it's a good segue to, and, uh, Andrew,
696
:in our initial disco call, you were, we
were kind of talking a little bit about,
697
:you know, the, the doomers out there,
the utopias, and then you had a third
698
:one, the AI pragmatists, you want to
kind of expand on that, uh, just kind of.
699
:Explain a little bit more
of what you mean by that.
700
:Andrew Gamino-Cheong: Yeah.
701
:So, you know, like any media thing,
media loves really, you know, uh, eye
702
:catchy headlines, like AI is going
to create, you know, solve all of our
703
:problems and you can read blogs from
famous VCs about how AI is the solution
704
:to all of our problems and also read,
you know, we talked, we began this
705
:podcast talking about Skynet, right.
706
:AI is going to kill us
all, those kinds of things.
707
:Those are great for headlines, but the
danger is that that kind of distracts
708
:from actually trying to solve some of
the real problems out there, right?
709
:You don't need to have military
AI to still have AI harms.
710
:One of the first instances that almost
set off this entire industry now of
711
:AI safety research was around use
of AI to recommend prison sentences.
712
:ProPublica did a great exposé
about, like, hey, this is biased
713
:towards a certain group.
714
:Underlying that actually was a
discussion about how you measure
715
:fairness in an algorithm and arguably
an ethical debate about what fairness
716
:was used to optimize things, right?
717
:AIs are trained to maximize some value.
718
:If that value is Arguably has an ethical
aspect to it that needs to be discussed.
719
:You know, the truth is that we're never
going to be able to pause all of AI,
720
:nor should we really assume that AI
can really solve all of our problems.
721
:Cause there's a lot of things
that frankly are beyond its realm.
722
:And so the question is really,
let's assume AI is going to
723
:be everywhere pretty quickly.
724
:You know, how do you actually set up the
right conditions to do that responsibly?
725
:You know, we can't really prevent it.
726
:And so what are the policies that
should, that we should adopt instead?
727
:One example of that, and you know,
this may sound a little bit cynical,
728
:is it's always going to be cheaper and
faster to generate content with AI.
729
:And so trying to say we're going
to watermark everything, it's
730
:going to be really difficult.
731
:And also again, with any open
source system, any watermarking
732
:things can be evaded.
733
:And so instead, I also say like,
let's look at what does, quote,
734
:certified human content look like,
you know, it's like the equivalent
735
:of an organic label on something.
736
:Let's define the criteria for that and
actually get that set up because there's
737
:going to be a lot of interest and demand
to say like, yeah, I, I will only buy
738
:journalism that's certified human content.
739
:Right.
740
:Or like certain unions will
want to enforce a certain
741
:level of that kind of stuff.
742
:Um, you know, that's just facing
the reality that probably the
743
:majority of content that we'll see
coming out within five years, and that's
744
:maybe even kind of conservative,
uh, will be AI generated.
745
:Maggie Engler: Yeah, I think it's also
so important to, right, think through
746
:kind of the context in which all of
this content is appearing and Um,
747
:what we really need to do as kind of
a society, um, in order to respond.
748
:Um, I guess that, that might be your
definition of pragmatism, um, but it
749
:reminds me I was recently, um, at an
event, uh, organized by the, uh, give CT
750
:global internet forum on counterterrorism
and talking about a lot of this stuff,
751
:generative AI and how, um, we've already
started to see sort of deep fakes from,
752
:or of political figures and things
like that being used for various,
753
:um, purposes.
754
:And when it comes to something like
watermarking, I think it, that to me
755
:strikes me as like, um, an example of
a technocratic solution where, right,
756
:like, even if you're saying like, okay,
757
:setting, we're setting aside open source
models, like all of the big AI generation,
758
:uh, models have agreed that they're
going to all watermark their content,
759
:but then ultimately, like how many
people who are scrolling through X or
760
:or like other social media platforms are
going to be like, oh, I wonder if, like,
761
:this clip of President Biden is, is real.
762
:Like, let me go just check it against
all these different watermarking systems.
763
:No one's going to do that.
764
:You know, 1 percent of people less
than that are going to do that.
765
:And so I do think like what I'm most
interested in is how it is exactly what
766
:Andrew is getting at, like how we can
set ourselves up for this, um, in the
767
:way that in a way that is, um, kind of
the most, um, productive as possible
768
:and the most, um, uh, sort of realistic
around what is, what has already
769
:happened and not trying to stuff the
genie back into the bottle, so to speak.
770
:Andrew Gamino-Cheong: One of my
favorite, um, I'll say like pragmatic
771
:AI ideas I heard out there was instead
of schools and teachers trying to
772
:prevent people from using, you know,
GPT to generate their stuff, which is,
773
:you know, that's a,
that's a losing battle.
774
:It's going to be impossible to
ever like truly restrict it.
775
:They said, all right, you have to turn
in one copy that is generated by GPT
776
:and you have to disclose what your
prompt was and all the stuff you did.
777
:And you also have to hand in
the handwritten version as well.
778
:You know, that shouldn't
necessarily reflect that.
779
:I thought that was like really pragmatic
because actually they'll end up
780
:with a whole corpus of like 30 plus
essays written by GPT to then compare
781
:against all the ones that weren't.
782
:And it's kind of like.
783
:You know, use this as a tool,
but still have to like show that
784
:original and creative side of things.
785
:Those are the kinds of solutions
I think we just really need to be
786
:talking about more instead of just like
banning this because I think that'll
787
:be a just a waste of time and effort.
788
:Yeah,
789
:Mike Gruen: the one that I saw that
wasn't also classroom was the idea
790
:of like, we're just going to change
what's homework and what's classwork.
791
:Rather than going home and writing this
paper on your time, we're going to use
792
:the class, like read the book at home, if
you want to use chat GPT to whatever to
793
:come up with ideas or whatever, but like
we're going to actually use class time
794
:to write the paper, which I thought was
an interesting way of doing it to make
795
:sure that people get the concepts and
796
:Tim Winkler: stuff like that.
797
:Yeah, I think it's all a pretty
fascinating conversation at large.
798
:I mean, it, yeah, everybody's
gone through that, that point.
799
:Uh, probably at one point in
the last six months or years: is
800
:my job, is my job in jeopardy?
801
:Is my role going to be
one that's replaced?
802
:And, you know, I think one of the
biggest things that we've always kind of
803
:preached is just like, it should
804
:be a part of everybody's job.
805
:Just a matter of like, how do you
use it as a tool in your tool belt to
806
:become more efficient or what have you?
807
:But, um, yeah, it's just, it's still
so early to, I'm very excited to see
808
:how everything's, how things play
out over the years, but, um, this
809
:is a great kind of starting point
to keep the conversation moving.
810
:I love the pragmatic outlook on this too.
811
:I think it's a, it's a really fascinating,
uh, addition, Andrew, but, um, yeah.
812
:Why don't we, um, put a bow on
it and transition over to our
813
:final segment, uh, of the show.
814
:So this is going to be the five second
scramble where I'm just going to ask
815
:each of you a quick series of questions.
816
:Uh, try to give me your, your
best response within five seconds.
817
:Um, some business, some personal,
I'll start with you, Andrew, and then
818
:I'll, I'll jump over to you, Maggie.
819
:So, um, Andrew, you ready?
820
:Yeah, let's do it.
821
:All right.
822
:Uh, explain Trustible to me
as if I was a five-year-old.
823
:Okay.
824
:Okay.
825
:Okay.
826
:Andrew Gamino-Cheong: We help you
do all the legal paperwork for AI.
827
:Tim Winkler: How would you describe
828
:the culture at
829
:Trustible?
830
:Andrew Gamino-Cheong: I mean, we're an early stage company,
so it feels like a family of friends,
831
:family of friends working together.
832
:I don't know if that makes sense, but
833
:Tim Winkler: I got it.
834
:What, uh, what kind of technologists
would you say thrive at, at Trustible?
835
:Andrew Gamino-Cheong: One
who is comfortable kind of
836
:learning stuff on their own.
837
:There's a lot of unknowns for what we're
doing on the regulatory and AI front.
838
:Very cool.
839
:Tim Winkler: And what would you say,
uh, are some exciting things that folks
840
:gear up for, uh, heading into:
841
:Andrew Gamino-Cheong:
Yeah, I mean, be ready.
842
:Uh, the number of new applications
of AI we're going to see is
843
:going to be explosive, I think.
844
:Tim Winkler: Nice.
845
:If you could have any superpower,
what would it be and why?
846
:Andrew Gamino-Cheong: Ooh.
847
:I'd have the ability to, um, go back
in time, or even just to re-observe
848
:things that happened in the past.
849
:Nice.
850
:Tim Winkler: All right, kiss, marry,
kill, bagel, croissant, English muffin.
851
:Andrew Gamino-Cheong: All right,
kill English muffin, uh, kiss
852
:a bagel, marry a croissant.
853
:Tim Winkler: Um, what's something that you
like to do, but you're not very good at?
854
:Andrew Gamino-Cheong: Ooh,
um, probably bike rides.
855
:I, I love to go on some
trails, but I'm like.
856
:I'm not particularly fast
or athletic about it.
857
:So keep, keep that helmet on.
858
:Yeah, I crashed a lot.
859
:What's,
860
:Tim Winkler: what's a charity or corporate
philanthropy that's near and dear to you?
861
:Um,
862
:Andrew Gamino-Cheong: my wife
and I have volunteered at a
863
:dog shelter, um, here in DC.
864
:Cool.
865
:Very
866
:Tim Winkler: nice.
867
:What's something that
you're very afraid of?
868
:Andrew Gamino-Cheong: Ooh,
something I'm very afraid of.
869
:Uh, dairy.
870
:Definitely afraid of dairy.
871
:Tim Winkler: All right.
872
:I appreciate the honesty.
873
:Um, who is the greatest superhero of all
874
:Andrew Gamino-Cheong: time?
875
:Greatest superhero of all time.
876
:Uh, I've got a soft spot for Iron Man.
877
:Nice.
878
:Tim Winkler: That's the first time
I've heard Iron Man on the show.
879
:That's good.
880
:I like
881
:Andrew Gamino-Cheong: that.
882
:Tim Winkler: All right, that's a wrap.
883
:Uh, Andrew, Maggie,
are you, are you ready?
884
:I think so.
885
:All right, perfect.
886
:Uh, what is your favorite part
about the culture at Inflection?
887
:Maggie Engler: Uh, I think my favorite
part is that, um, because this area
888
:is so new, like there's a lot of
just openness to experimentation
889
:and, um, trying different things out.
890
:Very cool.
891
:Tim Winkler: What kind of
technologists thrives at Inflection?
892
:Maggie Engler: Uh, quite a range.
893
:Um, but definitely people who are open to
um, iterating fast, but also kind of, uh,
894
:robust evaluators, um, and, and, uh, like
to borrow a term from, uh, cybersecurity,
895
:really like pen testing and, and kind
of relentless in terms of, um, trying
896
:to find all the chinks in the armor.
897
:Tim Winkler: Nice.
898
:Red, red team stuff.
899
:Um, what, uh, what can our
listeners be excited about with
900
:inflection going into:
901
:Maggie Engler: Oh, uh, I think we'll
have a lot of, uh, improvements
902
:on the model side coming out.
903
:Um, So yeah, I don't want to,
I can't say too much about
904
:it, but definitely stay tuned.
905
:Um, and, uh, the product, uh, Pi,
um, will be, it will be, we'll be
906
:continuing to iterate on our, on our
907
:product.
908
:Tim Winkler: Cool.
909
:Excited for that.
910
:Uh, how would you describe your
morning routine in five seconds?
911
:Um,
912
:Maggie Engler: I usually work out, uh,
Peloton and, um, have like some kind
913
:of breakfast, like toast, simple, uh,
toast, peanut butter, anything like that.
914
:What do you love
915
:Tim Winkler: most about living in Austin?
916
:Maggie Engler: Oh, I love Austin.
917
:My family's from Central Texas.
918
:Tailgating at UT, lots of
sand volleyball, fun town.
919
:Tim Winkler: Cool.
920
:I'm going to flip it
from what I asked Andrew.
921
:Um, what's something that you're
good at, but you hate doing?
922
:Oh,
923
:Maggie Engler: um, that is interesting.
924
:Um, let's see.
925
:I'm always, I'm, I'm very good at, there
are certain like household chores that
926
:I have like, um, kind of a systematic
approach to, but I don't like enjoy doing.
927
:So.
928
:Um, like, I don't know, um,
like big loads of laundry.
929
:I guess.
930
:I
931
:Tim Winkler: hate laundry.
932
:Um, what, well, if you could live in
a fictional world from a book or a
933
:movie, which one would you choose?
934
:Hmm.
935
:Maggie Engler: Wow.
936
:Um, I
937
:would love to live in, um, like the
kind of Gabriel García Márquez,
938
:like magical realism, um, based,
so like kind of a South America
939
:tropical area, but like with magic.
940
:Tim Winkler: Sounds awesome.
941
:What's the worst fashion trend
that you've ever followed?
942
:Maggie Engler: Oh gosh, um, crepe pants.
943
:Tim Winkler: Well played.
944
:Um, what was your dream job as a kid?
945
:Maggie Engler: I was actually just
talking about this with someone.
946
:I really wanted to be a farmer,
uh, for a long time as a kid, kid,
947
:because, um, my grandpa was a farmer.
948
:Um, and I thought like pigs and
sheep and all that was really
949
:cute or were really cute.
950
:Tim Winkler: That's
such a wholesome answer.
951
:Um, and we'll end with your
favorite Disney character.
952
:Maggie Engler: Um, probably Mulan.
953
:Uh, I feel like she was early to the,
like, strong female lead, uh, game.
954
:And, um, yeah, is just a badass.
955
:Tim Winkler: Yeah, she's a badass.
956
:And great soundtrack too.
957
:Great soundtrack also.
958
:Alright, that is a wrap.
959
:Thank you all both for participating
and, uh, joining us, uh, on the podcast.
960
:You've been fantastic guests.
961
:Uh, we're excited to keep tracking
the innovative work that you all
962
:will be doing with your companies
and building in the AI space.
963
:So appreciate y'all spending
time with us, uh, on the pod.
964
:Thanks for having
965
:Andrew Gamino-Cheong: us.
966
:Thank you.