[00:11:43] Pete: So impressive. So with all of that in mind then, how is AI impacting developers or your developers? [00:11:52] Saba: It’s been actually really, really interesting to see. There are all of these tools cropping up now that really want to make it easier for you as a developer to write code and get your code into production faster. One of the biggest ones that we’ve been talking about internally is GitHub Copilot. It’s actually built on top of OpenAI’s Codex, so you can think about it as an AI pair programmer.
So you can really lean on this AI feature to help you get started, help you write boilerplate code, even suggest entire functions. You know, if you just put in a description of what you’re trying to build, it’s actually really, really neat to play around with. What’s interesting, though, is that we know AI is not always right. Sometimes it’s suggesting something completely wrong, and if you’re new to a language or new to a framework, you’re not gonna know what’s wrong.
So now you have to debug AI code rather than your own. So yeah, it’s a really interesting time and space. [00:12:52] Garrett: So obviously there’s been this explosion of AI tools, and when I run into a new AI tool, I feel like I look at it somewhat uncritically. I have no idea how these things work for the most part. I’m curious, as somebody who builds AI tools and AI products, when you encounter a new AI tool, how do you go about evaluating whether it’s legitimate, whether it’s unique? ‘Cause to your point, a lot of AI tools today are essentially just user experiences on top of an OpenAI GPT model.