5 Lessons Learned from Adding ChatGPT to a Mature Product

Ready to write Python code 4x faster?

Last month, Mito launched v1 of Mito AI. To be totally honest with you, it’s not the world’s best AI feature (yet). We wanted to get something out the door so that our users could start exploring, and so that we could start learning.

V2 of Mito AI is coming soon. Before we release it, we wanted to share 5 concrete learnings we had from launching this feature.

An AI taking time off, after working hard in your product.

Lesson 1: Your users are probably excited by AI features.

Mito is just a Python package. As such, users get new features by manually opting in to an upgrade.

The day we launched our AI feature, we sent out a product update to all previous users about Mito AI. That day, we had 2x more upgrades than any other day in Mito history, and 3x more upgrades than the average day.

This probably isn’t news to you, but LLMs are in the zeitgeist right now. Launching an AI feature is exciting for many users, and can encourage them to return to your product to check it out!

Lesson 2: Forcing users to BYOK (Bring Your Own Key) is a major blocker

Our users were excited to check out Mito AI v1. Great! Only one issue: they were required to bring their own OpenAI API key.

It turns out this was a major blocker. Only about 10% of the users who attempted to use Mito AI v1 were able to do so successfully, and the primary barrier was supplying their own OpenAI API key.

Although we expected some dropoff, we thought it would be closer to 50% than 90%. Upon reflection, this is fairly unsurprising. Many users first have to create an OpenAI account. Even users with accounts might have to generate their first-ever API key - and that's especially true for our users.

Bring Your Own Key is a huge blocker for many users to realize the value of AI features. If you’re adding AI to your product, invest in infrastructure that makes it as easy as possible for users to get started.
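One way to do that is to fall back to a hosted proxy that holds the key server-side when the user hasn't supplied one. Here's a minimal sketch of the idea - the proxy URL and function name are hypothetical, not Mito's actual implementation:

```python
# Sketch: prefer the user's own key, but don't require one.
# HOSTED_PROXY_URL and resolve_completion_config are hypothetical.

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
HOSTED_PROXY_URL = "https://example.com/ai/completions"  # your backend holds the key

def resolve_completion_config(user_api_key=None):
    """Return the endpoint and headers to use for a completion request.

    If the user brought their own OpenAI key, call OpenAI directly;
    otherwise, route through the hosted proxy so the feature works
    with zero setup.
    """
    if user_api_key:
        return OPENAI_URL, {"Authorization": f"Bearer {user_api_key}"}
    return HOSTED_PROXY_URL, {}

# Users with a key hit OpenAI directly; everyone else just works.
url, headers = resolve_completion_config()
```

The point is that the BYOK path becomes an optional power-user feature instead of a gate in front of your AI feature.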

Lesson 3: LLMs mean portability. Don't stress model or prompt choice too much.

The day before we launched Mito AI v1, the ChatGPT API opened to the public. In our testing, this model performed dramatically better at code-gen than the previous model we were using (code-davinci-002).

So, with literally no change to the prompts we were generating, we switched from the old model to ChatGPT. The net result was generated code that was correct more often!

LLMs are, by nature, very flexible. This means that switching from one model to another isn’t really a huge issue, so don’t worry too much about selecting the right model off-the-bat.

This flexibility holds for prompt selection as well. We spent a few days trying to select the best prompt, and at the end of the day made a random selection among the three that performed best.
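To make the portability point concrete, here's a minimal sketch of a code-gen request builder, assuming the chat-completions request shape. The prompt text and function name are illustrative, not Mito's actual prompt - the thing to notice is that swapping models changes exactly one field:

```python
# Sketch: the prompt stays the same; only the model name changes.
# build_codegen_request and the prompt wording are illustrative.

def build_codegen_request(model, columns, user_instruction):
    """Assemble a chat-completions style request for code generation."""
    prompt = (
        f"You write pandas code for a spreadsheet with columns {columns}.\n"
        f"Task: {user_instruction}\n"
        "Return only Python code."
    )
    return {
        "model": model,  # the only field that changes when swapping models
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,
    }

old = build_codegen_request("code-davinci-002", ["revenue", "cost"], "add a profit column")
new = build_codegen_request("gpt-3.5-turbo", ["revenue", "cost"], "add a profit column")
# Identical requests except for the model field.
```

(In practice the older completions endpoint and the chat endpoint have slightly different shapes, but the spirit holds: the prompt and surrounding plumbing carry over almost untouched.)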

Lesson 4: Enterprise adoption is a different ball-game

Mito is currently deployed at some of the largest banks in the US and Canada as their go-to tool for Python automation. However, by default, we turned off all AI functionality for enterprise customers.

For one, using OpenAI's APIs means that data leaves these enterprises' computers, which is a huge no-no. For two, there are other security concerns that come with large language models that these enterprises are still grappling with.

We’re currently working hard to make LLMs for code-gen more usable for enterprises in a safe, secure, and private way - but as a builder, you should expect some bumps here. Enterprise adoption of LLMs isn’t just about making a good interface. There are other large deployment questions to answer.

Lesson 5: There’s lots of room for UI innovation

Large language models are unique in that they provide the most flexible natural language interface possible for many tasks. As a result, it can feel very natural to present the UI on top of these models as a chat interface, which is exactly what Mito AI v1 does.

But we think there is lots of room to improve here. Spreadsheet data is naturally represented in… Excel. Not in a text message. And this conclusion is not just limited to spreadsheets.

We expect that there are many ways to expose LLMs to users that don’t just look like a chat back and forth. For example, helping users better understand the impact of their requests made through an LLM, or helping users understand how robust the LLM’s solutions are - there is a lot to explore here!

It's early days for LLM interfaces. UIs custom-built for your domain could be much more usable than just another chat interface.

The biggest lesson of all

It's early days for large language models. If you're building a product that could benefit from AI-enhanced functionality (like Mito!), it's probably worth experimenting early. Hopefully the above lessons will help you launch {Your Product} AI v1 in a way that delivers value to your users more effectively than we did!

For us, the biggest lesson of all is that adding a chatbot that generates code is easy. Actually figuring out how that code interacts with your existing product is much harder.

How do we help the user understand the impact of the generated code? How do we ensure the generated code is correct? How can we further improve the safety of the system? How can we help users handle common errors?

We’re working on all of these questions. When we have some better answers, we’ll see you for Mito AI v2!