Monitor OpenAI GPT application usage in New Relic

Dan Holloran · Mar 29 '23 · Dev Community

To read the full New Relic blog post, click here.

If you're already using or plan to use OpenAI's GPT-3 and GPT-4 at scale, it's important to monitor metrics like average request time, total requests, and total cost. Doing so can help you ensure that OpenAI GPT Series APIs like ChatGPT are working as expected, especially when those services are required for important functions like customer service and support.

Monitor OpenAI with our integration

New Relic is focused on delivering valuable AI and ML tools that provide in-depth monitoring insights and integrate with your current technology stack. Our industry-first MLOps integration with OpenAI’s GPT-3, GPT-4, and beyond provides a seamless path for monitoring these services. Our lightweight library helps you monitor OpenAI completion queries while recording useful statistics about your ChatGPT requests in a New Relic dashboard.

[Video: Data Bytes]

With just two lines of code, you can import the monitor module from the nr_openai_monitor library and automatically generate a dashboard that displays key GPT-3 and GPT-4 performance metrics such as cost, requests, average response time, and average tokens per request.
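As a rough illustration, here's what that two-line setup could look like in a Python app. The import path and initialization call below are assumptions based on the description above, so check the library repo for the exact, current API:

```python
import openai

# Assumed import path and initialization call for the nr_openai_monitor
# library described above; consult the library repo for the exact API.
from nr_openai_monitor import monitor

monitor.initialization()  # starts reporting OpenAI metrics to New Relic

# Your existing OpenAI calls are then monitored automatically, for example:
openai.api_key = "YOUR_OPENAI_API_KEY"
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a one-sentence summary of observability.",
    max_tokens=60,
)
print(response["choices"][0]["text"])
```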

To get started, install the OpenAI Observability quickstart from New Relic Instant Observability (I/O). Watch the Data Bytes video or visit our library repo for further instructions on how to integrate New Relic with your GPT apps and deploy the custom dashboard.

Get the pre-built GPT-3 OpenAI monitoring dashboard by [installing the quickstart](https://newrelic.com/instant-observability/openai?utm_source=devto&utm_medium=community&utm_campaign=global-fy23-q4-gpt_dev_to) from New Relic Instant Observability.

Key observability metrics for GPT-3 and GPT-4

Using OpenAI’s most powerful Davinci model costs $0.12 per 1,000 tokens, which can add up quickly and make it difficult to operate at scale. That makes cost one of the most valuable metrics to monitor for ChatGPT. With the GPT-3 and GPT-4 integration for New Relic, our dashboard provides real-time cost tracking, surfacing the financial implications of your OpenAI usage and helping you identify more efficient use cases.
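For a quick back-of-the-envelope check against the dashboard's cost figures, the math is simple. The rate below is the Davinci price quoted above, and the request volume is a made-up example:

```python
# Hypothetical cost estimate at the Davinci rate quoted above ($0.12 per 1,000 tokens).
PRICE_PER_1K_TOKENS = 0.12  # USD


def estimate_cost(total_tokens: int) -> float:
    """Estimate USD spend for a given number of tokens."""
    return total_tokens / 1_000 * PRICE_PER_1K_TOKENS


# Example: 10,000 requests averaging 500 tokens each is 5 million tokens.
print(estimate_cost(10_000 * 500))  # 600.0 USD
```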

Another important metric is average response time. Knowing how quickly your ChatGPT, Whisper API, and other GPT requests complete helps you improve your models and deliver the value of your OpenAI applications to customers faster. Monitoring GPT-3 and GPT-4 with New Relic gives you insight into the performance of your OpenAI requests, so you can understand your usage, improve the efficiency of your ML models, and ensure you’re getting the best possible response times.
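To make the metric concrete, here's a hedged sketch of how you might time a single completion request by hand; the integration's dashboard aggregates this kind of data (response time and token usage) across all of your requests automatically:

```python
import time

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

# Manually timing one request; the New Relic dashboard tracks averages for you.
start = time.perf_counter()
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Say hello in five words.",
    max_tokens=20,
)
elapsed = time.perf_counter() - start

tokens = response["usage"]["total_tokens"]
print(f"Response time: {elapsed:.2f}s, tokens used: {tokens}")
```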

Other metrics included on the New Relic dashboard are total requests, average tokens per request, model names, and samples. These metrics provide valuable information about the usage and effectiveness of ChatGPT and OpenAI, and can help you improve performance across your GPT use cases.

To view your data, visit [one.newrelic.com](https://one.newrelic.com/?utm_source=devto&utm_medium=community&utm_campaign=global-fy23-q4-gpt_dev_to) and select Alerts & AI to see the prompts and responses in the New Relic OpenAI dashboard.

Overall, our OpenAI integration is fast, easy to use, and gives you access to real-time metrics that can help you optimize your usage, enhance ML models, reduce costs, and achieve better performance with your GPT-3 and GPT-4 models.

For more information on how to set up New Relic MLOps or integrate OpenAI’s GPT-3 and GPT-3.5 applications into your observability infrastructure, visit our MLOps documentation or our Instant Observability quickstart for OpenAI.


