Exploring AWS OpenSearch Serverless Pricing: How It Differs from Traditional Serverless Services

Salam Shaik - Oct 21 - Dev Community

Hi everyone,

Before diving into this topic, let me introduce what OpenSearch is.

OpenSearch: It’s an AWS-managed service alternative to Elasticsearch. It offers various search features and comes with a dashboard called **OpenSearch Dashboards** (a fork of Kibana) where you can manage and visualize data, work with indexes, etc.

Scenario: Recently I started exploring the AWS **Bedrock** service. I went to see what Knowledge Bases in Bedrock are and what they offer. As part of creating the **Knowledge Base**, Bedrock created an OpenSearch Serverless collection.

After around an hour I deleted the Knowledge Base, thinking that the OpenSearch collection would also be destroyed. But two days later I received an email from AWS Budgets saying the bill amount was hitting the budget limit.

In the past, I had an incident where my AWS bill hit around $800 without my knowledge. After I talked to the AWS support team and explained what caused that huge bill, they waived it.

After that incident, I created budget alerts at different levels, to catch any unexpected rise in the bill as early as possible:

Alert 1: with a $5 limit

Alert 2: with a $15 limit

Alert 3: with a $50 limit
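For reference, alerts like these can also be created from the command line. A hedged sketch, assuming the AWS CLI is configured; the account ID and email address are placeholders you would replace with your own:

```shell
# Hypothetical example: create a $5 monthly cost budget that emails
# you when actual spend crosses 100% of the limit.
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{
    "BudgetName": "alert-1-5-usd",
    "BudgetLimit": {"Amount": "5", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
  }' \
  --notifications-with-subscribers '[{
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 100,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}]
  }]'
```

Repeat with different `BudgetLimit` amounts for the $15 and $50 alerts.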

I immediately opened my AWS console, went to the billing page, and saw that OpenSearch Serverless was causing the charges. Then I deleted that collection.

Assumption:

After using Lambda for a long time, I assumed that serverless services would only be charged when the service got a request, or based on execution time. All I had was an OpenSearch Serverless endpoint, and I hadn’t used it at all in the last two days. So why would I be charged?

Reality:

  • OpenSearch Serverless pricing works differently from other serverless services

  • You are charged for compute and storage

  • Compute is measured in OpenSearch Compute Units (OCUs)

  • 1 OCU comes with 6 GB of RAM, the corresponding vCPUs, and GP3 storage

  • You need at least 2 OCUs running: one for **indexing** and one for **searching**

  • All data is stored in S3, so there is also a storage cost

  • Here is the pricing from the AWS docs

  • So even though you are not hitting the endpoint or running anything in the OpenSearch index, you will be charged based on the OCUs you are using

Let’s calculate how much it costs per month for a collection with the bare minimum of **2 OCUs** and **10 GB** of storage:

  • 2 OCUs x $0.24/OCU-hour = $0.48/hour

  • $0.48 x 24 hours x 30 days = $345.60 per month

  • Storage: 10 GB x $0.024/GB-month = $0.24 per month

  • Total cost = $345.60 + $0.24 = $345.84 per month
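The arithmetic above can be wrapped in a small helper to try other configurations. The rates are the ones quoted in this post; check the AWS pricing page for your region before relying on them:

```python
# Rough monthly-cost estimate for an OpenSearch Serverless collection.
# Rates are taken from this post: $0.24/OCU-hour and $0.024/GB-month.

OCU_HOURLY_RATE = 0.24   # USD per OCU-hour
STORAGE_RATE = 0.024     # USD per GB-month

def monthly_cost(ocus: int, storage_gb: float, days: int = 30) -> float:
    """Estimated monthly cost: OCUs running 24/7 plus storage."""
    compute = ocus * OCU_HOURLY_RATE * 24 * days
    storage = storage_gb * STORAGE_RATE
    return round(compute + storage, 2)

print(monthly_cost(2, 10))  # minimum production setup -> 345.84
print(monthly_cost(1, 10))  # dev/test mode (1 OCU total) -> 173.04
```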

So even though you are not hitting the endpoint or running anything on the index, you will be charged around $345 per month. That’s a huge amount.

My Thoughts on this:

  • AWS doesn’t document how much load a single OCU can handle. It mostly depends on the complexity of the queries you run and the number of simultaneous requests hitting the endpoint

  • You can set capacity limits on the OCUs you need, to prevent unintentional costs

  • Even though it is a serverless service, you still need some idea of the OpenSearch Serverless infrastructure

  • To me it looks like a typical EC2 Auto Scaling setup: you set limits on the desired capacity and it scales up and down based on traffic

  • If you are not sure about the load you are going to get, you don’t know how many OCUs you need, and I personally feel paying a minimum of around $345 per month just to have a collection up and running is too much

  • It offers a dev/test mode where you can use 1 OCU in total, half for searching and half for indexing. That cuts the cost roughly in half, to around $170 per month, which is still not a small amount
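The capacity limits mentioned above can be set account-wide via the CLI. A sketch, assuming the AWS CLI with the `opensearchserverless` commands available; the values shown are the bare minimum and should be adjusted to your expected load:

```shell
# Hypothetical example: cap how far OpenSearch Serverless can
# auto-scale in this account, so a runaway workload can't keep
# adding OCUs (and cost) without a limit.
aws opensearchserverless update-account-settings \
  --capacity-limits '{
    "maxIndexingCapacityInOCU": 2,
    "maxSearchCapacityInOCU": 2
  }'
```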

If you are new to OpenSearch and want to experiment or try it out, I recommend creating the collection with dev/test mode enabled.
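In the CLI, dev/test mode corresponds to creating the collection without standby replicas. A hedged sketch; the collection name is a placeholder, and `--type` can also be `TIMESERIES` or `VECTORSEARCH` depending on your workload:

```shell
# Hypothetical example: create a search collection with standby
# replicas disabled (the console's dev/test mode), which runs on
# 1 OCU total instead of the 2-OCU production minimum.
aws opensearchserverless create-collection \
  --name my-experiment \
  --type SEARCH \
  --standby-replicas DISABLED
```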

AWS does offer a free tier of 750 hours of usage per month on a t2.small.search or t3.small.search instance plus 10 GB of storage per month, but note that it applies to the managed OpenSearch Service (provisioned domains), not to Serverless. It is good enough for trying and testing the service.

It is always better to keep budget alerts with different amount limits; they will help you identify any cost spikes at the earliest.

Hope you find this helpful. Please share your thoughts on this. I am happy to hear your views on this pricing. Thanks.
