

ChaosSearch Pricing Models Explained


ChaosSearch was built for live analytics at scale on cloud storage. Our architecture was designed for high-volume ingestion of streams & analytics at scale, exposed via the Elasticsearch & Trino APIs on a stateless fabric that scales to meet each customer’s volume & latency requirements. Because ChaosSearch doesn’t store any data itself, under the hood it is essentially a set of containers deployed on cloud compute instances in a dedicated VPC per customer, managed by ChaosSearch. There are two types of containers:

  • Base infrastructure - the set of containers that host the console, load balancers & the services that coordinate requests. This is a base level of compute needed for each tenant.
  • Workers - the set of containers that do all the work to ingest & query data. Each compute instance hosts 6 workers. In our default deployment model, workers are deployed in every region where data lands, & query activity is centralized in one of those regions (of the customer’s choice); this provides a single pane of glass & helps improve capacity utilization, though full region isolation is also possible to meet compliance/security requirements. Workers are generic & can be scaled up & down to improve capacity utilization (& the overall cost of the system), but a minimum of 12 workers must run at all times per tenant-region to ensure data can be ingested (see the sketch after this list).
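
To make the compute footprint concrete, here is a minimal Python sketch using only the figures above (6 workers per instance, a 12-worker minimum per tenant-region); the constants come from this post, while the helper function itself is purely illustrative and not part of the product:

```python
import math

# Figures taken from the description above; the helper itself is illustrative.
WORKERS_PER_INSTANCE = 6              # each compute instance hosts 6 workers
MIN_WORKERS_PER_TENANT_REGION = 12    # minimum workers per tenant-region

def min_instances(workers: int = MIN_WORKERS_PER_TENANT_REGION) -> int:
    """Compute instances needed to host a given number of workers."""
    return math.ceil(workers / WORKERS_PER_INSTANCE)

print(min_instances())     # 2 instances cover the 12-worker minimum
print(min_instances(48))   # 8 instances for a 48-worker ingest tier
```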

 

Ingest-based vs. worker-based pricing models

ChaosSearch offers two pricing models (ingest-based & worker-based), which customers can use to optimize pricing based on their preferences & usage patterns.

The ingest-based pricing model is easier to compare with alternative managed services in the observability space (which typically price on the same or a similar metric) & lets customers easily estimate their cost from their ingest volume. With ingest-based pricing, the customer pays $3,000/tenant-region (i.e. per region for each tenant) & commits to a certain level of daily ingestion (e.g. 10,000GB/day) at a committed price (e.g. $0.21/GB at 10,000GB/day). Each month, any average daily ingest above the committed level is charged an on-demand fee at 30% above the committed rate (e.g. $0.27/GB). This model is good for customers that want a managed log analytics service with spend that is predictable from data volume, at a fraction of the cost of alternatives in the market. It provides unlimited retention at a fraction of the cost of ELK-based services like Logz.io or Mezmo ($0.80-$1.80/GB+ depending on retention), Datadog ($0.10/GB + $1.06-$2.50/M events), Splunk ($2.20/GB), or Sumo Logic ($3/GB), which allows you to either replace or complement them in your stack.
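
As a rough illustration, here is a minimal Python sketch of the ingest-based monthly bill described above; the $3,000 tenant-region fee, the committed rate, & the 30% on-demand premium come from this post, while the function name & inputs are illustrative assumptions rather than an official ChaosSearch calculator:

```python
# Minimal sketch of the ingest-based monthly bill described above.
# Rates come from this post; the function and its inputs are illustrative.
TENANT_REGION_FEE = 3_000    # $/tenant-region per month
OVERAGE_PREMIUM = 1.30       # on-demand rate is 30% above the committed rate

def ingest_based_monthly_cost(avg_daily_ingest_gb: float,
                              committed_gb_per_day: float,
                              committed_rate_per_gb: float,
                              days_in_month: int = 30,
                              tenant_regions: int = 1) -> float:
    """Estimate a monthly bill under ingest-based pricing."""
    committed = min(avg_daily_ingest_gb, committed_gb_per_day)
    overage = max(avg_daily_ingest_gb - committed_gb_per_day, 0.0)
    cost = tenant_regions * TENANT_REGION_FEE
    cost += committed * days_in_month * committed_rate_per_gb
    cost += overage * days_in_month * committed_rate_per_gb * OVERAGE_PREMIUM
    return cost

# 10,000GB/day committed at $0.21/GB, actually averaging 12,000GB/day:
print(ingest_based_monthly_cost(12_000, 10_000, 0.21))  # ~82,380
```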

The worker-based pricing model provides greater flexibility & allows customers to pay only for the compute resources they use, with querying available via the Elasticsearch API & OpenSearch Dashboards or the Trino API & Superset. With worker-based pricing, the customer pays $1,000/tenant plus a price per worker-hour used ($0.20/worker-hour in US regions & up to 50% above that in other cloud regions, with a minimum of 12 workers available at all times per region). Each tenant keeps a certain number of workers always up to continuously ingest data & can scale up the number of workers (based on time of day or a login policy) to serve querying needs; each worker fetches data from cloud storage on every query in a distributed fashion. Because ChaosSearch’s ingest is highly efficient, a single stream (i.e. a single object group) with a relatively tight schema & constant, well-sized file throughput can be ingested at up to 5,000GB/day by 24 workers (roughly $4,500/mo including the tenant fee), which makes ingestion of high data volumes very cost-effective. The right number of workers for an environment depends on the specifics of data ingestion & query access patterns, so it is best assessed in ChaosSearch’s free proof-of-value (POV).
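
Similarly, here is a minimal Python sketch of the worker-based monthly bill; the $1,000 tenant fee, the $0.20/worker-hour US rate, & the 730 hours/month figure come from this post, & the split between always-on ingest workers & scheduled query workers mirrors the examples below (the function itself is an illustrative assumption):

```python
# Minimal sketch of the worker-based monthly bill described above.
# Rates come from this post; the function and its inputs are illustrative.
TENANT_FEE = 1_000        # $/tenant per month
US_WORKER_HOUR = 0.20     # $/worker-hour in US regions
HOURS_PER_MONTH = 730     # average hours in a month

def worker_based_monthly_cost(always_on_workers: int,
                              extra_query_workers: int = 0,
                              extra_hours_per_month: float = 0.0,
                              rate_per_worker_hour: float = US_WORKER_HOUR) -> float:
    """Always-on workers run 24x7 for ingest; extra workers are scaled up
    only for the hours when query load requires them."""
    cost = TENANT_FEE
    cost += always_on_workers * HOURS_PER_MONTH * rate_per_worker_hour
    cost += extra_query_workers * extra_hours_per_month * rate_per_worker_hour
    return cost

# 24 always-on workers ingesting a single well-formed stream:
print(worker_based_monthly_cost(24))  # ~4,504, i.e. roughly $4,500/mo
```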

 

Pricing Examples

Example #1:

A single tenant in a single region (us-east-1) with one stream of VPC Flow Logs at 20,000GB/day & infrequent querying

  • Ingest-based pricing = $3,000 (for 1 tenant) + 20,000GB/day * 30 days/month * $0.15/GB = $93,000/month
  • Worker-based pricing = $1,000 (for 1 tenant) + 48 workers (est. for ingestion) * 730 hours/month * $0.20/worker-hour + 120 workers (est. add’l workers for querying) * 4 hours per weekday * 22 weekdays per month * $0.20/worker-hour = $10,120/month
Given the high ingest volume & infrequent query access patterns, the worker-based pricing model is much more cost-effective than the ingest-based model here. The sketch below reproduces these figures.
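
For reference, a short self-contained check of the arithmetic in Example #1 (the worker counts & hours are the estimates stated in the bullets; nothing here is output from a ChaosSearch tool):

```python
# Example #1: reproduce both monthly figures from the bullets above.
ingest_based = 3_000 + 20_000 * 30 * 0.15    # tenant fee + committed ingest
worker_based = (1_000                        # tenant fee
                + 48 * 730 * 0.20            # always-on ingest workers
                + 120 * 4 * 22 * 0.20)       # query workers, 4h x 22 weekdays
print(ingest_based)   # 93000.0
print(worker_based)   # 10120.0
```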

 

Example #2:

A single tenant in a single region (us-east-1) with multiple streams that have large nested schemas & spiky ingest at 1,000GB/day, plus continuous & spiky querying & alerting activity from multiple users

  • Ingest-based pricing = $3,000 (for 1 tenant) + 1,000GB/day * 30 days/month * $0.30/GB = $12,000/month
  • Worker-based pricing = $1,000 (for 1 tenant) + 48 workers (est. for ingestion) * 730 hours/month * $0.20/worker-hour + 90 workers (est. for querying during weekday work hours) * 10 hours/weekday * 22 weekdays/month * $0.20/worker-hour + 90 workers (est. add’l workers in peak times) * 2 hours/weekday * 22 weekdays/month * $0.20/worker-hour = $12,760/month
Given the number of users & the spiky nature of ingest & usage, ingest-based pricing is more predictable & easier to compare with alternatives. The sketch below reproduces these figures.
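
And the same quick check for Example #2, again using only the estimates stated in the bullets:

```python
# Example #2: reproduce both monthly figures from the bullets above.
ingest_based = 3_000 + 1_000 * 30 * 0.30     # tenant fee + committed ingest
worker_based = (1_000                        # tenant fee
                + 48 * 730 * 0.20            # always-on ingest workers
                + 90 * 10 * 22 * 0.20        # query workers during work hours
                + 90 * 2 * 22 * 0.20)        # additional peak-time workers
print(ingest_based)   # 12000.0
print(worker_based)   # 12760.0
```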

 

When to use each pricing model

If you want a managed log analytics service for observability, our ingest-based pricing model can give you significant savings vs. alternatives. If your use case has high ingest with spiky access, our worker-based pricing model lets you take full advantage of our architecture & scale workers to meet your needs with superior cost economics, unlocking access to high volumes of data in near real-time across access modes (Elasticsearch & Trino APIs) at a fraction of the cost of alternatives in the market.

About the Author, André Rocha

André Rocha is the Vice President of Product & Operations at ChaosSearch. André loves building processes & systems that enable the hypergrowth of disruptive technology companies. He embraces any opportunity to learn, travel, or talk macroeconomics.