An Intel Case Study

Bringing Scalability to Elasticsearch

In the time it takes you to read this sentence, over 2.4 billion emails will be sent, 40,000 tweets posted, and 400,000 Google searches run. Traditional enterprise data management systems simply weren't built to handle these loads.

Whether you're a brand-new Web 3.0 startup or the largest of government agencies, your servers and applications generate logs every second that must be analyzed in real time to manage your infrastructure, follow your customers' journeys across your applications, and detect security breaches.

Search platforms such as Elasticsearch, built on the Apache Lucene engine, were created to empower IT users to index these logs and deliver real-time intelligence. But these platforms can be very expensive: for data to be analyzed quickly, all of it has to live on costly solid-state flash storage.
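
To make the index-and-search loop described above concrete, here is a minimal, purely illustrative sketch using the official Elasticsearch Python client. The endpoint, index name, and log fields are assumptions for the example, not details from the case study.

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

# Connect to an Elasticsearch node (local endpoint is an assumption).
es = Elasticsearch("http://localhost:9200")

# Index one application log event; the field names here are illustrative.
es.index(
    index="app-logs",
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": "checkout",
        "level": "ERROR",
        "message": "payment gateway timeout",
    },
)

# Moments later, the event is searchable in near real time.
resp = es.search(index="app-logs", query={"match": {"message": "timeout"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```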

Fortunately, there is now a solution, powered by Vizion.ai with Intel Optane technology, that delivers a 10X reduction in the cost of storing data in Elasticsearch while keeping it hot and searchable. You no longer have to throw away your data.

In this case study, Intel reviews how data center provider phoenixNAP worked with Vizion.ai to enable huge volumes of application and server log metadata to be stored in object storage and searched, while leveraging Intel® Optane™ DC persistent memory as a cache. This breakthrough memory tier enables a 300% improvement in Elasticsearch indexing performance while dramatically lowering storage costs.
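
The Vizion.ai/Optane caching tier itself is proprietary and not exposed as code in the case study. For orientation only, stock Elasticsearch expresses a comparable hot-to-cold progression (fast storage for fresh indices, cheap object storage for older but still-searchable ones) through an index lifecycle management (ILM) policy; the sketch below is that standard mechanism, with a hypothetical repository name and timings, not Vizion.ai's implementation.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A hot-to-cold tiering idea expressed as a standard Elasticsearch ILM
# policy. The policy name, repository, and timings are illustrative.
es.ilm.put_lifecycle(
    name="logs-tiering",
    policy={
        "phases": {
            # Recent indices stay on the fastest storage for heavy indexing.
            "hot": {
                "actions": {
                    "rollover": {
                        "max_primary_shard_size": "50gb",
                        "max_age": "1d",
                    }
                }
            },
            # Older indices move to cheap object storage, yet remain
            # searchable via searchable snapshots.
            "cold": {
                "min_age": "7d",
                "actions": {
                    "searchable_snapshot": {
                        # Hypothetical snapshot repository backed by
                        # object storage.
                        "snapshot_repository": "object-store-repo"
                    }
                },
            },
        }
    },
)
```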

The result is a cost-effective, efficient, and secure log analytics solution that delivers Elasticsearch as a service, frees up IT management time, and scales without prohibitively expensive storage. The case study also includes a link to activate a free Vizion.ai account so you can experience Intel Optane-accelerated Elasticsearch for yourself.


Read The Case Study