Operational IT data, such as log data and other application telemetry, can play an important role in understanding your users. Leveraging user data to continuously optimize and improve products is a core tenet of product-led growth (PLG).
This data can serve a number of purposes, from understanding user behavior to powering growth experiments and building a single source of truth across teams.
Let’s take a closer look at PLG and how IT telemetry data can power strategic growth.
PLG is a growth model in which product usage drives customer acquisition, expansion, and retention. SaaS companies like Slack and Calendly have popularized PLG by using the product itself to create a pipeline of active users. The trend has taken off, with 61% of Cloud 100 companies embracing product-led growth.
Enterprise software buying expectations have shifted toward self-evaluation and self-service. As a result, many teams have transitioned from a traditional sales-led motion to a product-led one. According to Gartner, 33% of buyers (and 44% of millennial buyers) demand a sales-free experience.
To provide this experience, PLG companies invest in robust product data to track, measure, and analyze user behavior. Another important component of PLG is building a growth team, typically spanning product, engineering, and analytics roles, that runs experiments on this data to incrementally improve the user journey.
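Purely as an illustration of what "running experiments on PLG data" can look like in practice, here is a minimal, self-contained Python sketch of a two-proportion z-test a growth team might run on experiment results. The `conversion_lift` helper and the conversion numbers are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def conversion_lift(a_conv, a_total, b_conv, b_total):
    """Two-proportion z-test for a hypothetical A/B experiment.

    Returns the absolute lift of variant B over variant A and a
    two-sided p-value under the pooled-proportion null hypothesis.
    """
    pa, pb = a_conv / a_total, b_conv / b_total
    pooled = (a_conv + b_conv) / (a_total + b_total)
    se = sqrt(pooled * (1 - pooled) * (1 / a_total + 1 / b_total))
    z = (pb - pa) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return pb - pa, p_value

# Example: did the new onboarding flow (B) convert better than the old (A)?
lift, p = conversion_lift(a_conv=120, a_total=2400, b_conv=156, b_total=2350)
print(f"lift={lift:.3%}, p={p:.3f}")
```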
Read: Inside DataOps: 3 Ways DevOps Analytics Can Create Better Products
Operational data can provide deeper application insights that help teams understand user behaviors: which features users engage with, where they drop off, and how usage changes over time.
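As a deliberately simplified sketch of how raw telemetry becomes behavioral insight, the Python below assumes a hypothetical newline-delimited JSON event shape (with `user_id`, `event`, and `feature` fields) and tallies feature usage per user:

```python
import json
from collections import Counter, defaultdict

def feature_usage_by_user(log_lines):
    """Tally feature usage per user from newline-delimited JSON logs.

    Assumes a hypothetical event shape such as:
    {"user_id": "u-123", "event": "feature_used", "feature": "export_csv"}
    """
    usage = defaultdict(Counter)
    for line in log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed telemetry lines
        if not isinstance(event, dict):
            continue
        user, feature = event.get("user_id"), event.get("feature")
        if event.get("event") == "feature_used" and user and feature:
            usage[user][feature] += 1
    return usage

# Example with inline sample events; in practice these would stream
# from an application log file or a log aggregator.
sample = [
    '{"user_id": "u-123", "event": "feature_used", "feature": "export_csv"}',
    '{"user_id": "u-123", "event": "feature_used", "feature": "export_csv"}',
    '{"user_id": "u-456", "event": "feature_used", "feature": "share_link"}',
]
for user, features in feature_usage_by_user(sample).items():
    print(user, features.most_common(3))
```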
Today, many teams are challenged by the fact that IT data is locked within IT, inaccessible to the rest of the organization. In addition, there’s no single source of truth across disparate systems and SaaS applications. With the rise of SaaS, each department runs its own stack: martech, CRM, customer experience, digital experience, and more. While these applications help manage workflows, they miss the multidimensionality of today’s customers. On top of that, different enterprise data end users rely on a patchwork of analytics tools.
Read: Unlocking Data Literacy Part 3: Choosing Data Analytics Technology
Across this entire tech stack, data engineers are responsible for creating data pipelines, managing schemas, and making sure data is accurate and up to date. However, waiting on this data engineering work can leave the data outdated by the time the product team or a business analyst needs it.
Many teams find that a data lake architecture works well for SaaS companies whose product data lives in multiple places. In this model, a self-service data lake engine sits on top of a cloud object storage repository (e.g., Amazon S3 or Google Cloud Storage), delivering the query and analytics capabilities organizations need to realize the full value of their data.
Taking it a layer deeper, raw data is produced by applications (either on-prem or in the cloud) and ingested into Amazon S3 buckets with services like Amazon CloudWatch or a log aggregator tool like Logstash. An analytical database like ChaosSearch then runs as a managed service in the cloud, allowing organizations to index and analyze that data in place, without moving it into a separate store.
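In practice, tools like Logstash or CloudWatch handle the ingestion step for you. Purely to make the flow concrete, here is a minimal Python sketch of what a log shipper does: batching JSON events and writing them to a time-partitioned S3 key. The bucket name is hypothetical, and AWS credentials are assumed to be configured in the environment:

```python
import json
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-telemetry"  # hypothetical bucket name

def ship_log_batch(events, bucket=BUCKET):
    """Write a batch of log events to S3 as newline-delimited JSON.

    Time-partitioned keys (logs/YYYY/MM/DD/...) keep the bucket
    friendly to downstream indexing and lifecycle rules.
    """
    key = time.strftime("logs/%Y/%m/%d/batch-%H%M%S.json")
    body = "\n".join(json.dumps(e) for e in events)
    s3.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))
    return key

# Example: ship a small batch of telemetry events
events = [
    {"user_id": "u-123", "event": "feature_used", "feature": "export_csv"},
    {"user_id": "u-456", "event": "login"},
]
ship_log_batch(events)
```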
[Image: Telemetry data for PLG of apps and SaaS]
The cloud data lake approach described above can give teams a single source of truth for live log analytics, so everyone at the company can trust data across domains and be truly data-driven. Instead of working in silos, teams can blend data across IT and SaaS applications to discover unique insights. With live ingestion and the ability to explore data in real time, DataOps and ProductOps teams no longer have to wait to uncover product and customer insights.
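Assuming the data lake engine exposes an Elasticsearch-compatible search API (a common pattern for log analytics engines), a downstream team could compute a product metric like daily active users directly from the log data. The endpoint, index, and field names below are hypothetical:

```python
import requests

# Hypothetical Elasticsearch-compatible endpoint and index name;
# substitute your own deployment's URL and credentials.
ENDPOINT = "https://search.example.com"
INDEX = "app-telemetry"

# Daily active users: a date histogram with a cardinality
# sub-aggregation on user_id, computed directly over the logs.
query = {
    "size": 0,
    "aggs": {
        "daily": {
            "date_histogram": {"field": "@timestamp", "calendar_interval": "day"},
            "aggs": {"active_users": {"cardinality": {"field": "user_id"}}},
        }
    },
}

resp = requests.post(f"{ENDPOINT}/{INDEX}/_search", json=query, timeout=30)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["daily"]["buckets"]:
    print(bucket["key_as_string"], bucket["active_users"]["value"])
```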
And by leveraging existing cloud storage infrastructure, teams can lower the cost of data ingestion and retention while gaining new capabilities to analyze operational data at scale.
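One concrete lever for retention cost is object storage lifecycle rules. This sketch (bucket name, prefixes, and retention windows all hypothetical) uses boto3 to tier older log objects to a cheaper storage class and expire them after a retention window:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; adjust prefixes, ages, and storage classes to your needs.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-telemetry",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Move logs to infrequent access after 30 days...
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                ],
                # ...and delete them after a one-year retention window.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```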
Learn more about unlocking the hidden value of log analytics.