How to Use Operational IT Data for PLG
Operational IT data, such as log data and other application telemetry, can play an important role in understanding your users. Leveraging user data to continuously optimize and improve products is a core tenet of product-led growth (PLG).
This data can be useful for a number of purposes, including:
- Converting free trials to paid, or upselling existing users
- Understanding customer behaviors to develop a product roadmap
- Blending product telemetry data with marketing/sales data
- Making products more stable and reliable for customers
Let’s learn more about PLG and how IT telemetry data can power strategic growth.
What is PLG?
PLG is a growth model in which product usage drives customer acquisition, expansion, and retention. SaaS companies like Slack and Calendly have popularized PLG by using the product itself to create a pipeline of active users. The trend has taken off, with 61% of Cloud 100 companies embracing product-led growth.
Enterprise software buying expectations have shifted toward self-evaluation and self-service. As a result, many teams have transitioned to a product-led motion vs. a traditional sales-led motion. According to Gartner, 33% of buyers (and 44% of millennial buyers) demand a sales-free experience.
To provide this experience, PLG companies invest in robust product data to track, measure, and analyze user behavior. Another important component of PLG is building a growth team that runs experiments on PLG data to incrementally improve the user journey. The types of roles that deal with PLG data might include:
- ProductOps: This team is responsible for the behind-the-scenes parts of product development and product management, leveraging data to consistently drive performance improvements.
- CloudOps: This team manages the processes and tools you need to run applications, services, workloads, and infrastructure on public cloud platforms such as AWS, Microsoft Azure, and Google Cloud. Their goals include improving stability and agility.
- DataOps engineer: DataOps is a collaborative data management practice that applies an agile methodology to developing and delivering analytics. DataOps engineers typically bring together DevOps practices with data engineering and data science teams.
Read: Inside DataOps: 3 Ways DevOps Analytics Can Create Better Products
Using IT telemetry data for PLG strategy
Operational data can provide deeper application insights to help teams understand user behaviors, including:
- Active customers
- Time in app and time to value
- Customer activities for discovery, activation, monetization, retention and referral
- Types of users
- and more.
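As a minimal sketch of how metrics like these fall out of raw telemetry, the snippet below computes active users and time to value from a list of log events. The event schema (user ID, event name, timestamp) and the `signup`/`first_report` milestone names are hypothetical, not a specific product's instrumentation.

```python
from datetime import datetime

# Hypothetical telemetry events: (user_id, event_name, ISO timestamp)
events = [
    ("u1", "signup",       "2023-05-01T09:00:00"),
    ("u1", "first_report", "2023-05-01T09:12:00"),
    ("u2", "signup",       "2023-05-01T10:00:00"),
    ("u2", "first_report", "2023-05-02T10:30:00"),
    ("u3", "signup",       "2023-05-01T11:00:00"),
]

def active_users(events):
    """Distinct users who emitted at least one event."""
    return {user for user, _, _ in events}

def time_to_value(events, start="signup", value="first_report"):
    """Hours from the start event to the first value event, per user.

    Users who never reached the value event are omitted.
    """
    ts = {}
    for user, name, stamp in events:
        ts.setdefault(user, {})[name] = datetime.fromisoformat(stamp)
    return {
        user: (t[value] - t[start]).total_seconds() / 3600
        for user, t in ts.items()
        if start in t and value in t
    }

print(len(active_users(events)))  # 3 distinct users
print(time_to_value(events))      # u1 reached value in 0.2h, u2 in 24.5h
```

In practice these events would be extracted from application logs at much larger scale, but the aggregation logic is the same.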
Today, many teams are challenged by the fact that IT data is locked within IT and inaccessible to other parts of the organization. In addition, there’s no single source of truth across disparate systems and SaaS applications. With the rise of SaaS, each department has its own stack — martech, CRM, customer experience, digital experience, and more. While these applications help manage workflows, they miss the multidimensionality of today’s customers. On top of that, different enterprise data end users rely on many different analytics tools.
Read: Unlocking Data Literacy Part 3: Choosing Data Analytics Technology
Across this entire tech stack, data engineers are responsible for creating data pipelines, managing schemas, and making sure data is accurate and up to date. However, waiting on this data engineering work can leave data outdated by the time the product team or a business analyst needs it.
Solving IT data management challenges with a data lake architecture
Many teams find a data lake architecture useful for SaaS companies whose product data lives in multiple places. In this model, a self-service data lake engine sits on top of a cloud object storage repository (e.g., Amazon S3 or Google Cloud Storage), delivering key features that help organizations achieve data lake benefits and realize the full value of their data.
Taking it a layer deeper, raw data is produced by applications (either on-prem or in the cloud) and ingested into Amazon S3 buckets with services like Amazon CloudWatch or a log aggregator tool like Logstash. An analytical database like ChaosSearch runs as a managed service in the cloud, allowing organizations to:
- Automatically discover, normalize, and index data in Amazon S3 at scale.
- Index data with extreme compression for ultra cost-effective data storage.
- Store data in a proprietary, universal format called Data Edge that removes the need to transform data in different ways to satisfy alternative use cases.
- Perform textual searches and relational queries on indexed data.
- Effectively orchestrate indexing, searching, and querying operations to optimize performance and avoid degradations.
- Clean, prepare, and virtually transform data without moving it out of Amazon S3 buckets, eliminating the ETL process and avoiding data egress fees.
- Analyze indexed data directly in Amazon S3 with data visualization and BI analytics tools.
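Because the indexed data is exposed through an Elasticsearch-compatible search interface, teams can query it with standard query DSL. The sketch below only builds such a query body; the index, field names (`service.keyword`, `level.keyword`, `status_code`), and the idea of counting checkout errors are illustrative assumptions, not a documented schema.

```python
import json

def error_count_query(service, hours=24):
    """Build an Elasticsearch-style query body that counts ERROR-level
    log events for one service over a recent window, bucketed by
    HTTP status code. All field names are hypothetical."""
    return {
        "size": 0,  # aggregations only, no raw hits
        "query": {
            "bool": {
                "filter": [
                    {"term": {"service.keyword": service}},
                    {"term": {"level.keyword": "ERROR"}},
                    {"range": {"@timestamp": {"gte": f"now-{hours}h"}}},
                ]
            }
        },
        "aggs": {
            "by_status": {"terms": {"field": "status_code"}}
        },
    }

body = error_count_query("checkout")
print(json.dumps(body, indent=2))
```

The same query shape works for product questions (feature adoption, activation funnels) by swapping the filters and aggregation fields.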
Telemetry data for PLG of apps and SaaS
Benefits to a cloud data lake approach for PLG
The cloud data lake approach described above helps teams gain a single source of truth for live log analytics, so everyone at the company can trust data across domains and be truly data-driven. Instead of working in silos, teams can blend data across IT and SaaS applications to discover unique insights. With live ingestion and the ability to explore data in real time, DataOps and ProductOps teams no longer have to wait to discover product and customer insights.
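To make "blending" concrete, here is a minimal sketch that joins product telemetry (derived from logs) with CRM records on an account key to surface upsell candidates. The datasets, field names, and the activity threshold are all hypothetical.

```python
# Hypothetical aggregates derived from application logs
telemetry = [
    {"account": "acme",   "weekly_active_users": 42},
    {"account": "globex", "weekly_active_users": 3},
]

# Hypothetical records from a CRM system
crm = [
    {"account": "acme",   "plan": "free", "arr": 0},
    {"account": "globex", "plan": "paid", "arr": 12000},
]

def blend(telemetry, crm):
    """Inner-join the two datasets on the account key."""
    by_account = {row["account"]: row for row in crm}
    return [
        {**t, **by_account[t["account"]]}
        for t in telemetry
        if t["account"] in by_account
    ]

for row in blend(telemetry, crm):
    # Flag highly active free accounts as upsell candidates
    # (illustrative rule: free plan with > 20 weekly active users)
    row["upsell_candidate"] = (
        row["plan"] == "free" and row["weekly_active_users"] > 20
    )
    print(row)
```

In a data lake setup, this join would happen at query time across indexed sources rather than in application code, but the idea is the same: usage signals plus business context yield growth insights neither source provides alone.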
And by leveraging existing cloud storage infrastructure, teams can lower the cost of data ingestion and retention, gaining new capabilities to analyze operational data at scale.
Learn more about unlocking the hidden value of log analytics.