As observability 1.0 evolves into observability 2.0, questions arise about how this new approach can save money, time, and improve developer experience.
The journey from observability 1.0 to observability 2.0 reveals the struggle to define observability and the evolving need for tools that truly support developers. Observability 2.0 moves beyond monitoring operational issues, aiming to “shift left” by empowering developers from the beginning of the software development process.
The evolution of observability
To understand how observability is evolving and its impact on developer experience, we need to understand the tumultuous history of trying to define what observability is.
In 1960, Rudolf E. Kálmán introduced the concept of observability for linear dynamic systems, defining it as “a measure of how well internal states of a system can be inferred from knowledge of its external outputs.”
In 2016, the Honeycomb team popularized and expanded this definition to mean “the power to ask new questions of your system, without having to ship new code or gather new data in order to ask those new questions”.
In 2017, Peter Bourgon suggested that observability consists of “three pillars” – metrics, logs, and traces – a definition that found strong support and popularity within the application performance monitoring (APM) tooling industry, as it aligned with their products.
In subsequent years, many industry thought leaders have tried to clarify the difference between observability, telemetry, and monitoring – see, for example, Ben Sigelman’s Debunking the ‘Three Pillars of Observability’ Myth – but to no avail.
Then, in August 2024, Honeycomb cofounder Charity Majors’ article “Is it time to version observability?” finally articulated the difference between the two main definitions we’ve been ascribing to this single term. She identifies observability 1.0 as the “three pillars” generation of tooling (i.e., closely tied to APM tools) and observability 2.0 as a new generation of tools oriented toward an open-ended investigation of systems and closer integration with the software development lifecycle (SDLC).
Observability 1.0 vs observability 2.0
Observability 1.0 and 2.0 are not mutually exclusive; both serve valuable but distinct roles.
Observability 1.0 leverages APM tools post-deployment, gathering vast amounts of telemetry data (metrics, logs, and traces) to monitor system health, spot trends, and flag known issues.
Often utilized by ops or DevOps, it involves dashboards and alerts to detect “known unknowns.” This means looking for predictable issues using a predefined set of metrics and logs.
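The “known unknowns” pattern can be sketched in a few lines: a predefined metric compared against a fixed threshold, with an alert when it is crossed. This is a hedged illustration only – the metric name and threshold are hypothetical, not drawn from any particular tool.

```python
# Observability 1.0-style alerting: a predefined metric, a predefined threshold.
# Anything outside this rule (an "unknown unknown") goes undetected.

ERROR_RATE_THRESHOLD = 0.05  # hypothetical: alert when >5% of requests fail


def should_alert(error_count: int, request_count: int) -> bool:
    """Fire an alert when the error rate crosses the predefined threshold."""
    if request_count == 0:
        return False
    return (error_count / request_count) > ERROR_RATE_THRESHOLD


# 12 errors out of 100 requests (12%) trips the alert; 2 out of 100 does not.
print(should_alert(12, 100), should_alert(2, 100))
```

The limitation the article describes falls out directly: the rule only catches failures someone anticipated well enough to encode in advance.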
However, issues and problems in complex distributed systems are often non-linear, difficult to predict, and rarely isolated – making it impossible to anticipate every potential failure or build a dashboard for each one.
Likewise, understanding what went wrong, why, and how requires correlating data across various system layers, which is often time-consuming and prone to human error.
How observability 2.0 is shaking things up
Enter observability 2.0. This new generation of tools helps developers understand system behaviors and reveal “unknown unknowns” throughout the entire SDLC. Rather than collecting high volumes of data and drawing conclusions only from aggregate results, this approach addresses root causes by pinpointing the specific data points and interactions that would otherwise be missed.
Observability 2.0’s growing adoption is also linked to the popularity of OpenTelemetry (OTel), a Cloud Native Computing Foundation (CNCF) project. OTel provides a unified, open-source standard for collecting, exporting, and analyzing telemetry data (traces, metrics, logs) across diverse systems and environments.
This interoperability and vendor-agnostic approach aims to eliminate vendor lock-in, streamline telemetry collection, and improve developer productivity.
With over 1,100 companies and 9,000 contributors, OTel has quickly become an industry standard for telemetry data, streamlining how teams implement observability across complex systems.
The impact on developer experience
Developer experience encompasses how developers feel about, think about, and value their work. Developer experience isn’t just about productivity; it’s closely tied to job satisfaction, engagement, retention, and even business performance.
A good developer experience reduces friction in daily tasks, minimizing things like interruptions, unrealistic deadlines, unreliable tools, technical debt, and poor documentation.
Engineering leaders can improve developer experience by focusing on three areas:
- Feedback loops: fast feedback loops streamline work, helping developers learn and adjust continuously, reducing friction.
- Cognitive load: high cognitive load, often caused by complex systems or inadequate documentation, leads to mistakes and burnout. Reducing this load helps developers focus and perform better.
- Flow state: minimizing disruptions helps developers maintain deep focus, often referred to as being “in the zone.”
The potential of observability 2.0
Observability 2.0 opens the doors for new use cases and tools that can enhance the developer experience through:
- Real-time, context-rich insights: with OTel, developers can gain immediate visibility into all the components, dependencies, and APIs within their systems, providing instant feedback on how any given change or new feature might affect the overall system. This improves confidence and helps them avoid creating technical debt (especially architectural technical debt).
- Streamlined debugging: instead of manually sifting through large volumes of aggregated data and relying on guesswork, tools like Multiplayer.app leverage OTel to enable platform-level debugging with deep session replays. Session replays allow devs to retrace user experiences with behind-the-scenes system captures – complete with front-end views, distributed traces, metrics, and logs – with a single click, saving valuable time.
- Single source of truth: scattered documentation and resources can create roadblocks. Observability 2.0 tools unify data across system layers, correlating information for a holistic view of the system. This helps developers focus on building and improving software rather than piecing together fragmented insights.
The future of observability 2.0
Observability 2.0 provides developers and decision-makers with real-time, actionable insights. It opens doors to new tools and use cases that directly tackle developer pain points, enabling data-driven decisions instead of relying on guesswork.
Looking ahead, observability 2.0 will continue to empower developer teams by automating complex troubleshooting, speeding up onboarding, and breaking down knowledge silos. As it evolves, this shift could transform the landscape of software development and operations, leading to more efficient, cost-effective, and resilient engineering organizations.