
2023 Application Performance Management Predictions - Part 2

December 6, 2022 | APMdigest

The Disclaimer

The below article was originally published in APMdigest.

The Article

Industry experts offer thoughtful, insightful, and often controversial predictions on how APM, AIOps, Observability, OpenTelemetry and related technologies will evolve and impact business in 2023. Part 2 covers more on observability.

Start with: 2023 Application Performance Management Predictions – Part 1


Observability will become more embedded in developer workflows. The industry is starting to understand that instrumentation isn’t an afterthought but rather something that is crucial to consider and introduce right from the start. It might sound obvious in retrospect, but how could we not want to validate that the code we’re writing is actually behaving correctly in the real world? I think we can expect to see several enhancements that help introduce the telemetry needed to validate your code as you’re first starting to write it.
George Miranda
Head of Ecosystems & Partnerships, Honeycomb
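Introducing telemetry as the code is first written can be as simple as wrapping units of work in spans. Here is a minimal, hypothetical sketch of the idea in pure Python — a hand-rolled tracer rather than any specific SDK (in practice, an instrumentation API such as OpenTelemetry fills this role):

```python
import time
from contextlib import contextmanager

# Collected spans; a real SDK would export these to an observability backend.
SPANS = []

@contextmanager
def span(name, **attributes):
    """Record the name, attributes, duration, and outcome of a unit of work."""
    start = time.monotonic()
    record = {"name": name, "attributes": dict(attributes), "error": None}
    try:
        yield record
    except Exception as exc:
        record["error"] = type(exc).__name__
        raise
    finally:
        record["duration_ms"] = (time.monotonic() - start) * 1000
        SPANS.append(record)

def handle_order(order_id, items):
    # Instrumented from the start: the code's real-world behavior can be
    # validated in any environment, not just reconstructed after an incident.
    with span("handle_order", order_id=order_id, item_count=len(items)):
        return sum(items)

handle_order("ord-42", [3, 5])
```

Because the span context manager is part of the function from day one, the same telemetry that validates the code locally is what debugs it in production.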

Developers at legacy organizations will begin seeing the benefits that a modern approach to observability can offer them beyond the logs and metrics that they’ve had to use previously. With the rise of eBPF auto-instrumentation, batteries-included ease of use, and standards compliance with OpenTelemetry, it’s easier than ever for enterprises to have a choice when adopting observability.
Liz Fong-Jones
Field CTO, Honeycomb

As developers start to become more aware of their production systems, there’s going to be more of a shift to developers using instrumentation in their day-to-day tasks. Right now, instrumentation, specifically tracing, is mostly reserved for SRE and infrastructure engineers and added after the fact. Developers are starting to realize that rich telemetry is a game-changer when it comes to productivity when you’re responsible for fixing production. With this, we’ll start to see developers of all types (back, front and middle) be interested in how they can use Tracing and Observability techniques to help with local development.
Martin Thwaites
Developer Advocate, Honeycomb


We’re all too familiar with the DevOps infinity loop — the ideal workflow where every stage of the software development lifecycle feeds into the next. However, for many organizations attempting to bring developers and SREs together under the DevOps banner, that loop is broken. Developers will “push and pray” without having a full understanding of how their changes will affect performance and the user experience. As a result, companies miss the opportunity to improve products faster and delight the customer sooner. In 2023, organizations will fix the DevOps workflow by putting code at the center of the collaboration, giving ops engineers and developers visibility into the performance of applications across the entire tech stack, and reducing context switching. The DevOps teams that achieve a single pane of glass view — for collaboration, tracking, and observability — will greatly improve clarity and time to resolution and provide the best possible outcomes for their organization.
Peter Pezaris
SVP of Strategy and User Experience, New Relic


The need to understand why code fails in production will finally, finally make the leap from an ops problem to being a dev-inclusive problem. Service ownership moving into the mainstream consciousness will force tooling to learn how to speak the language of developers, and the increasing popularity of tracing will serve as a dev-friendly entry point into the world of observability. Both will help make production a less scary place, one where developers can thrive.
Christine Yen
Co-Founder and CEO, Honeycomb


One key way to increase job satisfaction among developers is to foster a sense of ownership and control whenever possible, and new approaches to observability offer several ways to do this. In 2023, we expect the developer experience to become central in observability initiatives — for example, allowing developers to have full, direct access to all the data they need to do their jobs (so they’re never missing a dataset and don’t have to ask DevOps or SRE team members for access in order to make fixes); and automating the onboarding of new services so developers can have instant, real-time visibility into their mission-critical production environments.
Ozan Unlu
CEO, Edge Delta


Observability data will be used for an increasing number of use cases going beyond understanding system performance. Modern software delivery approaches, like progressive delivery, heavily rely on observability data, and this data will be used for more automation use cases. Additionally, new data sources — especially security data — will be integrated with more metadata (like topology information) and system change events (like deployments).
Alois Reitbauer
Chief Technology Strategist, Dynatrace


Observability data is pure gold when it comes to revealing bugs and other issues which could cause a service system outage. But with data being produced at such an incredible rate, organizations find it harder (and costlier) to get their arms around all of it in order to identify anomalies and growing hotspots, which can sprout up virtually anywhere. In 2023, we expect organizations will increasingly turn the traditional observability paradigm on its head — pushing compute power to data vs. data to compute power — in order to leverage the full breadth of their data stores while maintaining efficiency and keeping storage costs in check.
Ozan Unlu
CEO, Edge Delta


Data is everywhere, and enterprises will increase investments in technologies that democratize data access and provide distributed analytics and search capabilities over data stored in cost-effective object stores, without having to go through the traditional process of indexing and storing data in centralized systems.
Tejo Prayaga
Sr. Director, Product Management, CloudFabrix


Data gravity is a problem facing any enterprise dealing with application performance on the digital transformation journey. In 2023, more enterprises will begin to adopt an agile architecture in which data location is no longer an impediment to application performance — accelerating time to insight and time to value.
Russel Davis
Chief Operating Officer and Chief Product Officer, Vcinity


I believe that in 2023 we will witness growing concern over the exploding costs of observability platforms. We will see more work on reducing data volumes through sophisticated collection mechanisms, like tail-based sampling in solutions such as OpenTelemetry. Alongside that, we will see a growing number of vendors offering data pipelines that cut costs after the data is collected, using rule-based capture logic and transformations such as converting raw data to metrics. We will see DevOps teams taking a more active part in maintaining a reasonable budget across their entire stack as observability solutions start offering more than just performance monitoring and introduce features that help teams track their cost-effectiveness.
Shahar Azulay
CEO and Co-Founder, groundcover
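Tail-based sampling makes the keep/drop decision only after a trace has completed, so errors and latency outliers are always retained while routine traffic is sampled down. A hypothetical sketch of such a rule — the thresholds and field names here are illustrative assumptions, not any product’s defaults:

```python
import random

def keep_trace(trace, slow_ms=500, baseline_rate=0.01, rng=random):
    """Decide, after the trace has completed, whether to retain it."""
    if trace["error"]:
        return True                       # always keep failures
    if trace["duration_ms"] > slow_ms:
        return True                       # always keep latency outliers
    return rng.random() < baseline_rate   # keep a small slice of normal traffic

keep_trace({"error": True, "duration_ms": 20})    # kept: has an error
keep_trace({"error": False, "duration_ms": 900})  # kept: slow outlier
```

Because the interesting 1% of traces survives by rule rather than by luck, total volume (and cost) drops sharply without losing the data that debugging actually needs.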


Increasing OpenTelemetry adoption makes the observability market more open, diverse, and competitive, which should drive prices down and make observability more affordable for small teams and companies.
Vladimir Mihailenco
Co-Founder, Uptrace


“Adaptive observability” takes observability, the ability to infer the internal state of a system by analyzing its outputs, and applies deep data analytics to increase or decrease monitoring levels in response to the health of specific IT operations. A very fine monitoring level can produce volumes of data that become unmanageable, and carries the risk of genuine anomalies being missed in the noise; a very coarse monitoring level collects too little data and can lead to incomplete diagnosis and insufficient insights. Adaptive observability integrates two new and active areas of research — adaptive monitoring and adaptive probing — to continuously assess ITOps and intelligently route and re-route monitoring levels, data-gathering depth, and frequency to the areas where issues appear. When an issue is identified, the amount of data collected around it is increased. Once the issue is resolved and the system is healthy again, it is no longer necessary to collect as much data at such a high frequency, and resources can be redeployed elsewhere in ITOps. Adaptive observability streamlines decision-making and problem-solving and optimizes IT resources.
Maitreya Natu
Data and AI Scientist, Digitate
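The feedback loop described above — turning collection depth up where issues appear and back down when health returns — can be sketched as a simple controller. The thresholds and level scale below are illustrative assumptions, not a published algorithm:

```python
def next_sampling_level(current_level, error_rate,
                        unhealthy_threshold=0.05, healthy_threshold=0.01,
                        min_level=1, max_level=10):
    """Adapt data-collection depth (1 = coarse, 10 = fine) to observed health."""
    if error_rate > unhealthy_threshold:
        # Issue detected: collect more data around it, up to the ceiling.
        return min(current_level + 2, max_level)
    if error_rate < healthy_threshold:
        # Healthy again: dial collection back and free resources.
        return max(current_level - 1, min_level)
    return current_level  # in between: hold steady

next_sampling_level(3, error_rate=0.08)   # escalates collection depth
next_sampling_level(3, error_rate=0.001)  # relaxes collection depth
```

A production system would drive this from real health signals (error rates, latency, probe results) on a periodic loop, but the shape of the decision is the same: fine-grained data only where and when it pays for itself.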


As a company builds its observability practice, it’s tempting to focus primarily on data analysis and centralizing all of its information. That approach seems to point toward selecting a single vendor, but that’s a short-sighted decision. Being locked into a single provider can hold an enterprise’s data hostage to rising licensing and storage costs: a lethal mistake to make as companies keep a close eye on IT budgets in 2023. For true (and immediate) observability, there’s a wide variety of products and platforms that need to connect to each other, so it’s critical to select a tool that integrates with them all. That’s why open-source tools are essential to meet today’s observability challenges. With an open-source or vendor-neutral approach, users with multiple backends or tools don’t have to worry that their data is tied to one endpoint over another.
Eduardo Silva
Co-Founder and CEO, Calyptia