OpenTelemetry: The Enterprise Standard for Observability Data Collection

Oct 3, 2025

TL;DR

  • OTel fixes fragmentation. OpenTelemetry standardizes logs, metrics, and traces (via SDKs, Collector, and OTLP), so services emit a single format and flow through a unified pipeline instead of four vendor-specific agents and dashboards.

  • But OTel stops at the pipeline. At scale, you still face the firehose: exploding data volumes, proprietary query layers that block forensic analysis, and runaway ingestion/storage bills.

  • Cardinal makes OTel data usable. Lakerunner transforms S3-compatible storage into an observability lake, featuring event-driven ingestion, Parquet transformation, automatic compaction and indexing, and instant queryability (e.g., via Grafana), all in open formats.

  • Enterprise payoffs. One authoritative dataset aligns SRE/ops/data teams, enables vendor choice without re-instrumentation, and keeps strong security/compliance controls (TLS, redaction, IAM) in your hands, while “store once, query many” drives the cost savings.

  • Adoptable pattern. Instrument once with OTel, Collector (agent + gateway), and multi-export: send alert-critical slices to APM, while keeping full-fidelity data in the lake. Debug faster, retain longer, spend less.

The High Cost of Fragmented Observability

Imagine you’re running platform engineering at a SaaS company that just hit hyper-growth. The architecture looks like every modern system: a checkout, payments, and user-profile service deployed on Kubernetes, a couple of Kafka and Airflow pipelines moving data around, and even a monolithic reporting service still running on bare VMs.

On paper, observability seems covered. Prometheus scrapes metrics to track service health. Jaeger traces calls across microservices. ELK indexes logs for audits and troubleshooting. Datadog agents run on half the fleet because one team relies heavily on its dashboards.

In practice, the cracks show quickly. During an outage, none of these tools agrees on the root cause. Every dashboard tells a slightly different story, and engineers spend more time reconciling observability data than fixing the incident. Meanwhile, costs spiral as duplicate agents and redundant storage pile up in the background.

This isn’t an edge case; it’s the industry default. Enterprises inherit a patchwork of observability tools that don’t communicate with each other, resulting in high costs, low trust, and frustrated teams.

This is the gap OpenTelemetry was created to fill. Instead of each vendor defining its own agents, formats, and APIs, OTel provides a vendor-neutral standard for collecting logs, metrics, and traces across the stack. But that only solves the fragmentation problem. Once all telemetry flows into OpenTelemetry, a new challenge arises: what to do with the flood of data?

That’s the journey this article covers. We’ll start by explaining how OpenTelemetry works and why it has become the industry standard. Then we’ll look at the scale problems it doesn’t solve on its own. Finally, we’ll explore how Cardinal’s Lakerunner turns raw telemetry into something enterprises can actually query, retain, and afford.

The Push for a Unified Observability Standard

Over the past decade, observability has become critical infrastructure. Every team picked the tool that solved their immediate problem:

  • Metrics? Prometheus or Datadog.

  • Traces? Jaeger or Zipkin.

  • Logs? ELK or Splunk.

That approach worked at a small scale, but in modern enterprises, it has created fragmented observability stacks. Each system comes with its own agents, APIs, and query languages. Engineers end up context-switching across four or five dashboards just to troubleshoot a single incident.

In our SaaS company example, the operations team pays for four different observability vendors. Yet when a customer reports slowness, there’s no single source of truth. Each tool reveals a partial story, and incident resolution often becomes a process of reconciliation.

This fragmentation isn’t just frustrating; it creates hidden costs:

  • Agent overhead: multiple agents running on the same node, resulting in duplicated work.

  • Vendor lock-in: once telemetry is flowing into a vendor, moving away is painful.

  • Data silos: traces, metrics, and logs live in different backends, making it hard to correlate issues.

The Need for a Unified Standard

This is where OpenTelemetry (OTel) comes in. OTel defines a vendor-neutral standard for generating, collecting, and exporting observability data. Instead of writing separate integrations for every tool, developers can instrument their applications once with OTel SDKs and decide later which backend to send the data to.

The benefits are immediate:

  • Single pipeline: all telemetry data flows through the same APIs and collector.

  • Vendor flexibility: export to Datadog today, Grafana tomorrow, without touching application code.

  • Cross-team consistency: every microservice emits telemetry in the same format.

Back to our SaaS company, leadership’s demand for “one pipeline, not five” becomes reality. By standardizing on OpenTelemetry, they finally align every service and every team on the same foundation.

How Does OpenTelemetry Work?

Core components you actually use

OpenTelemetry (OTel) isn’t one agent; it’s a specification, SDKs, and a Collector that standardize how telemetry is produced, processed, and sent. You instrument once, then choose or change backends later.

Instrumentation SDKs (manual + auto)

  • Languages: Go, Java, Python, Node.js, .NET, and more.

  • Manual spans/counters/logs or auto-instrumentation for frameworks (Spring Boot, Django, Express, etc.).

  • Example (Python) keeps a DB query inside a trace:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider  # SDK provider (the API-only class can't be instantiated)

# In a real app you would also attach an OTLP exporter; the Collector setup below covers that hop.
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

# Wrap the DB call in a span so it appears inside the request's trace.
# (`cursor` is assumed to be an existing database cursor.)
with tracer.start_as_current_span("db.query"):
    cursor.execute("SELECT * FROM users WHERE active=1")

OpenTelemetry Collector

  • A standalone pipeline (Agent/DaemonSet and/or Gateway) with receivers, processors, and exporters.

  • You adjust sampling, enrichment, batching, and destinations without touching app code.

Exporters

  • The final hop to your tools (APM vendors, Grafana, Prometheus, object storage, etc.).

  • Prefer OTLP exporters; they preserve the OTel data model and are widely supported.

The Three Pillars OTel Standardizes

OTel unifies the classic “three pillars” of observability (traces, metrics, and logs) under one data model and context propagation, so you can follow a single request across systems with the same trace_id.

  • Traces: end-to-end request paths; each hop is a span (e.g., GET /checkout → payment_service → db.query).

  • Metrics: numeric time series for SLOs and trends (latency, error rate, CPU).

  • Logs: text/structured events that correlate to the same trace when context is propagated.

How Exports are Actually Set Up (SDK + Collector)

You’ll typically export from the SDK to the Collector using OTLP, then let the Collector fan out to one or more backends. This provides policy control (sampling, PII scrubbing, batching) in one place.

1) SDK → Collector (OTLP)

Configure the app to send to your Collector’s OTLP endpoint (gRPC or HTTP). The official env vars look like this:

# Single endpoint for all signals over gRPC
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"

# or HTTP endpoint
# export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4318"

# Optional headers (e.g., auth tokens if your collector requires them)
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=REDACTED"

# You can also scope per-signal if you want:
# OTEL_EXPORTER_OTLP_TRACES_ENDPOINT, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT, OTEL_EXPORTER_OTLP_LOGS_ENDPOINT

Many vendor guides follow the same pattern: the SDK (or auto-instrumentation) ships OTLP to a Collector, which then handles the secure export to the vendor.

2) Collector pipelines → destinations

A minimal, production-shaped Collector config (receivers, processors, exporters) looks like:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch: {}
  attributes:
    actions:
      - key: http.request.header.authorization
        action: delete   # scrub secrets before export

exporters:
  otlp/apm:
    endpoint: api.vendor.example:4317
    headers:
      api-key: ${VENDOR_API_KEY}
    # (optionally add tls config here)
  # add more exporters in parallel as needed (e.g., file, kafka, s3-proxy, etc.)

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes, batch]
      exporters: [otlp/apm]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/apm]
    logs:
      receivers: [otlp]
      processors: [attributes, batch]
      exporters: [otlp/apm]

  • You can run this as a DaemonSet agent and/or a central gateway; both patterns are first-class in OTel.

  • Tail sampling, resource enrichment, TLS, retries, and health checks are added in the same configuration, per the official documentation; a sketch follows below.
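
A minimal sketch of how those pieces can slot into the same file (endpoint addresses, certificate paths, and values are placeholders; check each option against the Collector version you actually run):

extensions:
  health_check:                        # liveness/readiness endpoint for the Collector itself
    endpoint: 0.0.0.0:13133

processors:
  resource:                            # example resource enrichment; add it to the pipelines' processor lists
    attributes:
      - key: deployment.environment
        value: production
        action: upsert

exporters:
  otlp/apm:
    endpoint: api.vendor.example:4317
    headers:
      api-key: ${VENDOR_API_KEY}
    tls:
      ca_file: /etc/otel/certs/ca.pem  # verify the vendor endpoint over TLS
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s

service:
  extensions: [health_check]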

Why This Matters in Your Stack

With OTLP everywhere, apps don’t care which vendor you use. They emit OTel once; the Collector routes to Datadog today, Grafana tomorrow, or a lake for long-term retention, without re-instrumenting. That’s the core enterprise payoff: one standard, many choices.

End-to-End OTel Flow

A standard OTel data path runs from instrumented applications, through node-level Collector agents and a central cluster gateway, out to one or more backends and long-term storage.

Understanding the OpenTelemetry Flow in Practice

  • Applications emit telemetry through SDKs

Every service is instrumented with the OpenTelemetry SDK. This instrumentation can be automatic (patched into common frameworks) or manual (explicit spans, counters, or log events added to the code). Once active, each application continuously sends telemetry (traces, metrics, and logs) over the OpenTelemetry Protocol (OTLP) via gRPC or HTTP.

  • Collector Agents run alongside workloads

On each node, an OpenTelemetry Collector Agent (usually deployed as a DaemonSet in Kubernetes) receives this telemetry. The agent enriches data with resource attributes such as service.name, namespace, and node identity. Lightweight processors handle immediate needs (a config sketch follows this list):

  • memory_limiter prevents overload.

  • resource and attributes add or sanitize fields.

  • batch groups records for efficient forwarding.

The agent then forwards all telemetry securely to the cluster’s central gateway.
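
A minimal agent-mode sketch reflecting those processors (addresses and values are placeholders; in Kubernetes, contrib processors such as k8sattributes or resourcedetection typically supply the service/namespace/node attributes mentioned above):

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  memory_limiter:                # always first in the pipeline
    check_interval: 1s
    limit_mib: 400               # refuse data before the agent itself runs out of memory
    spike_limit_mib: 100
  resource:
    attributes:
      - key: service.namespace   # example enrichment; adjust keys to your environment
        value: checkout
        action: upsert
  batch: {}                      # group records for efficient forwarding

exporters:
  otlp/gateway:
    endpoint: otel-gateway.observability.svc:4317   # placeholder address of the central gateway

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, resource, batch]
      exporters: [otlp/gateway]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, resource, batch]
      exporters: [otlp/gateway]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, resource, batch]
      exporters: [otlp/gateway]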

  • The Gateway handles heavy processing

A dedicated Gateway collector aggregates telemetry from all agents. Here is where more computationally expensive logic runs:

  • Tail-based sampling retains complete error and high-latency traces while cutting noise from normal traffic (see the sketch after this list).

  • Transform and attributes processors normalize span names, drop high-cardinality labels, and redact sensitive fields.

  • Batching optimizes throughput before data leaves the cluster.
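
A hedged fragment of the gateway-side sampling described above, assuming a Collector distribution that includes the contrib tail_sampling processor (policy names and thresholds are illustrative):

processors:
  tail_sampling:
    decision_wait: 10s               # buffer spans this long before deciding per trace
    policies:
      - name: keep-errors            # always keep traces that contain errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow              # always keep high-latency traces
        type: latency
        latency:
          threshold_ms: 500
      - name: sample-the-rest        # keep a small share of normal traffic
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
  batch: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [otlp/apm]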

  • Telemetry is exported to multiple destinations

From the Gateway, telemetry is exported in parallel (a config sketch follows this list):

  • A subset of data (for example, all error traces and SLO-critical metrics) is sent to SaaS backends, such as Datadog or Grafana Cloud. This supports real-time dashboards and alerting without overwhelming vendor ingestion limits.

  • Complete, batched datasets are written into object storage (such as S3 or MinIO). This provides durable, vendor-neutral retention of every log, metric, and trace, keeping the full system history available for later analysis or compliance needs.
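
A sketch of that split, assuming your Collector build includes the contrib awss3 exporter (bucket, region, and pipeline names are placeholders, and tail_sampling refers to the gateway fragment above):

exporters:
  otlp/apm:
    endpoint: api.vendor.example:4317
  awss3/lake:                        # contrib exporter; verify it ships in your distribution
    s3uploader:
      region: us-east-1
      s3_bucket: telemetry-lake      # placeholder bucket that the observability lake ingests from
      s3_prefix: otel

service:
  pipelines:
    traces/apm:                      # sampled slice for real-time dashboards and alerting
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [otlp/apm]
    traces/lake:                     # full-fidelity copy headed for object storage
      receivers: [otlp]
      processors: [batch]
      exporters: [awss3/lake]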

The Enterprise Payoff of OpenTelemetry

Before OpenTelemetry, every observability vendor insisted on its own approach: its own agent, its own data format, and its own ingestion API. At a small scale, you could get away with that. But once you’re running hundreds of services, the cracks become obvious.

You end up with duplicate agents sitting on the same Kubernetes node: one for Datadog, another for Prometheus, and another for ELK. Each agent collects similar data in slightly different ways, consuming CPU and memory just to satisfy a vendor contract. Worse, the telemetry itself gets siloed. Traces are logged in Jaeger, logs are stored in ELK, and metrics are tracked in Prometheus, and none of them share a common identifier. That makes it almost impossible to follow a single request across the stack. And if you ever want to switch vendors? You’re forced to re-instrument your applications from scratch.

OpenTelemetry breaks that cycle by introducing a vendor-neutral standard. With OTel, you instrument each service once using SDKs or auto-instrumentation. Every service then emits telemetry in the same format and ships it through the OpenTelemetry Collector. From there, exporters handle the final hop into any backend you choose: Datadog, Grafana, Prometheus, or even plain object storage.

For our SaaS company, this marked a significant shift. Instead of ELK handling logs, Jaeger handling traces, and Prometheus handling metrics with no correlation, every signal now flowed through a single pipeline. Logs, metrics, and spans all shared the same trace IDs, which meant engineers could now follow a single request end-to-end across the system. During incidents, teams no longer argued over which dashboard was “telling the truth.” They shared a consistent and correlated view of reality.

The New Bottlenecks After Adopting OTel

Adopting OpenTelemetry solves fragmentation, but it introduces a new reality: once every service is instrumented, the amount of telemetry explodes. What initially appears to be a clean solution quickly turns into a firehose. Our SaaS company experienced this firsthand: standardization gave them a single pipeline, but the volume flowing through it quickly became hard to manage.

Telemetry Volumes That Outpace Storage Systems

With every service reporting traces, metrics, and logs in a consistent format, the data volume skyrocketed. The Kubernetes cluster was soon exporting hundreds of thousands of spans per minute. Metrics with high-cardinality labels, such as user_id and region, ballooned in size.

The Collector could ingest all of it, but storage quickly became the bottleneck. OpenTelemetry standardizes collection, not retention. Enterprises still need to decide what to keep, for how long, and where it should live. Without a strategy, most organizations push the deluge into vendors and watch costs climb.

Vendor Dashboards That Can’t Answer Forensic Questions

Data is only useful if it can answer real questions. Vendor dashboards are effective for providing top-level insights, including average latency, error rates, and CPU trends. However, when incidents occur, teams require forensic-level queries across all signals.

When our SaaS company attempted to ask, “Which customers saw 500 errors after the last deploy?”, they hit a wall. Datadog could filter metrics, but couldn’t correlate them with traces. ELK could query logs, but not span data. Connecting the dots required manual exports or brittle custom pipelines.

The limitation wasn’t in the collection; it was that telemetry was locked behind proprietary query layers.

Why SaaS Ingestion Costs Spiral Out of Control

Before OTel, the company was already paying for duplicate agents: one for Datadog, one for ELK, and another for Prometheus. After standardizing, they still faced a problem: sending everything into SaaS backends was prohibitively expensive. Each gigabyte ingested carried a fee, whether it was used or not.

The irony? Debugging one incident often meant scrolling through thousands of spans that never needed to be stored in the first place. The company had solved duplication, but not the economics of observability.

OpenTelemetry Alone Isn’t Enough

OpenTelemetry fixed how data is collected. But it didn’t fix:

  • Where to store telemetry efficiently.

  • How to query it flexibly.

  • How to control cost at enterprise scale.

In other words, OTel gives enterprises the pipeline, but not the lake to hold it.

This is where Cardinal comes in, treating telemetry not as vendor exhaust, but as structured data you can store, index, and query on your own terms.

Where Cardinal Extends OpenTelemetry’s Power

OpenTelemetry solved the first half of the observability problem: it standardized how logs, metrics, and traces are collected. But OTel deliberately stops at the pipeline. It doesn’t tell you where to store that data, how to optimize it, or how to query it later.

That gap matters at enterprise scale. Most teams take the default route: push everything into SaaS observability platforms. At first, this feels like a win: all the telemetry is displayed in dashboards without requiring you to build your own storage. However, once traffic reaches terabytes per day, the cracks begin to appear. Ingestion bills spike, queries slow down, and teams are forced to choose between cutting retention or ballooning their spend.

This is exactly where Cardinal comes in. Instead of treating telemetry as vendor exhaust, Cardinal treats it as structured data you control. With its core engine, Lakerunner, an event-driven ingestion system, Cardinal transforms a plain S3-compatible object store into a real-time observability backend. Telemetry isn’t just collected; it’s stored efficiently, compacted, indexed, and made instantly queryable.

Turning Object Storage Into an Observability Lake

Instead of routing all telemetry into vendor silos, Cardinal stores it in Parquet, the open, columnar format widely adopted in data engineering. Parquet is optimized for both compression and fast analytical queries, making it ideal for handling large volumes of telemetry. The data is written into any S3-compatible object storage (S3, GCS, MinIO), creating what Cardinal calls an observability lake, a single, scalable store for logs, metrics, and traces that preserves schema and trace context.

Key benefits:

  • Petabyte-scale retention at object-store prices.

  • Schema preservation so trace IDs, attributes, and labels remain queryable.

  • Single source of truth, one copy of data that can feed multiple tools: Grafana dashboards, Datadog alerts, or internal analytics pipelines.

This view shows the two layers Cardinal manages: raw telemetry arriving directly from applications, and optimized Parquet files continuously generated by Lakerunner. Instead of piling up thousands of small raw files, data is compacted into query-efficient datasets ready for long-term use.

How Lakerunner Streams and Optimizes OTel Data

At the center of Cardinal’s architecture is Lakerunner, an event-driven ingestion engine that turns raw telemetry into query-ready datasets.

Here’s how it works:

  • Event-driven ingestion: Watches for new objects via S3-compatible notifications.

  • Multi-format support: Ingests OTel proto, JSON, CSV, and Parquet.

  • Transformation: Converts telemetry into columnar Parquet, reducing storage overhead and accelerating queries.

  • Indexing and compaction: Bundles small files into larger, indexed Parquet segments for efficiency.

Querying Telemetry Without Vendor Lock-In

Once telemetry is compacted into Parquet and indexed, Cardinal makes it instantly queryable. Through its Grafana data source plugin, engineers can point Grafana directly at their observability lake, eliminating the need to forward 100% of traffic into vendor dashboards.

Here, devs are querying logs stored in object storage directly from Grafana. Notice that all fields, including timestamp, severity, and service name, remain intact because the schema is preserved during ingestion.

Metrics queries feel the same as in a traditional APM dashboard, but the difference is cost: the data sits in cheap object storage, not in a vendor’s proprietary system. Engineers can drill down without triggering extra ingestion fees.

Keeping Telemetry Queryable at Petabyte Scale

Cardinal’s architecture is designed to solve the three challenges we outlined earlier:

  • Volume: By continuously compacting and indexing, Lakerunner ensures petabyte-scale telemetry remains queryable without collapsing under file sprawl.

  • Query flexibility: Open formats like Parquet allow the same data to be explored in Grafana, shipped to Datadog for alerts, or consumed by BI pipelines, eliminating lock-in.

  • Cost efficiency: Store telemetry once in low-cost object storage. Vendors receive only the slices of data necessary for alerts and SLOs.

This bucket view makes it clear: instead of thousands of tiny raw files, Lakerunner produces compact, query-optimized Parquet tables. These files can be read directly by Grafana, Spark, or any system that understands open formats.

For enterprises, this makes the difference between “collect everything but use little” and “collect everything and actually use it sustainably.”

Enterprise Benefits of Combining OTel + Cardinal

OpenTelemetry fixed the fragmentation problem. Every service now emits telemetry in a consistent, vendor-neutral format, and everything flows through a single pipeline. That’s a huge win. But consistency in collection doesn’t automatically translate into usability downstream. Enterprises still face three persistent problems:

  1. Teams don’t work from the same dataset. Logs, traces, and metrics may be standardized, but if they’re stored across multiple tools, engineers still argue over dashboards instead of collaborating on debugging.

  2. Vendors still dictate workflows. Switching from one APM provider to another usually means re-instrumenting agents or rebuilding pipelines. Even with OTel, teams often stay locked into their first vendor because the storage is proprietary.

  3. Costs multiply. Traditional setups charge at ingestion, again for storage, and sometimes even for backups. Enterprises end up paying two or three times for the same telemetry, just to make it queryable in different tools.

This is the gap Cardinal fills. By treating OTel telemetry as structured data in an observability lake, Cardinal carries OTel's upstream consistency through to downstream usability: a single dataset, open formats, and efficient storage that teams can query in any way they need.

One Dataset That Aligns Every Team

OpenTelemetry ensures that applications speak the same telemetry language. Cardinal extends that consistency into storage and queries. All logs, metrics, and traces land in the same schema-preserving observability lake.

  • Shared trace IDs allow logs, spans, and metrics to connect cleanly across services.

  • Consistent queries mean engineers don’t need to switch between Elasticsearch DSL, Datadog queries, and PromQL.

  • Team alignment follows naturally: SREs, operations, and data engineers can finally work from the same dataset instead of fighting over dashboards.

For our SaaS company, this eliminated the classic “ELK vs Jaeger vs Datadog” arguments during incidents. Everyone debugged against the same authoritative store.

Vendor Neutrality Without Re-Instrumentation

In traditional setups, switching vendors is painful because agents and SDKs are tightly coupled to the backend. With OTel handling collection and Cardinal handling storage, those choices decouple.

  • Grafana dashboards can query data directly from S3 through Cardinal’s plugin.

  • APM vendors, such as Datadog or New Relic, can still receive the critical slices needed for alerting and SLAs.

  • Internal BI or ML pipelines can analyze the same Parquet datasets without needing to build custom extractors.

In practice, our SaaS company initially used Datadog for real-time alerts, but later layered Grafana for dashboards, all without modifying a single line of application code.

Breaking the Cycle of Double and Triple Charges

Legacy observability setups charge multiple times for the same data: once at ingestion, again for storage, and sometimes a third time for backups. Cardinal breaks this cycle by making storage the center of gravity.

  • Store once in cheap, durable object storage.

  • Query many ways using Grafana, Datadog, or internal pipelines.

  • Stay efficient with Lakerunner’s automatic compaction and indexing.

For the SaaS company, this translated into significant savings. Instead of sending every gigabyte to vendors, they sent only alert-critical data while keeping the full dataset in S3 for analysis and compliance.

OpenTelemetry in Large-Scale Architectures

The combination of OTel and Cardinal doesn’t just work on a small scale; it’s designed for enterprises handling petabytes of telemetry each month.

Scaling Pipelines to Handle High-Volume Telemetry

At scale, collecting telemetry isn’t enough; it must stay queryable and affordable.

  • Horizontal scaling: OTel Collectors can run as agents (sidecars, DaemonSets) and centralized gateways, scaling linearly with traffic.

  • Processors for efficiency: sampling, batching, and filtering keep cardinality under control. For example, dropping user_id labels keeps Prometheus-style backends from exploding into millions of series (see the sketch after this list).

  • Lake integration: With Lakerunner, filtered telemetry flows directly into an object store, where compaction and Parquet optimization absorb long-term growth.
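
To make the cardinality point concrete, a small fragment that strips a high-cardinality label from metrics before export (attribute and exporter names are placeholders):

processors:
  attributes/drop_user_id:
    actions:
      - key: user_id               # high-cardinality label; delete before it reaches the backend
        action: delete
  batch: {}

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [attributes/drop_user_id, batch]
      exporters: [otlp/apm]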

For our SaaS company, this meant scaling from tens of thousands to hundreds of thousands of spans per minute without hitting SaaS vendor ingestion caps.

Building Security and Compliance Into Observability

Telemetry isn’t just “ops data.” In practice, it often contains highly sensitive information. Traces may include user IDs or session tokens. Logs may capture stack traces along with request payloads, sometimes even including credit card numbers. Metrics can expose details about customer behavior or geographic distribution. At enterprise scale, this raw telemetry becomes a liability if it isn’t handled correctly.

This is where OpenTelemetry and Cardinal together give enterprises control over both security and compliance:

  • Encryption in transit. All data flowing through the OpenTelemetry Collector can run over TLS. That ensures logs, metrics, and traces aren’t exposed while being moved across clusters or to storage.

  • Data sanitization in the pipeline. Collector processors can redact or transform fields before export. For example, credit card numbers can be masked, or high-cardinality user IDs can be dropped entirely. This reduces both risk and storage bloat (see the sketch after this list).

  • Access control at storage. With Cardinal storing telemetry in an S3-compatible bucket, enterprises can enforce fine-grained IAM policies, specifying who can read traces, who can query logs, and which teams have access to raw data. This level of control is impossible when everything lives in a SaaS vendor’s black box.

  • Compliance alignment. Standards such as SOC2, HIPAA, and GDPR often require that sensitive data remain within your infrastructure, with audit trails and strict access controls. Storing telemetry in your own object store makes it easier to demonstrate compliance without needing to negotiate exceptions with a vendor.
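
A hedged fragment of what the in-transit encryption and pipeline sanitization above can look like in Collector terms (certificate paths and attribute keys are placeholders; bucket-level IAM lives in your cloud provider, not in the Collector config):

receivers:
  otlp:
    protocols:
      grpc:
        tls:                                       # terminate TLS on the Collector's OTLP port
          cert_file: /etc/otel/certs/collector.crt
          key_file: /etc/otel/certs/collector.key

processors:
  attributes/sanitize:
    actions:
      - key: credit_card_number                    # hypothetical attribute name; adjust to your schema
        action: delete
      - key: user.id
        action: hash                               # keep correlation value without exposing the raw ID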

For our SaaS company, this wasn’t a theoretical concern. Several of their largest customers required GDPR-compliant observability as a contractual requirement. By using OTel for collection and Cardinal for storage, they maintained control over telemetry, enforced sanitization rules at the pipeline level, and still provided engineers with the ability to query and debug incidents. The result: compliance boxes checked, without losing visibility.

How Enterprises Are Already Using OTel + Cardinal

The OTel + Cardinal model isn’t a theory; it’s already solving problems across industries:

  • Kubernetes at scale: Hundreds of microservices across clusters generate massive telemetry. Lakerunner keeps retention costs low while Grafana queries remain fast.

  • Financial services: Firms meet strict retention and compliance requirements by storing telemetry in S3 with audit-friendly access controls, without ballooning SaaS bills.

  • Retail and e-commerce: Checkout flow traces remain queryable during peak events, such as Black Friday. Only alert-critical subsets go into SaaS platforms, while the full dataset remains available for long-tail analysis.

For enterprises, the message is clear: OpenTelemetry makes data collection vendor-neutral, and Cardinal makes that data scalable, queryable, and compliant.

Conclusion

Enterprises don’t struggle with the lack of telemetry; they struggle with too much of it, scattered across disconnected tools. Our SaaS company's story demonstrated this clearly: Prometheus, Jaeger, ELK, and Datadog all told different stories, and costs increased with every additional agent.

OpenTelemetry changed the game by standardizing the collection of telemetry. With OTel SDKs and the Collector, every service now emits logs, metrics, and traces in a consistent, vendor-neutral format. That solved fragmentation, but not the challenges of volume, cost, or query flexibility.

This is where Cardinal fits in. By transforming object storage into a real-time observability backend, Cardinal amplifies OTel’s capabilities downstream. Lakerunner ingests raw telemetry, transforms it into optimized Parquet, compacts and indexes it, and makes it instantly queryable. The result: one authoritative dataset, scalable to petabytes, queryable in Grafana or APM vendors, and stored at object-storage prices.

For our SaaS company, this meant moving from firefighting with four disconnected tools to debugging against a single observability lake. Incidents were resolved more quickly, compliance became simpler, and spending dropped significantly.

The takeaway is simple:

  • OpenTelemetry is the standard for collecting telemetry.

  • Cardinal is how you make that standard usable at an enterprise scale.

If you’re running into the same firehose of data, now is the time to rethink your pipeline. Explore how OTel fits into your stack, and test-drive Cardinal Lakerunner on your own cluster or object store. You’ll see firsthand how observability goes from a chaotic cost center to something scalable, queryable, and under your control.

FAQs

What is the difference between OpenTelemetry and Prometheus?

Prometheus focuses on collecting and querying metrics. OpenTelemetry is broader; it standardizes logs, metrics, and traces across systems. Prometheus can integrate with OTel as a metrics backend.

Can OpenTelemetry replace my existing APM agent?

In most cases, yes. OTel SDKs and auto-instrumentation can replace proprietary agents, while still exporting data to vendors like Datadog or New Relic if needed.

How does Cardinal relate to OpenTelemetry?

Cardinal doesn’t replace OTel; it extends it. OTel collects telemetry; Cardinal stores and optimizes it in an observability lake, making it queryable and affordable at scale.

Does Cardinal replace vendors like Datadog or Grafana?

No. Cardinal complements them. You can still export subsets of data to Datadog for alerts or Grafana for dashboards. Cardinal ensures all data is retained under your control at a lower cost.

Is the OTel Collector required to use Cardinal?

Yes, the Collector is the standard method for sending telemetry into storage. Cardinal integrates with the OTel Collector as a target, making it a natural fit for OTel-native architectures.
