feat: Implement ProcessorChain for composite metric reporting #2569

drewrelmas wants to merge 15 commits into open-telemetry:main
Conversation
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2569      +/-   ##
==========================================
+ Coverage   88.42%   88.44%   +0.01%
==========================================
  Files         631      632       +1
  Lines      232112   232715     +603
==========================================
+ Hits       205243   205816     +573
- Misses      26345    26375      +30
  Partials      524      524
```
Marking as draft again; upon further review of the benchmarks, I don't think this is a strictly valid comparison at this iteration. Theoretically the chain benchmark should be equivalent to or faster than the regular path.
Putting here what we discussed offline: I was wondering if ProcessorChain in this PR can be a ChainProcessor instead :D

1. ChainProcessor (user-facing composition)

This is what the PR implements today, but reframed: it's just another processor that allows you to compose multiple processors inside it. It implements the `Processor` trait.

2. ProcessorChain (engine-level optimization)

NOTE: This section is exploratory to share an idea, rather than a specific implementation.

This is a separate concept. A processor chain at the engine level would be about optimization, not about adding functionality. The idea would be to separate the pure data processing from message handling in processors, something like:

```rust
// Full processor — owns its message loop
trait Processor {
    async fn process(&mut self, channel, effect_handler);
}

// Pure data transform — no message loop, no channels
trait PDataProcessor {
    fn process_pdata(&mut self, pdata: PData) -> Vec<PData>;
}
```

A processor that implements `PDataProcessor` would have its message loop driven by the engine. This keeps the two concerns separate:
I think there is a place for both, but they shouldn't be conflated. The current PR mixes user-facing composition with engine-level plumbing. Ideally the ChainProcessor should work without …
Two very brief comments on this PR:
Change Summary
Implements `ProcessorChain` — a composite node that runs multiple sub-processors sequentially within a single task, eliminating inter-processor channels. The chain reports a single `compute.success.duration` (or `compute.failed.duration`) metric covering the total compute cost of a record batch passing through all sub-processors, while each sub-processor still reports its own individual duration.

Motivation
When a single logical operation involves multiple internal processors for performance optimization (e.g., attributes -> condense -> recordset KQL), operators need a single duration metric representing the total cost of that logical operation per batch. Without this, the only option is statistical aggregation at the metrics layer, which produces incorrect min/max values (`min(A) + min(B) != min(A+B)`). For example, with per-batch stage durations A = [2ms, 10ms] and B = [9ms, 1ms], min(A) + min(B) = 3ms, but the true per-batch composite minimum is min([11ms, 11ms]) = 11ms.

The `ProcessorChain` approach gives per-batch composite timing with a correct MMSC distribution (min/max/sum/count).

Config syntax
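As a purely hypothetical sketch (every field name below is an assumption, not the PR's actual schema; only the node ids and URNs come from the telemetry output shown later), a chain could be declared along these lines:

```yaml
# Hypothetical sketch only: field names are illustrative.
nodes:
  insert_A:
    kind: processor
    plugin:
      urn: urn:otel:processor:attribute
  chain:
    kind: processor_chain
    processors:          # sub-processors, run sequentially in one task
      insert_B:
        plugin:
          urn: urn:otel:processor:attribute
      insert_C:
        plugin:
          urn: urn:otel:processor:attribute
  insert_D:
    kind: processor
    plugin:
      urn: urn:otel:processor:attribute
```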
Telemetry output
With `runtime_metrics: normal`, this pipeline produces `compute.success.duration` for all 5 of the following:

| node.id        | node.type       | node.urn                              |
|----------------|-----------------|---------------------------------------|
| insert_A       | processor       | urn:otel:processor:attribute          |
| chain          | processor_chain | urn:otel:processor_chain:composite    |
| chain/insert_B | processor       | urn:otel:processor:attribute          |
| chain/insert_C | processor       | urn:otel:processor:attribute          |
| insert_D       | processor       | urn:otel:processor:attribute          |

The composite duration is always >= the sum of sub-processor durations (it includes inter-stage overhead).
Querying metrics locally shows the following result:

```json
[
  { "node_id": "insert_A", "node_type": "processor", "node_urn": "urn:otel:processor:attribute", "success": { "avg_ms": 1.2843131327433628, "count": 226 } },
  { "node_id": "insert_D", "node_type": "processor", "node_urn": "urn:otel:processor:attribute", "success": { "avg_ms": 0.36642488938053097, "count": 226 } },
  { "node_id": "chain", "node_type": "processor_chain", "node_urn": "urn:otel:processor_chain:composite", "success": { "avg_ms": 0.9586159513274336, "count": 226 } },
  { "node_id": "chain/insert_B", "node_type": "processor", "node_urn": "urn:otel:processor:attribute", "success": { "avg_ms": 0.4922827168141593, "count": 226 } },
  { "node_id": "chain/insert_C", "node_type": "processor", "node_urn": "urn:otel:processor:attribute", "success": { "avg_ms": 0.37540364159292033, "count": 226 } }
]
```

Design decisions
- Buffered effect handlers: each sub-processor gets an `EffectHandler` wired to a shared `Rc<RefCell<Vec<PData>>>` via a `VecSender`. When the sub-processor calls `send_message()`, the output is pushed directly into the `Vec` — no channel send/recv, waker management, or async polling overhead. These buffer handlers are created once at construction and reused for every batch.
- Single-item fast path: a lone `PData` value is passed through each intermediate stage without any `Vec` operations. Only when a stage produces 0 or 2+ outputs does it fall back to staging vecs (`stage_a`/`stage_b`), which swap roles via `std::mem::swap` and retain heap capacity across calls.
- Scoped sub-processor telemetry: sub-processors get a prefixed `node.id` (e.g., `chain/insert_B`) and `node.urn` via `with_node_telemetry_handle` scoping during factory creation. This ensures sub-processor metrics are clearly separated from the chain's composite metrics in telemetry output.
- Metrics forwarding: the `MetricsReporter` from the `CollectTelemetry` control message is forwarded to sub-processors so their metric snapshots reach the telemetry registry (not an orphaned channel).
- `NodeKind::ProcessorChain`: reuses the existing (previously unused) config variant. Maps to `NodeType::Processor` in the engine so it participates in normal processor wiring.

Performance
~300µs simulated work per processor (1,000 batches, single-threaded LocalSet):
With realistic per-processor compute (~300µs, approximating processor work), the chain overhead is within noise — effectively zero.
~100ns simulated work per processor (1,000 batches, single-threaded LocalSet):
At trivially low work (~100ns per processor), the chain matches or beats separate tasks for `len=1` thanks to a single-processor fast path. For `len>=2`, the remaining ~5-8% gap is the cost of per-stage `Vec`/`RefCell` bookkeeping, which is significant only at these artificially low work levels.

The chain's value is not raw throughput — it's the ability to produce a correct composite duration metric (min/max/sum/count) across all sub-processors, which is impossible with separate processors.
Future work
- Add a `processor_chain` config switch to opt internal nodes out of reporting their own `duration` metrics: Add processor_chain config switches to enable/disable sub-processor metric reporting #2577

What issue does this PR close?

- `NodeKind::ProcessorChain` #2556
- `processor_chain` #2579
- `NodeUserConfig` for `ProcessorChain` processor configs #2576
Unit tests, benchmarks, and running a fake config locally.
Are there any user-facing changes?
Yes, users can now configure `processor_chain` elements in their configuration.