
[KEP-5966] etcd RangeStream#5967

Open
Jefftree wants to merge 2 commits into kubernetes:master from Jefftree:etcd-rangestream

Conversation

@Jefftree
Member

@Jefftree Jefftree commented Mar 18, 2026

This KEP proposes adding a RangeStream RPC to etcd's KV service. Instead of buffering the entire Range response in memory before sending, the server streams results back in chunks.

The main problems this addresses:

  • Large Range responses cause memory spikes on the server because the KV slice, serialized protobuf, and gRPC send buffer all have to coexist in memory
  • Client-side pagination is wasteful — every page recomputes the total count by walking the entire B-tree index

The new RPC reuses the existing RangeRequest and wraps responses in a RangeStreamResponse. Clients reassemble the stream with proto.Merge() and get identical results to a regular Range() call. The server handles chunking internally with adaptive sizing, pins to a single MVCC revision for consistency, and only computes the total count once on the first page.
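The reassembly contract can be sketched with a toy Go example. The `KeyValue` and `RangeResponse` structs and the `merge` helper below are illustrative stand-ins that mimic `proto.Merge` semantics (repeated fields appended, set scalar fields overwritten); they are not etcd's real generated types:

```go
package main

import "fmt"

// Minimal stand-ins for the etcd types; field shapes are assumptions
// for illustration, not the generated protobufs from etcd's API package.
type KeyValue struct{ Key, Value string }

type RangeResponse struct {
	Revision int64
	Count    int64
	Kvs      []KeyValue
	More     bool
}

// merge mimics proto.Merge semantics for this sketch: repeated fields
// are appended, set scalar fields overwrite the destination.
func merge(dst, src *RangeResponse) {
	if src.Revision != 0 {
		dst.Revision = src.Revision
	}
	if src.Count != 0 {
		dst.Count = src.Count
	}
	dst.Kvs = append(dst.Kvs, src.Kvs...)
	dst.More = src.More
}

func main() {
	// The first chunk carries Revision; the KEP discussion settled on
	// the final chunk carrying Count.
	chunks := []RangeResponse{
		{Revision: 42, Kvs: []KeyValue{{"a", "1"}, {"b", "2"}}},
		{Kvs: []KeyValue{{"c", "3"}}},
		{Kvs: []KeyValue{{"d", "4"}}, Count: 4},
	}
	var out RangeResponse
	for i := range chunks {
		merge(&out, &chunks[i])
	}
	fmt.Println(out.Revision, out.Count, len(out.Kvs)) // 42 4 4
}
```

Merging the chunks in order yields the same fields a unary Range would have returned.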

Requests with non-default sort orders fall back to the regular buffered path since sorting defeats the purpose of streaming. Older clients are completely unaffected, and downgrading to a version without RangeStream just returns Unimplemented.

NONE

@k8s-ci-robot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 18, 2026
@k8s-ci-robot k8s-ci-robot requested a review from ahrtr March 18, 2026 15:11
@k8s-ci-robot k8s-ci-robot added the kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory label Mar 18, 2026
@k8s-ci-robot k8s-ci-robot added sig/etcd Categorizes an issue or PR as relevant to SIG Etcd. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Mar 18, 2026
@Jefftree Jefftree marked this pull request as ready for review March 30, 2026 21:29
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 30, 2026
@k8s-ci-robot k8s-ci-robot requested a review from serathius March 30, 2026 21:29
### Non-Goals

- Supporting custom sort orders in streaming mode. Requests with non-default
sort order fall back to a single buffered Range call.
Contributor

What do you mean by fall back? Could we just say that users who need a non-default sort order can continue to use Range?

Member Author

Updated. I was originally thinking of having the server auto-switch to the unary Range, since the contract between the two is the same, but returning an error and having the client use the unary Range seems to make more sense.

Add a server-streaming `RangeStream` RPC to the etcd KV service that accepts
the existing `RangeRequest` and returns a stream of `RangeStreamResponse`
messages. The server handles pagination internally, pins to a single MVCC
revision for snapshot consistency, and uses adaptive chunk sizing to
Contributor

@serathius serathius Mar 31, 2026

What happens if the stream took so long that revision is no longer available? Please describe the error returned by server and how client should detect and handle it.

Member

One more question regarding long-running streaming: currently, when BoltDB remaps, it requires mmaplock, which could potentially cause stalls in the background commit goroutine. This needs to be handled carefully. While we don’t need to dive into too many details here, it's important that we test this aspect. One possible approach could be modifying compaction to preserve key-value pairs, in order to avoid locking the read transaction for too long.

Member Author

Added. Return ErrCompacted and clients should retry.
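A minimal sketch of this retry behavior, using a stand-in error value rather than etcd's real rpctypes.ErrCompacted and a fake stream function:

```go
package main

import (
	"errors"
	"fmt"
)

// errCompacted stands in for etcd's rpctypes.ErrCompacted; the retry
// loop below sketches how a client could restart the stream when the
// pinned revision has been compacted away mid-stream.
var errCompacted = errors.New("mvcc: required revision has been compacted")

// retryOnCompacted re-invokes the stream until it no longer fails with
// errCompacted, returning the result and the number of attempts. A real
// client would bound the retries and add backoff.
func retryOnCompacted(stream func() ([]string, error)) ([]string, int, error) {
	attempts := 0
	for {
		attempts++
		kvs, err := stream()
		if errors.Is(err, errCompacted) {
			continue // restart at a fresh revision
		}
		return kvs, attempts, err
	}
}

func main() {
	calls := 0
	// Fails once with errCompacted, then succeeds, simulating a
	// compaction racing a long-running stream.
	fake := func() ([]string, error) {
		calls++
		if calls == 1 {
			return nil, errCompacted
		}
		return []string{"a", "b"}, nil
	}
	kvs, attempts, err := retryOnCompacted(fake)
	fmt.Println(kvs, attempts, err) // [a b] 2 <nil>
}
```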

edit: Just saw fuweid's comment that still needs to be addressed.

Member

What happens if the stream took so long that revision is no longer available?

This shouldn't happen, because each TXN guarantees repeatable read.

long-running streaming: currently, when BoltDB remaps, it requires mmaplock, which could potentially cause stalls in the background commit goroutine.

Right, this is a drawback. It's recommended to avoid long-running read transactions; see https://github.com/etcd-io/bbolt?tab=readme-ov-file#caveats--limitations.

So let's clearly call out the trade-off of using rangeStream here.

  • Pros: reduce the memory usage so that it can avoid OOM
  • Cons: long-running read transaction may block write transaction.
    • Note: normally a read TXN doesn't block a write TXN. A write TXN will only be blocked by a read TXN when it needs to allocate more space/pages.

Contributor

This gets into how we expect the kube-apiserver to use RangeStream.

Do we expect kube-apiserver move entirely away from batching and fetch a list in a single request? Or are we still expecting the kube-apiserver to batch but possibly request fewer, larger batched chunks?

If we are making a single range request to etcd, we need to understand the impact of the long-running read txn and at what range size it becomes a problem.

Contributor

Chatted with @Jefftree out-of-band and I understand this better now. The proposal is to make the etcd server responsible for chunking the client's requests into multiple txns on the server side to avoid the long running transaction problem.

Member Author

Per discussion with @jpbetz, added a note to the notes/caveats section that this RangeStream implementation does not hold a single long-running read transaction; instead it pins the MVCC revision after the first chunk and reuses that revision for subsequent chunks under separate txns.
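The pinning scheme might look like the following toy sketch: a snapshot store keyed by revision, with each chunk read in its own short-lived "transaction" at the pinned revision. All names here are illustrative, not etcd's mvcc API:

```go
package main

import "fmt"

// Toy snapshot store: revision -> sorted keys visible at that revision.
// Illustrative only; etcd's mvcc.KV interface is far richer.
type store struct{ snapshots map[int64][]string }

// rangeAt reads one chunk at a pinned revision, like a short-lived txn.
// It returns the chunk and the key to resume from, or "" when done.
func (s *store) rangeAt(rev int64, start string, limit int) (kvs []string, next string) {
	for _, k := range s.snapshots[rev] {
		if k < start {
			continue
		}
		if len(kvs) == limit {
			return kvs, k // more remains; resume here next chunk
		}
		kvs = append(kvs, k)
	}
	return kvs, ""
}

func main() {
	s := &store{snapshots: map[int64][]string{
		7: {"a", "b", "c", "d", "e"},
	}}
	rev := int64(7) // pinned after the first chunk is read
	start, limit := "", 2
	for {
		kvs, next := s.rangeAt(rev, start, limit) // fresh short txn per chunk
		fmt.Println(kvs)
		if next == "" {
			break
		}
		start = next
	}
}
```

Because every chunk reads at the same pinned revision, the concatenated result is a consistent snapshot even though no bbolt read transaction stays open across chunks.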

Member

client's requests into multiple txns

Sounds good.

@serathius
Contributor

cc @liggitt @jpbetz @deads2k

treated as internal design. The defined contract is that the merged
`RangeResponse` produces identical results as `proto.Merge`.

### CountTotal Optimization
Contributor

That sounds like a separate feature. Do we even need a limit in K8s? For watch cache lists we don't need pagination, and for client requests we can just have the client close the connection once they hit the limit.

Member Author

This is about the internal pagination for stream chunks, which is different from the client limit.

re client limit: The streaming API is designed to wrap RangeRequest, which has the client limit field. I don't think we technically need to set the limit for the watch cache, but removing it would be an API change that alters the request structure compared to unary.


### Adaptive Chunk Sizing

The chunk limit starts at 10 keys and adjusts based on response size relative
Contributor

Instead of using adaptive chunking, could we have MVCC to return up to X bytes? Instead of having caller driven pagination on MVCC, have MVCC decide page size based on expected result size.

Member Author

I think this is doable. Updated.

| Final chunk | Kvs, Count, More |

Count is deferred to the final message. Revision is only in the first data
chunk. Clients reassemble by merging all messages.
@rf232 rf232 Mar 31, 2026

From goals

Eliminate redundant count computation across paginated requests by computing
the total count once on the first chunk.

Why do we want the count done on the first chunk?

Member

+1 to return count first

Member Author

We can do it on any chunk; realistically we just want to count once so we can merge the count result into RangeResponse. I've aligned the inconsistent descriptions so that we now both compute and return the count on the first chunk.

Member Author

After discussion with @serathius, returning the count on the first chunk would require a full traversal before sending that chunk. If we return it on the last chunk, we can keep a running count and avoid the duplicate traversal. Going to keep this on the last chunk (other chunks omit the count) unless there is strong objection.

Member

@fuweid fuweid left a comment


```protobuf
service KV {
  rpc RangeStream(RangeRequest) returns (stream RangeStreamResponse) {}
}
```
Member

Do we support all the fields in RangeRequest, like limit?

Member Author

Yes. One caveat is that we reject requests with a non-default sort order because they defeat the purpose of streaming.
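The rejection described here could look like the sketch below; the enum values and error text are assumptions for illustration, not etcd's actual constants:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-ins for etcd's sort enums; values are illustrative.
type SortOrder int

const (
	SortNone SortOrder = iota
	SortAscend
	SortDescend
)

type RangeRequest struct{ SortOrder SortOrder }

var errInvalidArgument = errors.New("InvalidArgument: sort order not supported by RangeStream")

// validateStreamRequest rejects non-default sort orders, mirroring the
// behavior in the discussion: clients should fall back to the unary
// Range RPC when they need sorted results.
func validateStreamRequest(r *RangeRequest) error {
	if r.SortOrder != SortNone {
		return errInvalidArgument
	}
	return nil
}

func main() {
	fmt.Println(validateStreamRequest(&RangeRequest{SortOrder: SortNone}))
	fmt.Println(validateStreamRequest(&RangeRequest{SortOrder: SortDescend}) != nil)
}
```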


results in chunks instead of buffering the entire response.
- Eliminate redundant count computation across paginated requests by computing
the total count once on the first chunk.
- Provide a streaming API that produces results identical to the unary Range
Member

Maybe I missed something—do we have a section that describes how to integrate with the kube-apiserver? Should the kube-apiserver be aware of the RangeStream API? If not, we can keep the details hidden - https://github.com/etcd-io/etcd/blob/6a4e69bb85c485115540ff0384dde195c0bbdb1b/client/v3/kubernetes/interface.go#L42

Member

do we have a section that describes how to integrate with the kube-apiserver?

+1

Member

This will also help us evaluate whether the API is well-designed.

Contributor

As a K8s apiserver storage approver I have reviewed the API and it makes sense to me. I also invited other API Machinery members to review it.

Contributor

I'm supportive of this.

The batched range requests were a band-aid added quickly many years ago to avoid sending large range requests with long-running read txns to etcd. Adding streaming and server-side chunking will be a huge improvement.

Member Author

The watch cache goes through clientv3.KV directly, so the detail is hidden behind the Kubernetes interface. Given that we have consistent list from cache, almost nothing should be hitting the etcd interface List.

Contributor

@serathius serathius Apr 8, 2026

Given that we have consistent list from cache, almost nothing should be hitting the etcd interface List

Please note all watch cache features have a fallback mechanism to etcd. Consistent reads from cache fall back if the cache is older than 3 seconds; pagination falls back when snapshots are not available after a restart. We see <1% of requests falling back, but their high cost still has a huge impact on cluster stability.

See SIG API machinery meeting notes for Mar 4th.

[serathius] Consistent read from cache fallback, works great until defrag. 
* Defrag stalls member, watch gets delayed more than 3s, watch cache falls back reads to etcd. 
* Due to lack of APF protection (not aware of fallback), memory jumps (2GB pods)
    * Etcd 17GB->180GB
    * API server 50GB->160GB
* Possible fixes:
    * Fallback down to APF and recalculate request cost. Should increase APF cost from 10 to 100, 10x less requests passing to etcd. Still memory jumping 2x. (1.36)
    * Etcd range streaming (proposal [RangeStream Design](https://docs.google.com/document/d/1nSO2CvjvFjPkI5tRJxSQjpDhe87a8FQiHJv0Ri24t1E/edit?usp=sharing), etcd 3.7, maybe K8s 1.37-1.38?). 
      * Stabilizes etcd memory, but still API server manages whole LIST object.
    * Graceful degrade and return 429 status code instead of making things worse (https://tinyurl.com/k8s-graceful-shutdown )

Member Author

To support watch cache fallback to regular List, we'll need to add a new interface ListStream or something similar for the streaming API. Updated this section. I don't think we can fully abstract out the interface, because clients need to know whether RangeStream is supported to determine whether to paginate client-side. We can abstract the details and say something like: the interface handles chunking internally and the returned response is identical to a RangeResponse.

@ahrtr
Member

ahrtr commented Mar 31, 2026

It would be better to add a section to design/clarify the Kubernetes/api-server side change.

Comment on lines +30 to +31
work when clients paginate (repeated Range calls with increasing keys
recompute the total count on every page by walking the full B-tree index).
Contributor

This question is not specific to this KEP, but would it be useful for etcd to provide an option to avoid sending the total count? I suspect clients most often only care whether a request that asks for a range with a limit reached the end of the range or not.

Member Author

This is an interesting angle. We wouldn't need the CountTotal optimization section, and the API would be simpler. If it's just an option though, I'd imagine we still want to keep both options optimized and perform the CountTotal optimization anyway.

RangeStream is intended to replace Range for large Range requests anyway (and solves the count problem); would any consumers care about the option to skip the count?

Contributor

@jpbetz jpbetz Apr 1, 2026

The ability to opt out for the existing Range op feels like an easy-ish win. Not sure if it will matter for k8s given the plan to switch to RangeStream.

For RangeStream, since the plan is to implement the limit option, I can imagine clients wanting to know the total. But for k8s, I think we'd opt out of the total if we could?


Member Author

@Jefftree Jefftree left a comment

I think all comments have been addressed.

  • Add bbolt per-chunk transaction note
  • Kubernetes API Server Integration section (watch cache path, client-side simplification)



| Message | Contents |
|----------------------|-------------------------------------------------------|
| Header | ClusterId, MemberId, RaftTerm (sent immediately from v3rpc layer) |
| First chunk | Revision, Count, Kvs |
Contributor

Do we need Count in the first chunk? It would require another tree scan, while not giving any benefits.

Member Author

Moved count to last chunk so we don't need an extra tree scan.

returns `InvalidArgument` for these requests and clients should use
the unary `Range` RPC instead.

### Chunk Sizing
Member

Is it a new field in RangeStream or a new option for the etcd process?

cc @linxiulei :) - etcd-io/etcd#16300

Member Author

Current thinking is that it's derived from etcd's existing max-request-bytes server config option, so no new fields. This refers to an implementation detail between the server and the underlying MVCC.
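A byte-budget chunker in this spirit might look like the sketch below; the budget value and helper names are illustrative, with the real budget presumably derived from max-request-bytes:

```go
package main

import "fmt"

type kv struct{ key, val string }

// chunkByBytes groups kvs so each chunk stays under maxBytes, always
// emitting at least one kv per chunk so oversized entries still flow.
// The budget would come from the server's max-request-bytes config in
// the proposal; this is only an illustration of the idea.
func chunkByBytes(kvs []kv, maxBytes int) [][]kv {
	var chunks [][]kv
	var cur []kv
	size := 0
	for _, e := range kvs {
		n := len(e.key) + len(e.val)
		if len(cur) > 0 && size+n > maxBytes {
			chunks = append(chunks, cur)
			cur, size = nil, 0
		}
		cur = append(cur, e)
		size += n
	}
	if len(cur) > 0 {
		chunks = append(chunks, cur)
	}
	return chunks
}

func main() {
	kvs := []kv{{"a", "xxxx"}, {"b", "xxxx"}, {"c", "xxxx"}}
	// Each entry is 5 bytes, so a 10-byte budget yields two chunks.
	fmt.Println(len(chunkByBytes(kvs, 10)))
}
```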

Member

Thanks

@k8s-ci-robot k8s-ci-robot added sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. labels Apr 8, 2026
@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Apr 8, 2026
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG Apps Apr 8, 2026
@k8s-ci-robot k8s-ci-robot added the sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. label Apr 8, 2026
@k8s-ci-robot k8s-ci-robot added sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. sig/multicluster Categorizes an issue or PR as relevant to SIG Multicluster. labels Apr 8, 2026
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG CLI Apr 8, 2026
@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/storage Categorizes an issue or PR as relevant to SIG Storage. sig/windows Categorizes an issue or PR as relevant to SIG Windows. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 8, 2026
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG Scheduling Apr 8, 2026
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 8, 2026
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Jefftree
Once this PR has been reviewed and has the lgtm label, please assign ahrtr for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Apr 8, 2026
@fuweid
Member

fuweid commented Apr 8, 2026

LGTM overall.

The etcd release cadence is slow, so my concern is when we can get a new v3.7 release with this.


Comment on lines +216 to +218
```go
ListStream(ctx context.Context, prefix string, opts ListStreamOptions, cb func(ListStreamResponse) error) error
```
Member

Can we reuse the existing List interface method? If users want sorted responses, then use the range, otherwise use RangeStream. We encapsulate the details inside etcd's client SDK.

Member Author

I wanted to but I think it'll be difficult. Currently pagination is done on the client side so they call List with a limit and manage continuation tokens themselves. The logic lives outside of List.

RangeStream will allow callers to issue List without a limit and have chunking handled internally. However, when falling back to an etcd server that doesn't support RangeStream we need to fall back to the existing client-side pagination pattern that is controlled by the caller rather than encapsulated within the List. This makes it hard to transparently switch between the two behind a single List interface.

Member

@ahrtr ahrtr Apr 8, 2026

Eventually I think we will need to consolidate them into one List. Can we add an option into ListOptions for now to differentiate the List and streamList instead of adding a new interface method ListStream?

Currently pagination is done on the client side so they call List with a limit and manage continuation tokens themselves. The logic lives outside of List.

Not sure whether it's feasible to move that code into etcd client SDK eventually. But it can be discussed separately.

Member Author

Yeah that seems reasonable. Updated to include an additional ListOptions parameter Stream.

cc @serathius @fuweid

Not sure whether it's feasible to move that code into etcd client SDK eventually. But it can be discussed separately.

Agree it may be feasible but we can keep it as a separate discussion.

@ahrtr
Member

ahrtr commented Apr 8, 2026

Overall looks good with a comment #5967 (comment)


Labels

area/enhancements Issues or PRs related to the Enhancements subproject cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/etcd Categorizes an issue or PR as relevant to SIG Etcd. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. sig/multicluster Categorizes an issue or PR as relevant to SIG Multicluster. sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/storage Categorizes an issue or PR as relevant to SIG Storage. sig/windows Categorizes an issue or PR as relevant to SIG Windows. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.

Projects

Status: Needs Triage

8 participants