| Filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | 99aa74225dd999d112ebc3e7b7d586a2312ec9c99de7a7fef8bbbfb198a5b4cf740baa57ea262995303e2a5060d26397775d928a086acd926042a41ef00f200b |
| kubernetes-src.tar.gz | 0be7d1d6564385cc20ff4d26bab55b71cc8657cf795429d04caa5db133a6725108d6a116553bf55081ccd854a4078e84d26366022634cdbfffd1a34a10b566cf |
| Filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | a5fb80d26c2a75741ad0efccdacd5d5869fbc303ae4bb1920a6883ebd93a6b41969f898d177f2602faf23a7462867e1235edeb0ba0675041d0c8d5ab266ec62d |
| kubernetes-client-darwin-amd64.tar.gz | 47a9a78fada4b840d9ae4dac2b469a36d0812ac83d22fd798c4cb0f1673fb65c6558383c19a7268ed7101ac9fa32d53d79498407bdf94923f4f8f019ea39e912 |
| kubernetes-client-linux-386.tar.gz | 916e4dd98f5ed8ee111eeb6c2cf5c5f313e1d98f3531b40a5a777240ddb96b9cc53df101daa077ffff52cf01167fdcc39d38a8655631bac846641308634e127a |
| kubernetes-client-linux-amd64.tar.gz | fccf152588edbaaa21ca94c67408b8754f8bc55e49470380e10cf987be27495a8411d019d807df2b2c1c7620f8535e8f237848c3c1ac3791b91da8df59dea5aa |
| kubernetes-client-linux-arm.tar.gz | 066c55fabbe3434604c46574c51c324336a02a5bfaed2e4d83b67012d26bf98354928c9c12758b53ece16b8567e2b5ce6cb88d5cf3008c7baf3c5df02611a610 |
| kubernetes-client-linux-arm64.tar.gz | e41be74cc36240a64ecc962a066988b5ef7c3f3112977efd4e307b35dd78688f41d6c5b376a6d1152d843182bbbe75d179de75675548bb846f8c1e28827e0e0c |
| kubernetes-client-linux-ppc64le.tar.gz | 08783eb3bb2e35b48dab3481e17d6e345d43bab8b8dee25bb5ff184ba46cb632750d4c38e9982366050aecce6e121c67bb6812dbfd607216acd3a2d19e05f5a1 |
| kubernetes-client-linux-s390x.tar.gz | bcb6eb9cd3d8c92dfaf4f102ff2dc7517f632b1e955be6a02e7f223b15fc09c4ca2d6d9cd5b23871168cf6b455e2368daf17025c9cd61bf43d2ea72676db913a |
| kubernetes-client-windows-386.tar.gz | efbc764d8e2889ce13c9eaaa61f685a8714563ddc20464523140d6f5bef0dfd51b745c3bd3fe2093258db242bf9b3207f8e9f451b0484de64f18cdb7162ec30e |
| kubernetes-client-windows-amd64.tar.gz | b34bce694c6a0e4c8c5ddabcecb6adcb4d35f8c126b4b5ced7e44ef39cd45982dd9f6483a38e04430846f4da592dc74b475c37da7fe08444ef4eb5efde85e0b2 |
| Filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | a6bdac1eba1b87dc98b2bf5bf3690758960ecb50ed067736459b757fca0c3b01dd01fd215b4c06a653964048c6a81ea80b61ee8c7e4c98241409c091faf0cee1 |
| kubernetes-server-linux-arm.tar.gz | 0560e1e893fe175d74465065d43081ee7f40ba7e7d7cafa53e5d7491f89c61957cf0d3abfa4620cd0f33b6e44911b43184199761005d20b72e3cd2ddc1224f9f |
| kubernetes-server-linux-arm64.tar.gz | 4d5dd001fa3ac2b28bfee64e85dbedab0706302ffd634c34330617674e7a90e0108710f4248a2145676bd72f0bbc3598ed61e1e739c64147ea00d3b6a4ba4604 |
| kubernetes-server-linux-ppc64le.tar.gz | cc642fca57e22bf6edd371e61e254b369b760c67fa00cac50e34464470f7eea624953deff800fa1e4f7791fe06791c48dbba3ed47e789297ead889c2aa7b2bbf |
| kubernetes-server-linux-s390x.tar.gz | 1f480ba6f593a3aa20203e82e9e34ac206e35839fd9135f495c5d154480c57d1118673dcb5a6b112c18025fb4a847f65dc7aac470f01d2f06ad3da6aa63d98a3 |
| Filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | e987f141bc0a248e99a371ce220403b78678c739a39dad1c1612e63a0bee4525fbca5ee8c2b5e5332a553cc5f63bce9ec95645589298f41fe83e1fd41faa538e |
| kubernetes-node-linux-arm.tar.gz | 8b084c1063beda2dd4000e8004634d82e580f05cc300c2ee13ad84bb884987b2c7fd1f033fb2ed46941dfc311249acef06efe5044fb72dc4b6089c66388e1f61 |
| kubernetes-node-linux-arm64.tar.gz | 365bdf9759e24d22cf507a0a5a507895ed44723496985e6d8f0bd10b03ffe7c78198732ee39873912147f2dd840d2e284118fc6fc1e3876d8f4c2c3a441def0b |
| kubernetes-node-linux-ppc64le.tar.gz | ff54d83dd0fd3c447cdd76cdffd253598f6800045d2b6b91b513849d15b0b602590002e7fe2a55dc25ed5a05787f4973c480126491d24be7c5fce6ce98d0b6b6 |
| kubernetes-node-linux-s390x.tar.gz | 527cd9bf4bf392c3f097f232264c0f0e096ac410b5211b0f308c9d964f86900f5875012353b0b787efc9104f51ad90880f118efb1da54eba5c7675c1840eae5f |
| kubernetes-node-windows-amd64.tar.gz | 4f76a94c70481dd1d57941f156f395df008835b5d1cc17708945e8f560234dbd426f3cff7586f10fd4c24e14e3dfdce28e90c8ec213c23d6ed726aec94e9b0ff |
relnotes.k8s.io now hosts the complete changelog for the release notes in a customizable format. Please check it out and give us your feedback!
We are pleased to announce the delivery of Kubernetes 1.16, our third release of 2019! Kubernetes 1.16 consists of 31 enhancements: 8 moving to stable, 8 to beta, and 15 to alpha.
The four major themes of the Kubernetes 1.16 release are:
- … are not exposed in the new `livez` and `readyz` endpoints. This will be corrected in v1.16.1.
- Systems running `iptables` 1.8.0 or newer should start it in compatibility mode. Note that this affects all versions of Kubernetes, not only v1.16.0. See the official documentation for more details on this issue and how to apply a workaround.
- Container image tar files for `amd64` now state the architecture in the `RepoTags` section of `manifest.json`. If you are using Docker manifests, this change does not affect you. (#80266, @javier-b-perez)
- The node labels `beta.kubernetes.io/metadata-proxy-ready`, `beta.kubernetes.io/masq-agent-ds-ready` and `beta.kubernetes.io/kube-proxy-ds-ready` are no longer added on new nodes.
- The ip-masq-agent addon uses `node.kubernetes.io/masq-agent-ds-ready` instead of `beta.kubernetes.io/masq-agent-ds-ready` as its node selector.
- The kube-proxy addon uses `node.kubernetes.io/kube-proxy-ds-ready` instead of `beta.kubernetes.io/kube-proxy-ds-ready` as its node selector.
- The metadata-proxy addon uses `cloud.google.com/metadata-proxy-ready` instead of `beta.kubernetes.io/metadata-proxy-ready` as its node selector.
- The following APIs are no longer served by default:
  - All resources under `apps/v1beta1` and `apps/v1beta2` - use `apps/v1` instead
  - `daemonsets`, `deployments`, `replicasets` resources under `extensions/v1beta1` - use `apps/v1` instead
  - `networkpolicies` resources under `extensions/v1beta1` - use `networking.k8s.io/v1` instead
  - `podsecuritypolicies` resources under `extensions/v1beta1` - use `policy/v1beta1` instead
Serving these resources can be temporarily re-enabled using the `--runtime-config` apiserver flag.
- `apps/v1beta1=true`
- `apps/v1beta2=true`
- `extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/replicasets=true,extensions/v1beta1/networkpolicies=true,extensions/v1beta1/podsecuritypolicies=true`
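For example, to keep serving the deprecated `apps` beta groups while workloads are migrated, the values above can be combined into one comma-separated flag value (a sketch only; how the flag reaches kube-apiserver depends on how your control plane is deployed):

```shell
kube-apiserver --runtime-config=apps/v1beta1=true,apps/v1beta2=true,extensions/v1beta1/deployments=true
```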
The ability to serve these resources will be completely removed in v1.18. ([#70672](https://github.com/kubernetes/kubernetes/pull/70672), [@liggitt](https://github.com/liggitt))
- Ingress resources will no longer be served from `extensions/v1beta1` in v1.20. Migrate to the `networking.k8s.io/v1beta1` API, available since v1.14. Existing persisted data can be retrieved via the `networking.k8s.io/v1beta1` API.
- PriorityClass resources will no longer be served from `scheduling.k8s.io/v1beta1` and `scheduling.k8s.io/v1alpha1` in v1.17. Migrate to the `scheduling.k8s.io/v1` API, available since v1.14. Existing persisted data can be retrieved via the `scheduling.k8s.io/v1` API.
- The `export` query parameter for list API calls, deprecated since v1.14, will be removed in v1.18.
- The `series.state` field in the `events.k8s.io/v1beta1` Event API is deprecated and will be removed in v1.18. (#75987, @yastij)
- The `apiextensions.k8s.io/v1beta1` version of CustomResourceDefinition is deprecated and will no longer be served in v1.19. Use `apiextensions.k8s.io/v1` instead. (#79604, @liggitt)
- The `admissionregistration.k8s.io/v1beta1` versions of MutatingWebhookConfiguration and ValidatingWebhookConfiguration are deprecated and will no longer be served in v1.19. Use `admissionregistration.k8s.io/v1` instead. (#79549, @liggitt)
- The alpha `metadata.initializers` field, deprecated in v1.13, has been removed. (#79504, @yue9944882)
- The deprecated node condition type `OutOfDisk` has been removed; use `DiskPressure` instead. (#72420, @Pingan2017)
- The `metadata.selfLink` field is deprecated in individual and list objects. It will no longer be returned starting in v1.20, and the field will be removed entirely in v1.21. (#80978, @wojtek-t)
- The deprecated cloud providers `ovirt`, `cloudstack` and `photon` have been removed. (#72178, @dims)
- The Cinder and ScaleIO volume drivers are deprecated and will be removed in a future release. (#80099, @dims)
- The GA `PodPriority` feature gate is now on by default and cannot be disabled. The feature gate will be removed in v1.18. (#79262, @draveness)
- Aggregated discovery requests can now time out. Aggregated API servers must complete discovery calls within 5 seconds (other requests may take longer). If needed, `EnableAggregatedDiscoveryTimeout=false` can be used to temporarily revert to the previous 30-second timeout behavior (the temporary `EnableAggregatedDiscoveryTimeout` feature gate will be removed in v1.17). (#82146, @deads2k)
- The `scheduler.alpha.kubernetes.io/critical-pod` annotation will be removed. Pod priority (`spec.priorityClassName`) should be used instead to mark pods as critical. (#80342, @draveness)
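As a sketch of that replacement, a pod that previously carried the `critical-pod` annotation would instead reference a PriorityClass; all names and values below are illustrative, not taken from this release:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-workload        # illustrative name
value: 1000000
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app            # illustrative name
spec:
  priorityClassName: critical-workload   # replaces the scheduler.alpha.kubernetes.io/critical-pod annotation
  containers:
    - name: app
      image: example.com/app:1.0         # illustrative image
```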
- The NormalizeScore plugin set has been removed from the scheduler framework configuration API. Use only the ScorePlugin. (#80930, @liu-cong)
Features:
- The feature gates `GCERegionalPersistentDisk` (since 1.15.0), `CustomResourcePublishOpenAPI`, `CustomResourceSubresources`, `CustomResourceValidation`, `CustomResourceWebhookConversion`, `HugePages`, `VolumeScheduling`, `CustomPodDNS` and `PodReadinessGates` have been removed. (#79307, @draveness)
- The hyperkube `--make-symlinks` flag, deprecated in v1.14, has been removed. (#80017, @Pothulapati)
- The `--basic-auth-file` flag and basic authentication mode are deprecated and will be removed in a future release. They are not recommended for production use. (#81152, @tedyu)
- The `--cloud-provider-gce-lb-src-cidrs` flag is deprecated and will be removed once the GCE cloud provider is removed from kube-apiserver. (#81094, @andrewsykim)
- The `--enable-logs-handler` flag and log-serving functionality are deprecated since v1.15 and scheduled for removal in v1.19. (#77611, @rohitsardesai83)
- The default service IP CIDR is deprecated. The previous default, `10.0.0.0/24`, will be removed in 6 months or 2 releases. Cluster admins must specify their own desired value via `--service-cluster-ip-range` on kube-apiserver. (#81668, @darshanime)
- The `--resource-container` flag has been removed from kube-proxy, and specifying it now results in an error. The behavior is now as if you had specified `--resource-container=""`. If you previously specified a non-empty `--resource-container`, you can no longer do so as of Kubernetes 1.16. (#78294, @vllry)
- The scheduler now uses the v1beta1 Event API. Any tooling that targets scheduler events needs to use the v1beta1 Event API. (#78447, @yastij)
- CoreDNS Deployments now check readiness via the `ready` plugin.
- The `proxy` plugin is deprecated; the `forward` plugin can be used instead.
- The `resyncperiod` option has been removed from the `kubernetes` plugin.
- The `upstream` option is deprecated and is ignored if included. (#82127, @rajansandeep)
- `kubectl convert` is deprecated since v1.14 and will be removed in v1.17.
- The `--export` flag for the `kubectl get` command is deprecated since v1.14 and will be removed in v1.18.
- `kubectl cp` no longer supports copying symbolic links from containers; to support this use case, see `kubectl exec --help` for examples using `tar` directly. (#82143, @soltysh)
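For the symbolic-link case dropped from `kubectl cp`, a tar pipe through `kubectl exec` is the suggested replacement. A minimal sketch (the pod name and paths are illustrative, and this requires a running cluster):

```shell
# Stream a tar archive out of the container and unpack it locally;
# tar preserves symbolic links, which kubectl cp no longer copies.
kubectl exec mypod -- tar cf - /tmp/src | tar xf - -C ./dest
```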
- Removed the deprecated `--include-uninitialized` flag. (#80337, @draveness)
- The `--containerized` flag, deprecated in v1.14, has been removed. (#80043, @dims)
- The `beta.kubernetes.io/os` and `beta.kubernetes.io/arch` labels are deprecated since v1.14 and will be removed in v1.18.
- The cAdvisor JSON endpoints are deprecated since v1.15. (#78504, @dashpole)
- Removed the ability to set `kubernetes.io`- or `k8s.io`-prefixed labels via `--node-labels`, other than the explicitly allowed labels/prefixes. (#79305, @paivagustavo)
- Removed `DirectCodecFactory` (replaced with `serializer.WithoutConversionCodecFactory`), `DirectEncoder` (replaced with `runtime.WithVersionEncoder`) and `DirectDecoder` (replaced with `runtime.WithoutVersionDecoder`). (#79263, @draveness)
- Added the metrics `aggregator_openapi_v2_regeneration_count`, `aggregator_openapi_v2_regeneration_gauge` and `apiextension_openapi_v2_regeneration_count`, counting the reasons (add, update, delete) why APIService and CRD changes trigger kube-apiserver to regenerate the OpenAPI spec. (#81786, @sttts)
- Added the metric `authentication_attempts`, which can be used to understand authentication attempts. (#81509, @RainbowMango)
- Added the metric `apiserver_admission_webhook_rejection_count`, with details about the cause of webhook rejections. (#81399, @roycaihw)
- Added the `container_sockets`, `container_threads`, and `container_threads_max` metrics. (#81972, @dashpole)
- Added a `container_state` label to the `running_container_count` kubelet metric, to count containers by their state (running/exited/created/unknown). (#81573, @irajdeep)
- Added the metric `apiserver_watch_events_total`, which can be used to understand the number of watch events in the system. (#78732, @mborsz)
- Added the metric `apiserver_watch_events_sizes`, which can be used to estimate the sizes of watch events in the system. (#80477, @mborsz)
- Added the metric `sync_proxy_rules_iptables_restore_failures_total` for kube-proxy iptables-restore failures (both ipvs and iptables modes). (#81210, @figo)
- Added a `kubelet_evictions` metric that counts the number of pod evictions carried out by the kubelet to reclaim resources. (#81377, @sjenning)
- The `pod_name` and `container_name` labels have been removed to conform to the instrumentation guidelines. Any Prometheus queries matching the `pod_name` and `container_name` labels (e.g. cadvisor or kubelet probe metrics) must be updated to use `pod` and `container` instead. (#80376, @ehashman)
- The `rejected` label in the `apiserver_admission_webhook_admission_duration_seconds` metric now properly indicates whether the request was rejected. (#81399, @roycaihw)
- The `CustomResourceDefaulting` feature is promoted to beta and enabled by default. Defaults may be specified in structural schemas via the `apiextensions.k8s.io/v1` API. See https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema for details. (#81872, @sttts)
- … `spec`. PodOverhead is an alpha feature as of Kubernetes 1.16. (#78484, @egernst)
- An ephemeral container runs in an existing pod, much like `kubectl exec` runs a process in an existing container. Also like `kubectl exec`, no resources are reserved for ephemeral containers and they are not restarted when they exit. Note that container namespace targeting is not yet implemented, so process namespace sharing must be enabled to view processes from other containers in the pod. (#59484, @verb)
- `--endpoint-updates-batch-period` can be used to reduce the number of endpoint updates generated by pod changes. (#80509, @mborsz)
- The `--all-namespaces` flag is now honored by `kubectl wait`. (#81468, @ashutoshgngwr)
- `kubectl get -w` now takes an `--output-watch-events` flag to indicate the event type (ADDED, MODIFIED, DELETED). (#72416, @liggitt)
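A quick sketch of the new flag (shown as a fragment only, since it requires a running cluster):

```shell
# Each watch result is prefixed with its event type (ADDED, MODIFIED, DELETED).
kubectl get pods --watch --output-watch-events
```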
- `--shutdown-delay-duration` has been added to kube-apiserver to delay a graceful shutdown. During this period `/healthz` keeps returning success and requests are served normally, but `/readyz` immediately returns failure. This delay can be used to allow the SDN to update iptables on all nodes and stop sending traffic. (#74416, @sttts)
- kubeadm now seamlessly migrates the CoreDNS configuration on CoreDNS upgrades. (#78033, @rajansandeep)
- Added `/livez` for liveness health checking for kube-apiserver. Using the `--maximum-startup-sequence-duration` flag allows the liveness endpoint to defer boot-sequence failures for the specified duration. (#81969, @logicalhan)
- Set `IPv6DualStack=true` in the ClusterConfiguration. Additionally, for each worker node, set the feature gate for the kubelet using either `nodeRegistration.kubeletExtraArgs` or `KUBELET_EXTRA_ARGS`. (#80531, @Arvinderpal)
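A minimal kubeadm configuration sketch for the above, assuming the `v1beta2` kubeadm config API; the exact layout may differ in your deployment:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  IPv6DualStack: true
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    feature-gates: "IPv6DualStack=true"   # propagates the gate to the kubelet
```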
- When dual-stack is enabled, pass a comma-separated pair of CIDRs: `--cluster-cidr="<cidr1>,<cidr2>"`.
- When specifying `--(kube|system)-reserved-cgroup` with `--cgroup-driver=systemd`, it is now possible to use a fully qualified cgroupfs name (i.e. `/test-cgroup.slice`). (#78793, @mattjmcnaughton)
- The MutatingWebhookConfiguration and ValidatingWebhookConfiguration APIs have been promoted to `admissionregistration.k8s.io/v1`:
  - The `failurePolicy` default changed from `Ignore` to `Fail` for v1
  - The `matchPolicy` default changed from `Exact` to `Equivalent` for v1
  - The `timeout` default changed from `30s` to `10s` for v1
  - The `sideEffects` default is removed, the field is required, and v1 only allows `None` and `NoneOnDryRun`
  - The `admissionReviewVersions` default is removed and the field is required for v1 (the supported versions for AdmissionReview are `v1` and `v1beta1`)
  - For MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects created via `admissionregistration.k8s.io/v1`, the `name` field of each specified webhook must be unique
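A v1 configuration sketch reflecting the new required fields (the webhook name and service details are illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-webhook                     # illustrative name
webhooks:
  - name: validate.example.com              # must be unique within the object in v1
    admissionReviewVersions: ["v1", "v1beta1"]  # required in v1
    sideEffects: None                       # required in v1; only None/NoneOnDryRun allowed
    # failurePolicy now defaults to Fail (was Ignore in v1beta1)
    # timeoutSeconds now defaults to 10 (was 30)
    clientConfig:
      service:
        namespace: default
        name: example-webhook-svc           # illustrative service
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```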
- The AdmissionReview API sent to and received from admission webhooks has been promoted to `admission.k8s.io/v1`. Webhooks can specify a preference for receiving v1 AdmissionReview objects with `admissionReviewVersions: ["v1","v1beta1"]`, and must respond with an API object in the same `apiVersion` they are sent. When webhooks use `admission.k8s.io/v1`, the following additional validation is performed on their responses:
  - `response.patch` and `response.patchType` are not permitted from validating admission webhooks
  - `apiVersion: "admission.k8s.io/v1"` is required
  - `kind: "AdmissionReview"` is required
  - `response.uid: "<value of request.uid>"` is required
  - `response.patchType: "JSONPatch"` is required (if `response.patch` is set) (#80231, @liggitt)
- The CustomResourceDefinition API type is promoted to `apiextensions.k8s.io/v1` with the following changes:
  - Use of the `default` feature in validation schemas is limited to v1
  - `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
  - `spec.version` is removed in v1; use `spec.versions` instead
  - `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
  - `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
  - `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
  - `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
  - `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
  - `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinitions
  - `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinitions; it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
  - In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1 (fixes https://github.com/kubernetes/kubernetes/issues/66531)

  The `apiextensions.k8s.io/v1beta1` version of CustomResourceDefinition is deprecated and will no longer be served in v1.19. (#79604, @liggitt)
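A minimal `apiextensions.k8s.io/v1` CRD sketch showing the per-version schema layout (the group and kind are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # illustrative group/kind
spec:
  group: example.com
  scope: Namespaced                # no longer defaulted; must be explicit in v1
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:                      # per-version schema is required in v1
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true  # replaces spec.preserveUnknownFields
```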
- The ConversionReview API sent to and received from CustomResourceDefinition conversion webhooks has been promoted to `apiextensions.k8s.io/v1`. Conversion webhooks can now indicate that they support receiving and responding with ConversionReview API objects in the `apiextensions.k8s.io/v1` version by including `v1` in the `conversionReviewVersions` list in their CustomResourceDefinition. Conversion webhooks must respond with a ConversionReview object in the same `apiVersion` they receive. `apiextensions.k8s.io/v1` ConversionReview responses must specify a `response.uid` that matches the `request.uid` of the object they were sent. (#81476, @liggitt)
- Omit `nil` or empty fields when calculating the container hash value, to avoid hash changes. For a new field with a non-nil default value in the container spec, the hash would still change. (#57741, @dixudx)
- `conditions` in `apiextensions.v1beta1.CustomResourceDefinitionStatus` and `apiextensions.v1.CustomResourceDefinitionStatus` is now optional instead of required. (#64996, @roycaihw)
- `lastTransitionTime` is now updated. (#69655, @CaoShuFeng)
- Removed the `GetReference()` and `GetPartialReference()` functions from `pkg/api/ref`, since the same functions exist in `staging/src/k8s.io/client-go/tools/ref`. (#80361, @wojtek-t)
- … except for values under `metadata`. (#78829, @sttts)
- Fixed a problem with aggregated APIs whose backing services respond to `/` with non-2xx HTTP responses. (#79895, @deads2k)
- Fixed a bug in the openAPI definition of `io.k8s.apimachinery.pkg.runtime.RawExtension`, which previously required a field `raw` to be specified. (#80773, @jennybuckley)
- `metadata.generation=1` is added to old CustomResources. (#82005, @sttts)
- Added a `Patch` method to `ScaleInterface`. (#80699, @knight42)
- … must have `api-approved.kubernetes.io` set to either `unapproved.*` or a link to the pull request approving the schema. See https://github.com/kubernetes/enhancements/pull/1111 for more details. (#79992, @deads2k)
- Fixed a bug where `kubectl set config` hangs and uses 100% CPU on some invalid property names. (#79000, @pswica)
- Fixed the output of `kubectl get --watch-only` when watching a single resource. (#79345, @liggitt)
- Fixed `kubectl get --ignore-not-found` to continue processing when encountering an error. (#82120, @soltysh)
- When the value of `azure-load-balancer-resource-group` is an empty string, the default resourceGroup should be used. (#79514, @feiskyer)
- Added a service annotation `service.beta.kubernetes.io/azure-pip-name` to specify the public IP name for the Azure load balancer. (#81213, @nilo19)
- Added an annotation `service.beta.kubernetes.io/aws-load-balancer-eip-allocations` to assign AWS EIPs to a newly created Network Load Balancer; the number of allocations and subnets must match. (#69263, @brooksgarrett)
- Added the Azure cloud configuration options `LoadBalancerName` and `LoadBalancerResourceGroup` to allow the corresponding customizations of the Azure load balancer. (#81054, @nilo19)
- Fixed a bug in `kubeadm join --discovery-file` when using discovery files with embedded credentials. (#80675, @fabriziopandini)
- Introduced a deterministic order for certificate generation in `kubeadm init phase certs`. (#78556, @neolit123)
- Unmount directories under `/var/lib/kubelet`, for Linux only. (#81494, @Klaven)
- Fixed a bug where the `--cri-socket` flag did not work for `kubeadm reset`. (#79498, @SataQiu)
- `kubeadm join` fails if file-based discovery takes too long, with a default timeout of 5 minutes. (#80804, @olivierlemasle)
- Mount `/home/kubernetes/bin/nvidia/vulkan/icd.d` on the host to `/etc/vulkan/icd.d` inside containers requesting GPU. (#78868, @chardch)
- Use the `--pod-network-cidr` flag with init, or the `podSubnet` field in the kubeadm config, to pass a comma-separated list of pod CIDRs. (#79033, @Arvinderpal)
- Added the `--control-plane-endpoint` flag for `controlPlaneEndpoint`. (#79270, @SataQiu)
- … stack trace information for errors with `--v>=5`. (#80937, @neolit123)
- Added `--kubernetes-version` to `kubeadm init phase certs ca` and `kubeadm init phase kubeconfig`. (#80115, @gyuho)
- … the `upgrade diff` operation. (#80025, @SataQiu)
- Setting `E2E_USE_GO_RUNNER` will cause the tests to be run with the new golang-based test runner rather than the current bash wrapper. (#79284, @johnSchnake)
- The 404 request handler for the GCE Ingress load balancer now exports Prometheus metrics, including:
  - `http_404_request_total` (the number of 404 requests handled)
  - `http_404_request_duration_ms` (the amount of time the server took to respond, in ms)

  Also includes percentile groupings. The directory for the default 404 handler includes instructions on how to enable Prometheus for monitoring and setting alerts. (#79106, @vbannai)
- `kube-proxy --cleanup` will return the correct exit code. (#78775, @johscheuer)
- … `v=5`. (#80100, @andrewsykim)
- Ensure the `KUBE-MARK-DROP` chain in kube-proxy IPVS mode. The chain is ensured for both IPv4 and IPv6 in dual-stack operation. (#82214, @uablrek)
- Added the `node.kubernetes.io/exclude-balancer` and `node.kubernetes.io/exclude-disruption` labels in alpha, to prevent cluster deployers from depending on the optional node-role labels which not all clusters may provide. (#80238, @smarterclayton)
- Passing an invalid policy name in the `--cpu-manager-policy` flag will now cause the kubelet to fail instead of simply ignoring the flag and running the default cpumanager policy. (#80294, @klueska)
- Changed `node-lease-renew-interval` to 0.25 of the lease-renew-duration. (#80429, @gaorong)
- Attempt to set the kubelet's hostname and internal IP if `--cloud-provider=external` and no node addresses exist. (#75229, @andrewsykim)
- Added a `post-filter` extension point to the scheduling framework. (#78097, @draveness)
- Return an error when a plugin's score is outside the `[0, 100]` range. (#81015, @draveness)
- Updated `requestedToCapacityRatioArguments` to add a `resources` parameter that allows users to specify the resource name along with weights for each resource, scoring nodes based on the request-to-capacity ratio. (#77688, @sudeshsh)
- Added the `UnschedulableAndUnresolvable` status code to the scheduling framework. (#82034, @alculquicondor)
- Fixed a possible file descriptor leak and directory close in `doSafeMakeDir`. (#79534, @odinuge)
- … `skuname` or `storageaccounttype`; these will no longer fail. (#80837, @rmweir)
- Fixed kubelet's inability to delete orphaned pod directories when the kubelet's pod directory (default `/var/lib/kubelet/pods`) symbolically links to another disk device's directory. (#79094, @gaorong)
- … `framework.WaitForPodsWithLabelRunningReady`. (#78687, @pohly)
- Added `TerminationGracePeriodSeconds` to the test framework API. (#82170, @vivekbagade)
- `/test/e2e/framework`: Added a `non-blocking-taints` flag, which allows tests to run in environments with tainted nodes. The value should be a comma-separated list of strings. (#81043, @johnSchnake)
- `framework.ExpectNoError` no longer logs the error, and instead relies on using the new `log.Fail` as the gomega fail handler. (#80253, @pohly)
- If `%HOMEDRIVE%\%HOMEPATH%` does not contain a `.kube\config` file and `%USERPROFILE%` exists and is writeable, `%USERPROFILE%` is now preferred over `%HOMEDRIVE%\%HOMEPATH%` as the home folder. (#73923, @liggitt)
- Kubelet plugin watcher is now supported on Windows nodes. (#81397, @ddebroy)
| Filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | 68837f83bcf380e22b50f145fb64404584e96e5714a6c0cbc1ba76e290dc267f6b53194e2b51f19c1145ae7c3e5874124d35ff430cda15f67b0f9c954803389c |
| kubernetes-src.tar.gz | 922552ed60d425fa6d126ffb34db6a7f123e1b9104e751edaed57b4992826620383446e6cf4f8a9fd55aac72f95a69b45e53274a41aaa838c2c2ae15ff4ddad2 |
| Filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | d0df8f57f4d9c2822badc507345f82f87d0e8e49c79ca907a0e4e4dd634db964b84572f88b8ae7eaf50a20965378d464e0d1e7f588e84e926edfb741b859e7d2 |
| kubernetes-client-darwin-amd64.tar.gz | 0bc7daaf1165189b57dcdbe59f402731830b6f4db53b853350056822602579d52fe43ce5ac6b7d4b6d89d81036ae94eab6b7167e78011a96792acfbf6892fa39 |
| kubernetes-client-linux-386.tar.gz | 7735c607bb99b47924140a6a3e794912b2b97b6b54024af1de5db6765b8cc518cba6b145c25dc67c8d8f827805d9a61f676b4ae67b8ef86cfda2fe76de822c6a |
| kubernetes-client-linux-amd64.tar.gz | d35f70cea4780a80c24588bc760c38c138d73e5f80f9fe89d952075c24cbf179dd504c2bd7ddb1756c2632ffbcc69a334684710a2d702443043998f66bec4a25 |
| kubernetes-client-linux-arm.tar.gz | e1fc50b6884c42e92649a231db60e35d4e13e58728e4af7f6eca8b0baa719108cdd960db1f1dbd623085610dbccf7f17df733de1faf10ebf6cd1977ecd7f6213 |
| kubernetes-client-linux-arm64.tar.gz | defc25fe403c20ef322b2149be28a5b44c28c7284f11bcf193a07d7f45110ce2bd6227d3a4aa48859aaeb67796809962785651ca9f76121fb9534366b40c4b7d |
| kubernetes-client-linux-ppc64le.tar.gz | e87b16c948d09ddbc5d6e3fab05ad3c5a58aa7836d4f42c59edab640465531869c92ecdfa2845ec3eecd95b8ccba3dafdd9337f4c313763c6e5105b8740f2dca |
| kubernetes-client-linux-s390x.tar.gz | 2c25a1860fa81cea05a1840d6a200a3a794cc50cfe45a4efec57d7122208b1354e86f698437bbe5c915d6fb70ef9525f844edc0fa63387ab8c1586a6b22008a5 |
| kubernetes-client-windows-386.tar.gz | 267654a7ecfa37c800c1c94ea78343f5466783881cfac62091cfbd8c62489f04bd74a7a39a08253cb51d7ba52c207f56da371f992f61c1468b595c094f0e080f |
| kubernetes-client-windows-amd64.tar.gz | bd4c25b80e54f9fc0c07f64550d020878f899e4e3a28ca57dd532fdbab9ab700d296d2890185591ac27bce6fde336ab90f3102a6797e174d233db76f24f5ac1b |
| Filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | 13a93bb9bd5599b669af7bd25537ee81cefd6d8c73bedfbac845703c01950c70b2aa39f94f2346d935bc167bae435dbcd6e1758341b634102265657e1b1c1259 |
| kubernetes-server-linux-arm.tar.gz | 781d127f32d8479bc21beed855ec73e383702e6e982854138adce8edb0ee4d1d4b0c6e723532bc761689d17512c18b1945d05b0e4adb3fe4b98428cce40d52c8 |
| kubernetes-server-linux-arm64.tar.gz | 6d6dfa49288e4a4ce77ca4f7e83a51c78a2b1844dd95df10cb12fff5a104e750d8e4e117b631448e066487c4c71648e822c87ed83a213f17f27f8c7ecb328ca4 |
| kubernetes-server-linux-ppc64le.tar.gz | 97804d87ea984167fdbdedcfb38380bd98bb2ef150c1a631c6822905ce5270931a907226d5ddefc8d98d5326610daa79a08964fc4d7e8b438832beb966efd214 |
| kubernetes-server-linux-s390x.tar.gz | d45bd651c7f4b6e62ceb661c2ec70afca06a8d1fde1e50bb7783d05401c37823cf21b9f0d3ac87e6b91eeec9d03fc539c3713fd46beff6207e8ebac1bf9d1dd5 |
| filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | 42c57b59ce43f8961e427d622ee9cfa85cc23468779945262d59aa8cd31afd495c7abaaef7263b9db60ec939ba5e9898ebc3281e8ec81298237123ce4739cbff |
| kubernetes-node-linux-arm.tar.gz | 034a5611909df462ef6408f5ba5ff5ebfb4e1178b2ad06a59097560040c4fcdb163faec48ab4297ca6c21282d7b146f9a5eebd3f2573f7d6d7189d6d29f2cf34 |
| kubernetes-node-linux-arm64.tar.gz | df1493fa2d67b59eaf02096889223bbf0d71797652d3cbd89e8a3106ff6012ea17d25daaa4baf9f26c2e061afb4b69e3e6814ba66e9c4744f04230c922fbc251 |
| kubernetes-node-linux-ppc64le.tar.gz | 812a5057bbf832c93f741cc39d04fc0087e36b81b6b123ec5ef02465f7ab145c5152cfc1f7c76032240695c7d7ab71ddb9a2a4f5e1f1a2abb63f32afa3fb6c7c |
| kubernetes-node-linux-s390x.tar.gz | 2a58a4b201631789d4309ddc665829aedcc05ec4fe6ad6e4d965ef3283a381b8a4980b4b728cfe9a38368dac49921f61ac6938f0208b671afd2327f2013db22a |
| kubernetes-node-windows-amd64.tar.gz | 7fb09e7667715f539766398fc1bbbc4bf17c64913ca09d4e3535dfc4d1ba2bf6f1a3fcc6d81dbf473ba3f10fd29c537ce5debc17268698048ce7b378802a6c46 |
| filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | 2feadb470a8b0d498dff2c122d792109bc48e24bfc7f49b4b2b40a268061c83d9541cbcf902f2b992d6675e38d69ccdded9435ac488e041ff73d0c2dc518a5a9 |
| kubernetes-src.tar.gz | 6d8877e735e041c989c0fca9dd9e57e5960299e74f66f69907b5e1265419c69ed3006c0161e0ced63073e28073355a5627154cf5db53b296b4a209b006b45db0 |
| filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | 27bbfcb709854a9625dbb22c357492c1818bc1986b94e8cf727c046d596c4f1fe385df5b2ce61baaf95b066d584a8c04190215eaf979e12707c6449766e84810 |
| kubernetes-client-darwin-amd64.tar.gz | 9c2ea22e188282356cd532801cb94d799bde5a5716f037b81e7f83273f699bf80776b253830e3a4e1a72c420f0c0b84e28ae043c9d28a49e9455e6b1449a353c |
| kubernetes-client-linux-386.tar.gz | bbba78b8f972d0c247ed11e88010fc934a694efce8d2605635902b4a22f5ecc7e710f640bcefbba97ef28f6db68b9d8fb9e6a4a099603493c1ddcc5fd50c0d17 |
| kubernetes-client-linux-amd64.tar.gz | f2f04dc9b93d1c8f5295d3f559a3abdd19ea7011741aa006b2cd96542c06a6892d7ed2bad8479c89e7d6ae0ed0685e68d5096bd5a46431c8cab8a90c04f1f00c |
| kubernetes-client-linux-arm.tar.gz | 77d1f5b4783f7480d879d0b7682b1d46e730e7fb8edbc6eccd96986c31ceecbf123cd9fd11c5a388218a8c693b1b545daed28ca88f36ddaca06adac4422e4be5 |
| kubernetes-client-linux-arm64.tar.gz | 0b57aa1dbbce51136789cb373d93e641d1f095a4bc9695d60917e85c814c8959a4d6e33224dc86295210d01e73e496091a191f303348f3b652a2b6160b1e6059 |
| kubernetes-client-linux-ppc64le.tar.gz | 847065d541dece0fc931947146dbc90b181f923137772f26c7c93476e022f4f654e00f9928df7a13a9dec27075dd8134bdb168b5c57d4efa29ed20a6a2112272 |
| kubernetes-client-linux-s390x.tar.gz | d7e8a808da9e2551ca7d8e7cb25222cb9ac01595f78ebbc86152ae1c21620d4d8478ef3d374d69f47403ca913fc716fbaa81bd3ff082db2fc5814ef8dc66eeec |
| kubernetes-client-windows-386.tar.gz | c9cf6a6b9f2f29152af974d30f3fd97ba33693d5cbbf8fc76bcf6590979e7ac8307e5da4f84a646cec6b68f6fa1a83aa1ce24eb6429baa0a39c92d5901bd80be |
| kubernetes-client-windows-amd64.tar.gz | ebea0c0b64d251e6023e8a5a100aa609bc278c797170765da2e35c8997efc233bec9f8d1436aeee1cd6459e30ec78ba64b84de47c26a4e4645e153e5e598202b |
| filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | 2fe7ccce15e705826c4ccfce48df8130ba89a0c930bca4b61f49267e9d490f57cf6220671752e44e55502bee501a9af2f0ac3927378a87b466f2526fa6e45834 |
| kubernetes-server-linux-arm.tar.gz | 6eb77e59095a1de9eb21e7065e8d10b7d0baf1888991a42089ede6d4f8a8cac0b17ae793914eef5796d56d8f0b958203d5df1f7ed45856dce7244c9f047f9793 |
| kubernetes-server-linux-arm64.tar.gz | 429ce0d5459384c9d3a2bb103924eebc4c30343c821252dde8f4413fcf29cc73728d378bfd193c443479bde6bfd26e0a13c036d4d4ae22034d66f6cad70f684d |
| kubernetes-server-linux-ppc64le.tar.gz | 18041d9c99efc00c8c9dbb6444974efdbf4969a4f75faea75a3c859b1ee8485d2bf3f01b7942a524dcd6a71c82af7a5937fc9120286e920cf2d501b7c76ab160 |
| kubernetes-server-linux-s390x.tar.gz | 2124c3d8856e50ca6b2b61d83f108ab921a1217fac2a80bf765d51b68f4e67d504471787d375524974173782aa37c57b6bf1fc6c7704ed7e6cabe15ec3c543b1 |
| filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | ea1bcd8cc51fbc95058a8a592eb454c07ab5dadc1c237bbc59f278f8adc46bda1f334e73463e1edbd6da5469c4a527ceb1cb0a96686493d3ff4e8878dd1c9a20 |
| kubernetes-node-linux-arm.tar.gz | e5d62df5fd086ff5712f59f71ade9efcf617a13c567de965ce54c79f3909372bed4edbf6639cf058fe1d5c4042f794e1c6a91e5e20d9dcce597a95dedf2474b2 |
| kubernetes-node-linux-arm64.tar.gz | 5aa0a7a3d02b65253e4e814e51cea6dd895170f2838fea02f94e4efd3f938dbf83bc7f209801856b98420373c04147fab9cb8791d24d51dcedf960068dfe6fda |
| kubernetes-node-linux-ppc64le.tar.gz | f54bc5ae188f8ecb3ddcae20e06237430dd696f444a5c65b0aa3be79ad85c5b500625fa47ed0e126f6e738eb5d9ee082b52482a6913ec6d22473520fa6582e66 |
| kubernetes-node-linux-s390x.tar.gz | afa4f9b747fff20ed03d40092a2df60dbd6ced0de7fd0c83c001866c4fe5b7117468e2f8c73cbef26f376b69b4750f188143076953fc200e8a5cc002c8ac705b |
| kubernetes-node-windows-amd64.tar.gz | e9b76014a1d4268ad66ade06883dd3344c6312ece14ee988af645bdf9c5e9b62c31a0e339f774c67799b777314db6016d86a3753855c7d2eb461fbbf4e154ae7 |
| filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | d1f4e9badc6a4422b9a261a5375769d63f0cac7fff2aff4122a325417b77d5e5317ba76a180cda2baa9fb1079c33e396fc16f82b31eeebea61004b0aabdf8c32 |
| kubernetes-src.tar.gz | 2ab20b777311746bf9af0947a2bea8ae36e27da7d917149518d7c2d2612f513bbf88d1f2c7efff6dc169aa43c2dd3be73985ef619172d50d99faa56492b35ce4 |
| filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | 55523fd5cfce0c5b79e981c6a4d5572790cfe4488ed23588be45ee13367e374cf703f611769751583986557b2607f271704d9f27e03f558e35e7c75796476b10 |
| kubernetes-client-darwin-amd64.tar.gz | 13e696782713da96f5fb2c3fa54d99ca40bc71262cb2cbc8e77a6d19ffd33b0767d3f27e693aa84103aca465f9b00ed109996d3579b4bd28566b8998212a0872 |
| kubernetes-client-linux-386.tar.gz | 7f4818599b84712edd2bf1d94f02f9a53c1f827b428a888356e793ff62e897276afcbc97f03bc0317e7d729740410037c57e6443f65c691eb959b676833511fa |
| kubernetes-client-linux-amd64.tar.gz | 8a2656289d7d86cbded42831f6bc660b579609622c16428cf6cc782ac8b52df4c8511c5aad65aa520f398a65e35dee6ea5b5ad8e5fd14c5a8690a7248dc4c109 |
| kubernetes-client-linux-arm.tar.gz | 418606bc109b9acb2687ed297fa2eec272e8cb4ad3ce1173acd15a4b43cec0ecfd95e944faeecf862b349114081dd99dfac8615dc95cffc1cd4983c5b38e9c4e |
| kubernetes-client-linux-arm64.tar.gz | 2eb943b745c270cd161e01a12195cfb38565de892a1da89e851495fb6f9d6664055e384e30d3551c25f120964e816e44df5415aff7c12a8639c30a42271abef7 |
| kubernetes-client-linux-ppc64le.tar.gz | 262e7d61e167e7accd43c47e9ce28323ae4614939a5af09ecc1023299cd2580220646e7c90d31fee0a17302f5d9df1e7da1e6774cc7e087248666b33399e8821 |
| kubernetes-client-linux-s390x.tar.gz | 8f0cfe669a211423dd697fdab722011ea9641ce3db64debafa539d4a424dd26065c8de5da7502a4d40235ff39158f3935bd337b807a63771391dffb282563ccf |
| kubernetes-client-windows-386.tar.gz | b1deab89653f4cd3ad8ad68b8ec3e1c038d1ef35bd2e4475d71d4781acf0b2002443f9c2b7d2cf06cbb9c568bea3881c06d723b0529cc8210f99450dc2dc5e43 |
| kubernetes-client-windows-amd64.tar.gz | 0e3b5150767efd0ed5d60b2327d2b7f6f2bda1a3532fca8e84a7ca161f6e069fae15af37d3fe8a641d34c9a65fc61f1c44dd3265ef6cacfd2df55c9c004bc6bd |
| filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | 32688295df1fcdb9472ed040dc5e8b19d04d62789d2eca64cfe08080d08ffee1eaa4853ce40bd336aabd2f764dd65b36237d4f9f1c697e2d6572861c0c8eff01 |
| kubernetes-server-linux-arm.tar.gz | c8ea6d66e966889a54194f9dce2021131e9bae34040c56d8839341c47fc4074d6322cc8aadce28e7cdcee88ec79d37a73d52276deb1cc1eee231e4d3083d54e5 |
| kubernetes-server-linux-arm64.tar.gz | 12b42cfa33ff824392b81a604b7edcab95ecc67cddfc24c47ef67adb356a333998bc7b913b00daf7a213692d8d441153904474947b46c7f76ef03d4b2a63eab0 |
| kubernetes-server-linux-ppc64le.tar.gz | e03f0eba181c03ddb7535e56ff330dafebb7dcb40889fd04f5609617ebb717f9f833e89810bff36d5299f72ae75d356fffb80f7b3bab2232c7597abcc003b8ba |
| kubernetes-server-linux-s390x.tar.gz | 4e7bd061317a3445ad4b6b308f26218777677a1fef5fda181ee1a19e532a758f6bd3746a3fe1917a057ed71c94892aeaf00dd4eb008f61418ec3c80169a1f057 |
| filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | dc5606c17f0191afc6f28dce5ab566fd8f21a69fa3989a1c8f0976d7b8ccd32e26bb21e9fec9f4529c5a6c8301747d278484688a0592da291866f8fa4893dcbb |
| kubernetes-node-linux-arm.tar.gz | 3d5d9893e06fd7be51dca11182ecb9e93108e86af40298fe66bb62e5e86f0bf4713667ba63d00b02cfddaf20878dd78cc738e76bf1ca715bbbe79347ca518ec4 |
| kubernetes-node-linux-arm64.tar.gz | fd18a02f32aeafc5cce8f3f2eadd0e532857bd5264b7299b4e48f458f77ebaa53be94b1d1fe2062168f9d88c8a97e6c2d904fc3401a2d9e69dd4e8c87d01d915 |
| kubernetes-node-linux-ppc64le.tar.gz | 703afd80140db2fae897d83b3d2bc8889ff6c6249bb79be7a1cce6f0c9326148d22585a5249c2e976c69a2518e3f887eef4c9dc4a970ebb854a78e72c1385ccb |
| kubernetes-node-linux-s390x.tar.gz | 445d4ef4f9d63eabe3b7c16114906bc450cfde3e7bf7c8aedd084c79a5e399bd24a7a9c2283b58d382fb11885bb2b412773a36fffb6fc2fac15d696439a0b800 |
| kubernetes-node-windows-amd64.tar.gz | 88b04171c3c0134044b7555fbc9b88071f5a73dbf2dac21f8a27b394b0870dff349a56b0ee4d8e1d9cfbeb98645e485f40b8d8863f3f3e833cba0ca6b1383ccf |
- Webhook client credentials configured with `--admission-control-config-file` must include non-default ports in the configured hostnames. For example, a webhook configured to communicate with port 8443 on service `mysvc` in namespace `myns` would specify client credentials in a stanza with `name: mysvc.myns.svc:8443`. See https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#authenticate-apiservers for more details. (#82252, @liggitt)
- `kubectl cp` no longer supports copying symbolic links from containers; to support this use case, see `kubectl exec --help` for examples using `tar` directly. (#82143, @soltysh)
- New `container_sockets`, `container_threads`, and `container_threads_max` metrics.
- Set `EnableAggregatedDiscoveryTimeout=false` if you must remove this check, but that feature gate will be removed in the next release.
- Readiness is now signaled via the `ready` plugin.
- The `proxy` plugin has been deprecated; the `forward` plugin is to be used instead.
- The `kubernetes` plugin removes the `resyncperiod` option.
- The `upstream` option is deprecated and ignored if included.
- `--maximum-startup-sequence-duration` allows the liveness endpoint to defer boot-sequence failures for the specified duration. (#81969, @logicalhan)
- The `rejected` label in the `apiserver_admission_webhook_admission_duration_seconds` metric now properly indicates whether a request was rejected. Added a new counter metric `apiserver_admission_webhook_rejection_count` with details about the cause of a webhook rejection. (#81399, @roycaihw)
- Added a `container_state` label to the `running_container_count` kubelet metric, to get the count of containers by state (running/exited/created/unknown). (#81573, @irajdeep)
- `kubectl wait` now supports the `--all-namespaces` flag. (#81468, @ashutoshgngwr)
- To enable dual stack, set `--service-cluster-ip-range=<CIDR>,<CIDR>` and make sure the `IPv6DualStack` feature flag is turned on. The flag is validated and used as follows:
  - `--service-cluster-ip-range[0]` is considered the primary service range and will be used for any service with `Service.Spec.IPFamily = nil`, and for any service that existed at the time the feature flag was turned on.
  - Pods are assigned `PodIPs` (according to family and binding selection in user code), but ingress will only be performed against the pod's primary IP. This can be configured by supplying a single entry to the `--service-cluster-ip-range` flag.
  - A maximum of two entries is allowed in `--service-cluster-ip-range`, and they are validated to be dual stack, i.e. `--service-cluster-ip-range=<v4>,<v6>` or `--service-cluster-ip-range=<v6>,<v4>`.
  - Service CIDRs may not be larger than `<v6>/108` or `<v4>/12`.
  - Elsewhere, set `--service-cluster-ip-range=<CIDR>,<CIDR>` and make sure the `IPv6DualStack` feature flag is turned on; the flag is validated as above.
- A new field `Service.Spec.IPFamily` has been added. The field defaults to the family of the first service CIDR in the `--service-cluster-ip-range` flag once the feature gate is turned on. The possible values for this field are:
  - `IPv4`: the server will assign an IP from a `service-cluster-ip-range` that is IPv4 (either the primary or the secondary, according to how they were configured).
  - `IPv6`: the server will assign an IP from a `service-cluster-ip-range` that is IPv6 (either the primary or the secondary, according to how they were configured).
- This feature is mutually exclusive with the `EndpointSlice` feature; they cannot be turned on together. `metaproxy` is yet to implement `EndpointSlice` handling.
- Added the `authentication_attempts` metric. (#81509, @RainbowMango)
- The `CustomResourceValidation`, `CustomResourceSubresources`, `CustomResourceWebhookConversion`, and `CustomResourcePublishOpenAPI` features are now GA; the corresponding feature gates are deprecated and will be removed in v1.18. (#81965, @roycaihw)
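The dual-stack validation described for `--service-cluster-ip-range` (at most two comma-separated entries; when two are given, one must be IPv4 and one IPv6, in either order) can be sketched as follows. This is an illustration of the stated rules using the standard library, not the actual kube-apiserver validation code:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// validateServiceCIDRs sketches the dual-stack rules described above:
// at most two CIDRs, and when two are supplied they must come from
// different IP families.
func validateServiceCIDRs(flagValue string) error {
	parts := strings.Split(flagValue, ",")
	if len(parts) > 2 {
		return fmt.Errorf("at most two entries allowed, got %d", len(parts))
	}
	var isV4 []bool
	for _, p := range parts {
		_, cidr, err := net.ParseCIDR(strings.TrimSpace(p))
		if err != nil {
			return fmt.Errorf("invalid CIDR %q: %v", p, err)
		}
		// To4 returns non-nil only for IPv4 addresses.
		isV4 = append(isV4, cidr.IP.To4() != nil)
	}
	if len(isV4) == 2 && isV4[0] == isV4[1] {
		return fmt.Errorf("two entries must be dual stack (one IPv4, one IPv6)")
	}
	return nil
}

func main() {
	fmt.Println(validateServiceCIDRs("10.0.0.0/16,fd00::/108"))         // <nil>
	fmt.Println(validateServiceCIDRs("10.0.0.0/16,10.1.0.0/16") != nil) // true: same family
}
```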
- Added `node.kubernetes.io/exclude-balancer` and `node.kubernetes.io/exclude-disruption` labels in alpha to prevent cluster deployers from being dependent on the optional node-role labels which not all clusters may provide. (#80238, @smarterclayton)
- New metrics `aggregator_openapi_v2_regeneration_count`, `aggregator_openapi_v2_regeneration_gauge`, and `apiextension_openapi_v2_regeneration_count` count the triggering APIService and CRD and the reason (add, update, delete) for OpenAPI regeneration.
- CustomResourceDefinitions can now specify defaults via the `apiextensions.k8s.io/v1` API. See https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema for details. (#81872, @sttts)
- `framework.ExpectNoError` no longer logs the error and instead relies on using the new `log.Fail` as Gomega fail handler. (#80253, @pohly)
- … except values under `metadata`. (#78829, @sttts)
- Fixed `kubectl logs -f` for Windows Server containers. (#81747, @Random-Liu)

| filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | 16513ebb52b01afee26156dcd4c449455dc328d7a080ba54b3f3a4584dbd9297025e33a9dafe758b259ae6e33ccb84a18038f6f415e98be298761c4d3dfee94b |
| kubernetes-src.tar.gz | 3933f441ebca812835d6f893ec378896a8adb7ae88ca53247fa402aee1fda00d533301ac806f6bf106badf2f91be8c2524fd98e9757244b4b597c39124c59d01 |
| filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | 28f0a8979f956aa5b3be1c1158a3ade1b242aac332696cb604fbdba44c4279caa1008840af01e50692bf48d0342018f882dd6e30f9fe3279e9784094cfc9ff3c |
| kubernetes-client-darwin-amd64.tar.gz | 8804f60b690e5180125cf6ac6d739ad5432b364c5e0d0ee0d2f06220c86ca3a2cffc475e0e3c46c19466e5d1566a5b8bf0a33191cba5bbd3ff27ac64ceee57a0 |
| kubernetes-client-linux-386.tar.gz | 8f7f86db5a496afd269b926b6baf341bbd4208f49b48fad1a44c5424812667b3bd7912b5b97bd7844dee2a7c6f9441628f7b5db3caa14429020de7788289191c |
| kubernetes-client-linux-amd64.tar.gz | 7407dc1216cac39f15ca9f75be47c0463a151a3fda7d9843a67c0043c69858fb36eaa6b4194ce5cefd125acd7f521c4b958d446bb0c95ca73a3b3ae47af2c3ee |
| kubernetes-client-linux-arm.tar.gz | 249a82a0af7d8062f49edd9221b3823590b6d166c1bca12c787ae640d6a131bd6a3d7c99136de62074afa6cabe8900dcf4e11037ddbfdf9d5252fc16e256eeb5 |
| kubernetes-client-linux-arm64.tar.gz | 3a8416d99b6ae9bb6d568ff15d1783dc521fe58c60230f38126c64a7739bf03d8490a9a10042d1c4ef07290eaced6cb9d42a9728d4b937305d63f8d3cc7a66f8 |
| kubernetes-client-linux-ppc64le.tar.gz | 105bf4afeccf0b314673265b969d1a7f3796ca3098afa788c43cd9ff3e14ee409392caa5766631cca180e790d92731a48f5e7156167637b97abc7c178dd390f3 |
| kubernetes-client-linux-s390x.tar.gz | 98de73accb7deba9896e14a5012a112f6fd00d6e6868e4d21f61b06605efa8868f1965a1c1ba72bb8847416bc789bd7ef5c1a125811b6c6df060217cd84fdb2c |
| kubernetes-client-windows-386.tar.gz | 7a43f3285b0ab617990497d41ceadfbd2be2b72d433b02508c198e9d380fb5e0a96863cc14d0e9bf0317df13810af1ab6b7c47cd4fa1d0619a00c9536dc60f0f |
| kubernetes-client-windows-amd64.tar.gz | f3fafcffc949bd7f8657dd684c901e199b21c4812009aca1f8cf3c8bf3c3230cab072208d3702d7a248c0b957bc513306dd437fb6a54e1e64b4d7dc8c3c180cd |
| filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | 87b46e73ae2162ee49f510da6549e57503d3ea94b3c4488f39b0b93d45603f540ece30c3784c5e201711a7ddd1260481cd20ac4c618eaf46879e841d054a115a |
| kubernetes-server-linux-arm.tar.gz | 80ba8e615497c0b9c339fbd2d6a4dda54fdbd5659abd7d8e8d448d8d8c24ba7f0ec48693e4bf8ed20513c46432f2a0f1039ab9044f0ed006b935a58772372d95 |
| kubernetes-server-linux-arm64.tar.gz | b4a76a5fc026b4b3b5f9666df05e46896220591b21c147982ff3d91cec7330ed78cf1fc63f5ab759820aadbcfe400c1ad75d5151d9217d42e3da5873e0ff540d |
| kubernetes-server-linux-ppc64le.tar.gz | fb435dfd5514e4cd3bc16b9e71865bff3cdd5123fc272c8cbc5956c260449e0dcfd30d2fdb120da73134e62f48507c5a02d4528d7b9d978765ff4ed740b274e8 |
| kubernetes-server-linux-s390x.tar.gz | 65ed3d372a4d03493d0a586c7f67f1236aa99f02552195f1fb58079bc24787200d9a0f34d0c311a846345d0d70d02ad726f74376a91d3ced234bbfdce80c5133 |
| filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | c9161689532a5e995a68bb0985a983dc43d8e747a05f37849cd33062c07e5202417b26bff652b8bc9c0005026618b7ebc56f918c71747a3addb5da044e683b4a |
| kubernetes-node-linux-arm.tar.gz | 7dba9fdb290f33678983c046eb145446edb1b7479c2403f9e8bd835c3d832ab1f2acb28124c53af5b046d47ab433312d6a654f000a22f8e10795b0bc45bfbddb |
| kubernetes-node-linux-arm64.tar.gz | 8c435824667cd9ec7efdfb72c1d060f62ca61b285cbb9575a6e6013e20ec5b379f77f51d43ae21c1778a3eb3ef69df8895213c54e4b9f39c67c929a276be12de |
| kubernetes-node-linux-ppc64le.tar.gz | 2cfca30dbe49a38cd1f3c78135f60bf7cb3dae0a8ec5d7fa651e1c5949254876fbab8a724ed9a13f733a85b9960edcc4cc971dc3c16297db609209c4270f144f |
| kubernetes-node-linux-s390x.tar.gz | 63bbe469ddd1be48624ef5627fef1e1557a691819c71a77d419d59d101e8e6ee391eb8545da35b412b94974c06d73329a13660484ab26087a178f34a827a3dcb |
| kubernetes-node-windows-amd64.tar.gz | 07cb97d5a3b7d0180a9e22696f417422a0c043754c81ae68338aab7b520aa7c119ff53b9ad835f9a0bc9ea8c07483ce506af48d65641dd15d30209a696b064bb |
- Removed cadvisor metric labels `pod_name` and `container_name` to match the instrumentation guidelines; use the `pod` and `container` labels instead. (#80376, @ehashman)
- Conversion webhooks can indicate they support receiving and responding with `ConversionReview` API objects in the `apiextensions.k8s.io/v1` version by including `v1` in the `conversionReviewVersions` list in their CustomResourceDefinition. Conversion webhooks must respond with a `ConversionReview` object in the same `apiVersion` they receive. `apiextensions.k8s.io/v1` `ConversionReview` responses must specify a `response.uid` matching the `request.uid` of the object sent to them. (#81476, @liggitt)
- The CustomResourceDefinition API type is promoted to `apiextensions.k8s.io/v1` with the following changes: (#79604, @liggitt)
  - the `default` feature in validation schemas is limited to v1
  - `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
  - `spec.version` is removed; use `spec.versions` instead
  - `spec.validation` is removed; use `spec.versions[*].schema` instead
  - `spec.subresources` is removed; use `spec.versions[*].subresources` instead
  - `spec.additionalPrinterColumns` is removed; use `spec.versions[*].additionalPrinterColumns` instead
  - `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig`
  - `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions`
  - `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinitions
  - `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinitions; it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
  - in `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` (fixes https://github.com/kubernetes/kubernetes/issues/66531)
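For readers migrating manifests, the field moves listed above amount to a small lookup table. The sketch below is only a reading aid for those notes, not an official or mechanical converter:

```go
package main

import "fmt"

// v1FieldReplacements mirrors the apiextensions.k8s.io/v1 field moves
// listed above (removed fields map to their spec.versions[*] or
// spec.conversion.webhook replacements).
var v1FieldReplacements = map[string]string{
	"spec.version":                             "spec.versions",
	"spec.validation":                          "spec.versions[*].schema",
	"spec.subresources":                        "spec.versions[*].subresources",
	"spec.additionalPrinterColumns":            "spec.versions[*].additionalPrinterColumns",
	"spec.conversion.webhookClientConfig":      "spec.conversion.webhook.clientConfig",
	"spec.conversion.conversionReviewVersions": "spec.conversion.webhook.conversionReviewVersions",
}

func main() {
	for _, old := range []string{"spec.version", "spec.validation"} {
		fmt.Printf("%s -> %s\n", old, v1FieldReplacements[old])
	}
}
```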
- Volumes specified in a pod but not used in it are no longer unnecessarily formatted, mounted and reported in `node.status.volumesInUse`. (#81163, @jsafrane)
- … `Authorization` header contents. (#81330, @tedyu)
- … part of the `describe pvc` output. (#76463, @j-griffith)
- Added annotation `service.beta.kubernetes.io/azure-pip-name` to specify the public IP name for an Azure load balancer. (#81213, @nilo19)
- Added a `Patch` method to `ScaleInterface`. (#80699, @knight42)
- Added `LoadBalancerName` and `LoadBalancerResourceGroup` to allow the corresponding customizations of the Azure load balancer. (#81054, @nilo19)
- The `--basic-auth-file` flag and authentication mode are deprecated and will be removed in a future release; they are not recommended for production environments. (#81152, @tedyu)
- Deprecated the `--cloud-provider-gce-lb-src-cidrs` flag in the kube-apiserver. This flag will be removed once the GCE cloud provider is removed from kube-apiserver. (#81094, @andrewsykim)
- The `metadata.selfLink` field is deprecated in individual and list objects. It will no longer be returned starting in v1.20, and will be removed entirely in v1.21. (#80978, @wojtek-t)

| filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | 82bc119f8d1e44518ab4f4bdefb96158b1a3634c003fe1bc8dcd62410189449fbd6736126409d39a6e2d211a036b4aa98baef3b3c6d9f7505e63430847d127c2 |
| kubernetes-src.tar.gz | bbf330b887a5839e3d3219f5f4aa38f1c70eab64228077f846da80395193b2b402b60030741de14a9dd4de963662cfe694f6ab04035309e54dc48e6dddd5c05d |
| filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | 8d509bdc1ca62463cbb25548ec270792630f6a883f3194e5bdbbb3d6f8568b00f695e39950b7b01713f2f05f206c4d1df1959c6ee80f8a3e390eb94759d344b2 |
| kubernetes-client-darwin-amd64.tar.gz | 1b00b3a478c210e3c3e6c346f5c4f7f43a00d5ef6acb8d9c1feaf26f913b9d4f97eb6db99bbf67953ef6399abe4fbb79324973c1744a6a8cd76067cb2aeed2ca |
| kubernetes-client-linux-386.tar.gz | 82424207b4ef52c3722436eaaf86dbe5c93c6670fd09c2b04320251028fd1bb75724b4f490b6e8b443bd8e5f892ab64612cd22206119924dafde424bdee9348a |
| kubernetes-client-linux-amd64.tar.gz | 57ba937e58755d3b7dfd19626fedb95718f9c1d44ac1c5b4c8c46d11ba0f8783f3611c7b946b563cac9a3cf104c35ba5605e5e76b48ba2a707d787a7f50f7027 |
| kubernetes-client-linux-arm.tar.gz | 3a3601026e019b299a6f662b887ebe749f08782d7ed0d37a807c38a01c6ba19f23e2837c9fb886053ad6e236a329f58a11ee3ec4ba96a8729905ae78a7f6c58c |
| kubernetes-client-linux-arm64.tar.gz | 4cdeb2e678c6b817a04f9f5d92c5c6df88e0f954550961813fca63af4501d04c08e3f4353dd8b6dce96e2ee197a4c688245f03c888417a436b3cf70abd4ba53a |
| kubernetes-client-linux-ppc64le.tar.gz | 0cc7c8f7b48f5affb679352a94e42d8b4003b9ca6f8cbeaf315d2eceddd2e8446a58ba1d4a0df18e8f9c69d0d3b5a46f25b2e6a916e57975381e504d1a4daa1b |
| kubernetes-client-linux-s390x.tar.gz | 9d8fa639f543e707dc65f24ce2f8c73a50c606ec7bc27d17840f45ac150d00b3b3f83de5e3b21f72b598acf08273e4b9a889f199f4ce1b1d239b28659e6cd131 |
| kubernetes-client-windows-386.tar.gz | 05bf6e696da680bb8feec4f411f342a9661b6165f4f0c72c069871983f199418c4d4fa1e034136bc8be41c5fecc9934a123906f2d5666c09a876db16ae8c11ad |
| kubernetes-client-windows-amd64.tar.gz | b2097bc851f5d3504e562f68161910098b46c66c726b92b092a040acda965fed01f45e7b9e513a4259c7a5ebd65d7aa3e3b711f4179139a935720d91216ef5c2 |
| filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | 721bd09b64e5c8f220332417089a772d9073c0dc5cdfa240984cfeb0d681b4a02620fb3ebf1b9f6a82a4dd3423f5831c259d4bad502dce87f145e0a08cb73ee9 |
| kubernetes-server-linux-arm.tar.gz | e7638ce4b88b4282f0a157593cfe809fa9cc9139ea7ebae4762ef5ac1dfaa516903a8acb34a45937eb94b2699e5d4c68c639cbe40cbed2a6b97681aeace9948e |
| kubernetes-server-linux-arm64.tar.gz | 395566c4be3c2ca5b07e81221b3370bc7ccbef0879f96a9384650fcaf4f699f3b2744ba1d97ae42cc6c5d9e1a65a41a793a8b0c9e01a0a65f57c56b1420f8141 |
| kubernetes-server-linux-ppc64le.tar.gz | 90fcba066efd76d2f271a0eb26ed4d90483674d04f5e8cc39ec1e5b7f343311f2f1c40de386f35d3c69759628a1c7c075559c09b6c4542e42fbbe0daeb61a5fa |
| kubernetes-server-linux-s390x.tar.gz | b25014bcf4138722a710451f6e58ee57588b4d47fcceeda8f6866073c1cc08641082ec56e94b0c6d586c0835ce9b55d205d254436fc22a744b24d8c74e8e5cce |
| filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | 6925a71096530f7114a68e755d07cb8ba714bc60b477360c85d76d7b71d3a3c0b78a650877d81aae35b308ded27c8207b5fe72d990abc43db3aa8a7d6d7f94f4 |
| kubernetes-node-linux-arm.tar.gz | 073310e1ccf9a8af998d4c0402ae86bee4f253d2af233b0c45cea55902268c2fe7190a41a990b079e24536e9efa27b94249c3a9236531a166ba3ac06c0f26f92 |
| kubernetes-node-linux-arm64.tar.gz | c55e9aecef906e56a6003f441a7d336846edb269aed1c7a31cf834b0730508706e73ea0ae135c1604b0697c9e2582480fbfba8ba105152698c240e324da0cbd2 |
| kubernetes-node-linux-ppc64le.tar.gz | e89d72d27bb0a7f9133ef7310f455ba2b4c46e9852c43e0a981b68a413bcdd18de7168eb16d93cf87a5ada6a4958592d3be80c9be1e6895fa48e2f7fa70f188d |
| kubernetes-node-linux-s390x.tar.gz | 6ef8a25f2f80a806672057dc030654345e87d269babe7cf166f7443e04c0b3a9bc1928cbcf5abef1f0f0fcd37f3a727f789887dbbdae62f9d1fd90a71ed26b39 |
| kubernetes-node-windows-amd64.tar.gz | 22fd1cea6e0150c06dbdc7249635bbf93c4297565d5a9d13e653f9365cd61a0b8306312efc806d267c47be81621016b114510a269c622cccc916ecff4d10f33c |
- Clients may disable automatic compression by setting the `DisableCompression` field on their `rest.Config`. This is recommended when clients communicate primarily over high-bandwidth / low-latency networks where response compression does not improve end-to-end latency. (#80919, @smarterclayton)
- When specifying `--(kube|system)-reserved-cgroup` with `--cgroup-driver=systemd`, it is now possible to use the fully qualified cgroupfs name (i.e. `/test-cgroup.slice`). (#78793, @mattjmcnaughton)
- The `AdmissionReview` API sent to and received from admission webhooks has been promoted to `admission.k8s.io/v1`. Webhooks can specify a preference for receiving `v1` `AdmissionReview` objects with `admissionReviewVersions: ["v1","v1beta1"]`, and must respond with an API object in the same `apiVersion` they are sent. When webhooks use `admission.k8s.io/v1`, the following additional validation is performed on their responses: (#80231, @liggitt)
* response.patch and response.patchType are not permitted from validating admission webhooks
* apiVersion: "admission.k8s.io/v1" is required
* kind: "AdmissionReview" is required
* response.uid: "<value of request.uid>" is required
* response.patchType: "JSONPatch" is required (if response.patch is set)
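The response checks listed above can be sketched in a few lines. The struct fields below mirror the `AdmissionReview` response fields being checked, but this is an illustrative model, not the real `admission/v1` types or the actual apiserver validation code:

```go
package main

import "fmt"

// Response models only the AdmissionReview response fields the checks
// above look at (illustrative, not the real admission/v1 type).
type Response struct {
	UID       string
	Patch     []byte
	PatchType string
}

type Review struct {
	APIVersion string
	Kind       string
	Response   Response
}

// checkV1Response sketches the additional validation performed on
// admission.k8s.io/v1 webhook responses, as listed above.
func checkV1Response(r Review, requestUID string, validating bool) error {
	if r.APIVersion != "admission.k8s.io/v1" {
		return fmt.Errorf(`apiVersion "admission.k8s.io/v1" is required`)
	}
	if r.Kind != "AdmissionReview" {
		return fmt.Errorf(`kind "AdmissionReview" is required`)
	}
	if r.Response.UID != requestUID {
		return fmt.Errorf("response.uid must match request.uid")
	}
	if validating && (r.Response.Patch != nil || r.Response.PatchType != "") {
		return fmt.Errorf("validating webhooks may not set response.patch or response.patchType")
	}
	if r.Response.Patch != nil && r.Response.PatchType != "JSONPatch" {
		return fmt.Errorf(`response.patchType "JSONPatch" is required when response.patch is set`)
	}
	return nil
}

func main() {
	ok := Review{APIVersion: "admission.k8s.io/v1", Kind: "AdmissionReview", Response: Response{UID: "123"}}
	fmt.Println(checkV1Response(ok, "123", true)) // <nil>

	bad := ok
	bad.Response.Patch = []byte(`[]`)
	fmt.Println(checkV1Response(bad, "123", true) != nil) // true: patch from a validating webhook
}
```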
- Attempt to set the kubelet's hostname and internal IP if `--cloud-provider=external` and no node addresses exist. (#75229, @andrewsykim)
- … with `IPv6DualStack=true`. Additionally, for each worker node, the user should set the kubelet's feature gates using `nodeRegistration.kubeletExtraArgs` or `KUBELET_EXTRA_ARGS`. (#80531, @Arvinderpal)
- Fixed a bug in `kubeadm join --discovery-file` when using discovery files with embedded credentials. (#80675, @fabriziopandini)

| filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | 7dfa3f8b9e98e528e2b49ed9cca5e95f265b9e102faac636ff0c29e045689145be236b98406a62eb0385154dc0c1233cac049806c99c9e46590cad5aa729183f |
| kubernetes-src.tar.gz | 7cf14b92c96cab5fcda3115ec66b44562ca26ea6aa46bc7fa614fa66bda1bdf9ac1f3c94ef0dfa0e37c992c7187ecf4205b253f37f280857e88a318f8479c9a9 |
| filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | 4871756de2cd1add0b07ec1e577c500d18a59e2f761595b939e1d4e10fbe0a119479ecaaf53d75cb2138363deae23cc88cba24fe3018cec6a27a3182f37cae92 |
| kubernetes-client-darwin-amd64.tar.gz | dbd9ca5fd90652ffc1606f50029d711eb52d34b707b7c04f29201f85aa8a5081923a53585513634f3adb6ace2bc59be9d4ad2abc49fdc3790ef805378c111e68 |
| kubernetes-client-linux-386.tar.gz | 6b049098b1dc65416c5dcc30346b82e5cf69a1cdd7e7b065429a76d302ef4b2a1c8e2dc621e9d5c1a6395a1fbd97f196d99404810880d118576e7b94e5621e4c |
| kubernetes-client-linux-amd64.tar.gz | 7240a9d49e445e9fb0c9d360a9287933c6c6e7d81d6e11b0d645d3f9b6f3f1372cc343f03d10026518df5d6c95525e84c41b06a034c9ec2c9e306323dbd9325b |
| kubernetes-client-linux-arm.tar.gz | 947b0d9aeeef08961c0582b4c3c94b7ae1016d20b0c9f50af5fe760b3573f17497059511bcb57ac971a5bdadeb5c77dfd639d5745042ecc67541dd702ee7c657 |
| kubernetes-client-linux-arm64.tar.gz | aff0258a223f5061552d340cda36872e3cd7017368117bbb14dc0f8a3a4db8c715c11743bedd72189cd43082aa9ac1ced64a6337c2f174bdcbeef094b47e76b0 |
| kubernetes-client-linux-ppc64le.tar.gz | 3eabecd62290ae8d876ae45333777b2c9959e39461197dbe90e6ba07d0a4c50328cbdf44e77d2bd626e435ffc69593d0e8b807b36601c19dd1a1ef17e6810b4f |
| kubernetes-client-linux-s390x.tar.gz | 6651b2d95d0a8dd748c33c9e8018ab606b4061956cc2b6775bd0b008b04ea33df27be819ce6c391ceb2191b53acbbc088d602ed2d86bdd7a3a3fc1c8f876798a |
| kubernetes-client-windows-386.tar.gz | 4b6c11b7a318e5fcac19144f6ab1638126c299e08c7b908495591674abcf4c7dd16f63c74c7d901beff24006150d2a31e0f75e28a9e14d6d0d88a09dafb014f0 |
| kubernetes-client-windows-amd64.tar.gz | 760ae08da6045ae7089fb27a9324e77bed907662659364857e1a8d103d19ba50e80544d8c21a086738b15baebfd9a5fa78d63638eff7bbe725436c054ba649cc |
| filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | 69db41f3d79aa0581c36a3736ab8dc96c92127b82d3cf25c5effc675758fe713ca7aa7e5b414914f1bc73187c6cee5f76d76b74a2ee1c0e7fa61557328f1b8ef |
| kubernetes-server-linux-arm.tar.gz | ca302f53ee91ab4feb697bb34d360d0872a7abea59c5f28cceefe9237a914c77d68722b85743998ab12bf8e42005e63a1d1a441859c2426c1a8d745dd33f4276 |
| kubernetes-server-linux-arm64.tar.gz | 79ab1f0a542ce576ea6d81cd2a7c068da6674177b72f1b5f5e3ca47edfdb228f533683a073857b6bc53225a230d15d3ba4b0cb9b6d5d78a309aa6e24c2f6c500 |
| kubernetes-server-linux-ppc64le.tar.gz | fbe5b45326f1d03bcdd9ffd46ab454917d79f629ba23dae9d667d0c7741bc2f5db2960bf3c989bb75c19c9dc1609dacbb8a6dc9a440e5b192648e70db7f68721 |
| kubernetes-server-linux-s390x.tar.gz | eb13ac306793679a3a489136bb7eb6588472688b2bb2aa0e54e61647d8c9da6d3589c19e7ac434c24defa78cb65f7b72593eedec1e7431c7ecae872298efc4de |
| Filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | a4bde88f3e0f6233d04f04d380d5f612cd3c574bd66b9f3ee531fa76e3e0f1c6597edbc9fa61251a377e8230bce0ce6dc1cf57fd19080bb7d13f14a391b27fe8 |
| kubernetes-node-linux-arm.tar.gz | 7d72aa8c1d883b9f047e5b98dbb662bdfd314f9c06af4213068381ffaac116e68d1aad76327ead7a4fd97976ea72277cebcf765c56b265334cb3a02c83972ec1 |
| kubernetes-node-linux-arm64.tar.gz | c9380bb59ba26dcfe1ab52b5cb02e2d920313defda09ec7d19ccbc18f54def4b57cf941ac8a397392beb5836fdc12bc9600d4055f2cfd1319896cfc9631cab10 |
| kubernetes-node-linux-ppc64le.tar.gz | 7bcd79b368a62c24465fce7dcb024bb629eae034e09fb522fb43bb5798478ca2660a3ccc596b424325c6f69e675468900f3b41f3924e7ff453e3db40150b3c16 |
| kubernetes-node-linux-s390x.tar.gz | 9bda9dd24ee5ca65aaefece4213b46ef57cde4904542d94e6147542e42766f8b80fe24d99a6b8711bd7dbe00c415169a9f258f433c5f5345c2e17c2bb82f2670 |
| kubernetes-node-windows-amd64.tar.gz | d5906f229d2d8e99bdb37e7d155d54560b82ea28ce881c5a0cde8f8d20bff8fd2e82ea4b289ae3e58616d3ec8c23ac9b473cb714892a377feb87ecbce156147d |
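Release artifacts such as the tarballs above can be checked against their published sha512 hashes before unpacking. A minimal sketch using only the standard library (the file path and expected hash are placeholders, not values from the tables):

```python
import hashlib

def sha512_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-512 and return the hex digest."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_hex):
    """Compare a file's digest with the hash from the release table."""
    return sha512_of(path) == expected_hex.lower()
```

This is equivalent to running `sha512sum` on the downloaded file and comparing the output by hand.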
Action required: the container image tar files for amd64 will now contain the architecture in the RepoTags section of the manifest.json. (#80266, @javier-b-perez)
If you are using Docker manifests, this change has no impact on you.
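Tooling that consumes these image tars can read the architecture out of the RepoTags entries in manifest.json. A sketch, assuming a docker-save style manifest and an illustrative tag layout (the tag value below is an example, not taken from a release):

```python
import json

def repo_tags(manifest_json):
    """Extract all RepoTags entries from a docker-save style manifest.json."""
    return [tag for image in json.loads(manifest_json)
            for tag in image.get("RepoTags", [])]

# Hypothetical manifest content for illustration only.
sample = json.dumps([{"RepoTags": ["k8s.gcr.io/kube-apiserver-amd64:v1.16.0"]}])
```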
node-lease-renew-interval has been changed to 0.25 of the lease duration. (#80429, @gaorong)
--endpoint-updates-batch-period can be used to reduce the number of endpoint updates generated by pod changes. (#80509, @mborsz)
Remove the GetReference() and GetPartialReference() functions from pkg/api/ref, as the same functions also exist in staging/src/k8s.io/client-go/tools/ref. (#80361, @wojtek-t)
The api-approved.kubernetes.io annotation must be set to unapproved.* or a link to the pull request approving the schema. See https://github.com/kubernetes/enhancements/pull/1111 for more details. (#79992, @deads2k)
Passing an invalid policy name in the --cpu-manager-policy flag will now cause the kubelet to fail instead of simply ignoring the flag and running the cpumanager's default policy instead. (#80294, @klueska)
[]TopologySpreadConstraint is introduced into PodSpec to support the "Even Pods Spread" alpha feature. (#77327, @Huang-Wei)
| Filename | sha512 hash |
|---|---|
| kubernetes.tar.gz | 4834c52267414000fa93c0626bded5a969cf65d3d4681c20e5ae2c5f62002a51dfb8ee869484f141b147990915ba57be96108227f86c4e9f571b4b25e7ed0773 |
| kubernetes-src.tar.gz | 9329d51f5c73f830f3c895c2601bc78e51d2d412b928c9dae902e9ba8d46338f246a79329a27e4248ec81410ff103510ba9b605bb03e08a48414b2935d2c164b |
| Filename | sha512 hash |
|---|---|
| kubernetes-client-darwin-386.tar.gz | 3cedffb92a0fca4f0b2d41f8b09baa59dff58df96446e8eece4e1b81022d9fdda8da41b5f73a3468435474721f03cffc6e7beabb25216b089a991b68366c73bc |
| kubernetes-client-darwin-amd64.tar.gz | 14de6bb296b4d022f50778b160c98db3508c9c7230946e2af4eb2a1d662d45b86690e9e04bf3e592ec094e12bed1f2bb74cd59d769a0eaac3c81d9b80e0a79c8 |
| kubernetes-client-linux-386.tar.gz | 8b2b9fa55890895239b99fabb866babe50aca599591db1ecf9429e49925ae478b7c813b9d7704a20f41f2d50947c3b3deecb594544f1f3eae6c4e97ae9bb9b70 |
| kubernetes-client-linux-amd64.tar.gz | e927ac7b314777267b95e0871dd70c352ec0fc967ba221cb6cba523fa6f18d9d193e4ce92a1f9fa669f9c961de0e34d69e770ef745199ed3693647dd0d692e57 |
| kubernetes-client-linux-arm.tar.gz | 4a230a6d34e2ffd7df40c5b726fbcbb7ef1373d81733bfb75685b2448ed181eb49ef27668fc33700f30de88e5bbdcc1e52649b9d31c7940760f48c6e6eb2f403 |
| kubernetes-client-linux-arm64.tar.gz | 87c8d7185df23b3496ceb74606558d895a64daf0c41185c833a233e29216131baac6e356a57bb78293ed9d0396966ecc3b00789f2b66af352dc286b101bcc69a |
| kubernetes-client-linux-ppc64le.tar.gz | 16ea5efa2fc29bc7448a609a7118e7994e901ab26462aac52f03b4851d4c9d103ee12d2335360f8aa503ddbb2a71f3000f0fcb33597dd813df4f5ad5f4819fa9 |
| kubernetes-client-linux-s390x.tar.gz | 7390ad1682227a70550b20425fa5287fecf6a5d413493b03df3a7795614263e7883f30f3078bbb9fbd389d2a1dab073f8f401be89b82bd5861fa6b0aeda579eb |
| kubernetes-client-windows-386.tar.gz | 88251896dfe38e59699b879f643704c0195e7a5af2cb00078886545f49364a2e3b497590781f135b80d60e256bad3a4ea197211f4f061c98dee096f0845e7a9b |
| kubernetes-client-windows-amd64.tar.gz | 766b2a9bf097e45b2549536682cf25129110bd0562ab0df70e841ff8657dd7033119b0929e7a213454f90594b19b90fa57d89918cee33ceadba7d689449fe333 |
| Filename | sha512 hash |
|---|---|
| kubernetes-server-linux-amd64.tar.gz | dfd5c2609990c9b9b94249c654931b240dc072f2cc303e1e1d6dec1fddfb0a9e127e3898421ace00ab1947a3ad2f87cfd1266fd0b6193ef00f942269388ef372 |
| kubernetes-server-linux-arm.tar.gz | 7704c2d3c57950f184322263ac2be1649a0d737d176e7fed1897031d0efb8375805b5f12c7cf9ba87ac06ad8a635d6e399382d99f3cbb418961a4f0901465f50 |
| kubernetes-server-linux-arm64.tar.gz | fbbd87cc38cfb6429e3741bfd87ecec4b69b551df6fb7c121900ced4c1cd0bc77a317ca8abd41f71ffd7bc0b1c7144fecb22fa405d0b211b238df24d28599333 |
| kubernetes-server-linux-ppc64le.tar.gz | cfed5b936eb2fe44df5d0c9c6484bee38ef370fb1258522e8c62fb6a526e9440c1dc768d8bf33403451ae00519cab1450444da854fd6c6a37665ce925c4e7d69 |
| kubernetes-server-linux-s390x.tar.gz | 317681141734347260ad9f918fa4b67e48751f5a7df64a848d2a83c79a4e9dba269c51804b09444463ba88a2c0efa1c307795cd8f06ed840964eb2c725a4ecc3 |
| Filename | sha512 hash |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | b3b1013453d35251b8fc4759f6ac26bdeb37f14a98697078535f7f902e8ebca581b5629bbb4493188a7e6077eb5afc61cf275f42bf4d9f503b70bfc58b9730b2 |
| kubernetes-node-linux-arm.tar.gz | 0bacc1791d260d2863ab768b48daf66f0f7f89eeee70e68dd515b05fc9d7f14b466382fe16fa84a103e0023324f681767489d9485560baf9eb80fe0e7ffab503 |
| kubernetes-node-linux-arm64.tar.gz | 73bd70cb9d27ce424828a95d715c16fd9dd22396dbe1dfe721eb0aea9e186ec46e6978956613b0978a8da3c22df39790739b038991c0192281881fce41d7c9f1 |
| kubernetes-node-linux-ppc64le.tar.gz | a865f98838143dc7e1e12d1e258e5f5f2855fcf6e88488fb164ad62cf886d8e2a47fdf186ad6b55172f73826ae19da9b2642b9a0df0fa08f9351a66aeef3cf17 |
| kubernetes-node-linux-s390x.tar.gz | d2f9f746ed0fe00be982a847a3ae1b6a698d5c506be1d3171156902140fec64642ec6d99aa68de08bdc7d65c9a35ac2c36bda53c4db873cb8e7edc419a4ab958 |
| kubernetes-node-windows-amd64.tar.gz | 37f48a6d8174f38668bc41c81222615942bfe07e01f319bdfed409f83a3de3773dceb09fd86330018bb05f830e165e7bd85b3d23d26a50227895e4ec07f8ab98 |
The --make-symlinks flag has been removed. (#80017, @Pothulapati)
The node labels beta.kubernetes.io/metadata-proxy-ready, beta.kubernetes.io/masq-agent-ds-ready and beta.kubernetes.io/kube-proxy-ds-ready are no longer added on new nodes. (#79305, @paivagustavo)
* The ip-masq-agent addon now uses the label node.kubernetes.io/masq-agent-ds-ready instead of beta.kubernetes.io/masq-agent-ds-ready as its node selector.
* The kube-proxy addon now uses the label node.kubernetes.io/kube-proxy-ds-ready instead of beta.kubernetes.io/kube-proxy-ds-ready as its node selector.
* The metadata-proxy addon now uses the label cloud.google.com/metadata-proxy-ready instead of beta.kubernetes.io/metadata-proxy-ready as its node selector.
* The kubelet removes the ability to set kubernetes.io or k8s.io labels via --node-labels other than the specifically allowed labels/prefixes.
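Cluster tooling that still selects on the old beta labels could translate them with a simple mapping. A sketch based on the renames listed above (the helper function itself is illustrative, not part of any Kubernetes tooling):

```python
# Old beta node labels and their replacements, per the addon changes above.
LABEL_RENAMES = {
    "beta.kubernetes.io/masq-agent-ds-ready": "node.kubernetes.io/masq-agent-ds-ready",
    "beta.kubernetes.io/kube-proxy-ds-ready": "node.kubernetes.io/kube-proxy-ds-ready",
    "beta.kubernetes.io/metadata-proxy-ready": "cloud.google.com/metadata-proxy-ready",
}

def migrate_node_selector(selector):
    """Return a copy of a nodeSelector dict with deprecated label keys renamed."""
    return {LABEL_RENAMES.get(key, key): value for key, value in selector.items()}
```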
The following APIs are no longer served by default:
* All resources under apps/v1beta1 and apps/v1beta2 - use apps/v1 instead
* daemonsets, deployments, replicasets resources under extensions/v1beta1 - use apps/v1 instead
* networkpolicies resources under extensions/v1beta1 - use networking.k8s.io/v1 instead
* podsecuritypolicies resources under extensions/v1beta1 - use policy/v1beta1 instead
Serving these resources can be temporarily re-enabled using the --runtime-config apiserver flag:
apps/v1beta1=true, apps/v1beta2=true, extensions/v1beta1/daemonsets=true, extensions/v1beta1/deployments=true, extensions/v1beta1/replicasets=true, extensions/v1beta1/networkpolicies=true, extensions/v1beta1/podsecuritypolicies=true
The deprecated --resource-container flag has been removed from kube-proxy, and specifying it will now cause an error. (#78294, @vllry) The behavior is now as if you specified --resource-container="". If you previously specified a non-empty --resource-container, you can no longer do so as of Kubernetes 1.16.
kubeadm: add --kubernetes-version to kubeadm init phase certs ca and kubeadm init phase kubeconfig. (#80115, @gyuho)
… to v=5. (#80100, @andrewsykim)
… the upgrade diff operation. (#80025, @SataQiu)
… support for the OpenAPI v2 spec served at /openapi/v2. (#79843, @sttts)
The generate-internal-groups.sh script in k8s.io/code-generator will generate OpenAPI definitions by default in pkg/generated/openapi. Additional API group dependencies can be added via OPENAPI_EXTRA_PACKAGES=<group>/<version> <group2>/<version2>....
The Cinder and ScaleIO volume drivers have been deprecated and will be removed in a future release. (#80099, @dims)
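The `--runtime-config` re-enable value listed above is a plain comma-separated list, so it can be assembled programmatically. A sketch (the group/version strings are exactly those listed above; the helper is illustrative):

```python
# API group/versions that are no longer served by default, per the entry above.
REENABLE = [
    "apps/v1beta1=true",
    "apps/v1beta2=true",
    "extensions/v1beta1/daemonsets=true",
    "extensions/v1beta1/deployments=true",
    "extensions/v1beta1/replicasets=true",
    "extensions/v1beta1/networkpolicies=true",
    "extensions/v1beta1/podsecuritypolicies=true",
]

def runtime_config_flag(entries):
    """Build the kube-apiserver --runtime-config argument string."""
    return "--runtime-config=" + ",".join(entries)
```

The resulting string is passed to kube-apiserver as a single flag.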
Added --shutdown-delay-duration to kube-apiserver in order to delay a graceful shutdown. /healthz will keep returning success during this time and requests are served normally, but /readyz will return failure immediately. This delay can be used to allow the SDN to update iptables on all nodes and stop sending traffic. (#74416, @sttts)
The MutatingWebhookConfiguration and ValidatingWebhookConfiguration APIs have been promoted to admissionregistration.k8s.io/v1: (#79549, @liggitt)
* The failurePolicy default changed from Ignore to Fail for v1
* The matchPolicy default changed from Exact to Equivalent for v1
* The timeout default changed from 30s to 10s for v1
* The sideEffects default value is removed, the field is made required, and only None and NoneOnDryRun are permitted for v1
* The admissionReviewVersions default value is removed and the field made required for v1 (supported versions for AdmissionReview are v1 and v1beta1)
* The name field for specified webhooks must be unique for MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects created via admissionregistration.k8s.io/v1
The admissionregistration.k8s.io/v1beta1 versions of MutatingWebhookConfiguration and ValidatingWebhookConfiguration are deprecated and will no longer be served in v1.19.
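The default changes above can be summarized in code. A sketch that applies the v1beta1 vs v1 defaults to a webhook definition dict (field names follow the API; the helper itself is illustrative, not real apiserver defaulting code):

```python
# Defaults that differ between admissionregistration.k8s.io versions,
# per the promotion notes above.
DEFAULTS = {
    "v1beta1": {"failurePolicy": "Ignore", "matchPolicy": "Exact", "timeoutSeconds": 30},
    "v1": {"failurePolicy": "Fail", "matchPolicy": "Equivalent", "timeoutSeconds": 10},
}

def apply_defaults(webhook, version):
    """Fill unset webhook fields with that API version's defaults."""
    filled = dict(DEFAULTS[version])
    filled.update(webhook)
    # sideEffects and admissionReviewVersions have no default in v1:
    # the fields are required.
    if version == "v1":
        for required in ("sideEffects", "admissionReviewVersions"):
            if required not in filled:
                raise ValueError(required + " is required in v1")
    return filled
```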
Add kubectl replace --raw and kubectl delete --raw to have parity with create and get. (#79724, @deads2k)
Clients that send Accept-Encoding: gzip will now receive a GZIP compressed response body if the API call was larger than 128KB. Go clients automatically request gzip-encoding by default and should see reduced transfer times for very large API requests. Clients in other languages may need to make changes to benefit from compression. (#77449, @smarterclayton)
Fixed an issue with aggregated APIs backed by services that respond to requests for / with non-2xx HTTP responses. (#79895, @deads2k)
Add a new counter apiserver_watch_events_total that can be used to understand the number of watch events in the system. (#78732, @mborsz)
A new client k8s.io/client-go/metadata.Client has been added for accessing objects generically. This client makes it easier to retrieve only the metadata (the metadata sub-section) from resources on the cluster in an efficient manner for use cases that deal with objects generically, like the garbage collector, quota, or the namespace controller. The client asks the server to return a meta.k8s.io/v1 PartialObjectMetadata object for list, get, delete, watch, and patch operations on both normal APIs and custom resources, which can be encoded in protobuf for additional work. If the server does not yet support this API the client will gracefully fall back to JSON and transform the response objects into PartialObjectMetadata. (#77819, @smarterclayton)
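The gzip change above only applies to large responses, where compression pays off most. The payoff on a large, repetitive JSON list body can be sketched with the standard library (the sizes here are illustrative and do not reproduce the apiserver's exact 128KB accounting):

```python
import gzip
import json

# A large, repetitive JSON body, similar in spirit to a big List response.
body = json.dumps(
    [{"name": "pod-%d" % i, "namespace": "default"} for i in range(5000)]
).encode()

compressed = gzip.compress(body)
# Repetitive API-style JSON typically compresses to a small fraction
# of its original size, which is why large list responses benefit.
ratio = len(compressed) / len(body)
```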
kubeadm: provide the --control-plane-endpoint flag for controlPlaneEndpoint. (#79270, @SataQiu)
To configure the controller manager to use IPv6 dual stack: --cluster-cidr="<cidr1>,<cidr2>". (#73977, @khenidak)
When using the conformance test image, a new environment variable E2E_USE_GO_RUNNER will cause the tests to be run with the new Golang-based test runner rather than the current bash wrapper. (#79284, @johnSchnake)
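The dual-stack --cluster-cidr value above is simply two comma-separated CIDRs, one IPv4 and one IPv6. Validating such a value with the standard library (the example CIDRs are illustrative):

```python
import ipaddress

def parse_cluster_cidr(value):
    """Split a dual-stack --cluster-cidr value and parse each CIDR."""
    return [ipaddress.ip_network(cidr.strip()) for cidr in value.split(",")]

# Hypothetical dual-stack configuration for illustration.
networks = parse_cluster_cidr("10.244.0.0/16,fd00:1234::/64")
```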
kubeadm: prevent PSP blocking of upgrade image prepull by using a non-root user. (#77792, @neolit123)
The kubelet now accepts a --cni-cache-dir option, which defaults to /var/lib/cni/cache, where CNI stores cache files. (#78908, @dcbw)
Update Azure API versions (containerregistry to 2018-09-01, network to 2018-08-01). (#79583, @justaugustus)
Fix a possible fd leak and closing of dirs in doSafeMakeDir. (#79534, @odinuge)
kubeadm: fix the bug that the --cri-socket flag does not work for kubeadm reset. (#79498, @SataQiu)
kubectl logs --selector will support --tail=-1. (#74943, @JishanXing)
Introduce a new admission controller for RuntimeClass. Initially, RuntimeClass will be used to apply the pod overhead associated with a given RuntimeClass to the Pod.Spec if a corresponding RuntimeClassName is specified. (#78484, @egernst)
Fix kubelet errors in AArch64 with huge page sizes smaller than 1MiB. (#78495, @odinuge)
The alpha metadata.initializers field, deprecated in 1.13, has been removed. (#79504, @yue9944882)
Fix duplicate error messages in CLI commands. (#79493, @odinuge)
The default resourceGroup should be used when the value of the annotation azure-load-balancer-resource-group is an empty string. (#79514, @feiskyer)
Fixes the output of kubectl get --watch-only when watching a single resource. (#79345, @liggitt)
RateLimiter adds a context-aware method, fixing a client-go request goroutine backlog in async timeout scenarios. (#79375, @answer1991)
Fix a bug where kubelet would not retry pod sandbox creation when the restart policy of the pod is Never. (#79451, @yujuhong)
Fix a CRD validation error on the 'items' field. (#76124, @tossmilestone)
The CRD handler now properly re-creates stale CR storage to reflect CRD updates. (#79114, @roycaihw)
Integrated volume limits for in-tree and CSI volumes into one scheduler predicate. (#77595, @bertinatto)
Fix a bug in the server-side printer that could cause kube-apiserver to panic. (#79349, @roycaihw)
Mounts /home/kubernetes/bin/nvidia/vulkan/icd.d on the host to /etc/vulkan/icd.d inside containers requesting GPU. (#78868, @chardch)
Remove the CSIPersistentVolume feature gate. (#79309, @draveness)
Init container resource requests now impact pod QoS class. (#75223, @sjenning)
Correct the maximum allowed insecure bind port for the kube-scheduler and kube-apiserver to 65535. (#79346, @ncdc)
Fix removing the etcd member from the cluster during a kubeadm reset. (#79326, @bradbeam)
Remove the KubeletPluginsWatcher feature gate. (#79310, @draveness)
Remove the HugePages, VolumeScheduling, CustomPodDNS and PodReadinessGates feature gates. (#79307, @draveness)
The GA PodPriority feature gate is now on by default and cannot be disabled. The feature gate will be removed in v1.18. (#79262, @draveness)
Remove the pids cgroup controller requirement when related feature gates are disabled. (#79073, @rafatio)
Add the Bind extension point to the scheduling framework. (#78513, @chenchun)
If targetPort is changed, that will be processed by the service controller. (#77712, @Sn0rt)
Update to use Go 1.12.6. (#78958, @tao12345666333)
kubeadm: fix a potential panic if kubeadm discovers an invalid, existing kubeconfig file. (#79165, @neolit123)
Fix kubelet failing to delete an orphaned pod directory when the kubelet's pods directory (default /var/lib/kubelet/pods) is symbolically linked to another disk device's directory. (#79094, @gaorong)
Add the Overhead field to the PodSpec and RuntimeClass types as part of the Pod Overhead KEP. (#76968, @egernst)
Fix the pod list return value of framework.WaitForPodsWithLabelRunningReady. (#78687, @pohly)
The behavior of the default handler for 404 requests from the GCE Ingress load balancer is slightly modified: it now exports metrics using Prometheus. (#79106, @vbannai) The exported metrics include:
* http_404_request_total (the number of 404 requests handled)
* http_404_request_duration_ms (the time the server took to respond, in milliseconds)
The kube-apiserver has improved behavior for both startup and shutdown sequences and now exposes /readyz for readiness checking. Readyz includes all existing healthz checks but also adds a shutdown check. When a cluster admin initiates a shutdown, the kube-apiserver will try to process existing requests (for the duration of the request timeout) before killing the apiserver process. (#78458, @logicalhan)
Additionally, the kube-apiserver now takes an optional flag, --maximum-startup-sequence-duration. This allows you to explicitly define an upper bound on the apiserver startup sequence before healthz begins to fail. By keeping the kubelet liveness initial delay short, this enables quick kubelet recovery if the startup sequence does not complete in the expected time frame, even in the presence of long startup sequences (such as RBAC initialization). When this flag's value is zero, the kube-apiserver behavior is backwards compatible (this is the default).
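The /healthz vs /readyz split described above can be modeled as a tiny state machine. A sketch of the semantics only, not the apiserver's real implementation:

```python
class ApiServerHealth:
    """Models the /healthz vs /readyz behavior during graceful shutdown."""

    def __init__(self):
        self.shutting_down = False

    def healthz(self):
        # /healthz keeps succeeding during the shutdown delay so the
        # process is not killed while in-flight requests drain.
        return 200

    def readyz(self):
        # /readyz fails immediately once shutdown starts, so load
        # balancers stop sending new traffic.
        return 500 if self.shutting_down else 200

    def begin_shutdown(self):
        self.shutting_down = True
```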
Fix: make Azure disk URIs case insensitive. (#79020, @andyzhangx)
Enable cadvisor ProcessMetrics collecting. (#79002, @jiayingz)
Fixes a bug where kubectl set config hangs and uses 100% CPU on some invalid property names. (#79000, @pswica)
Fix a string comparison bug in IPVS graceful termination where UDP real servers were not deleted. (#78999, @andrewsykim)
The Reflector watchHandler warning log 'The resourceVersion for the provided watch is too old.' is now logged as Info. (#78991, @sallyom)
Fix a bug where pods were not deleted from unmatched nodes by the daemon controller. (#78974, @DaiHao)
Volume expansion is enabled in the default GCE storageclass. (#78672, @msau42)
kubeadm: use the service-cidr flag to pass the desired service CIDR to the kube-controller-manager via its service-cluster-ip-range flag. (#78625, @Arvinderpal)
kubeadm: introduce deterministic ordering for certificate generation in the phase command kubeadm init phase certs. (#78556, @neolit123)
Add the Pre-filter extension point to the scheduling framework. (#78005, @ahg-g)
Fix a pod stuck issue due to a corrupt mnt point in the flexvol plugin; call Unmount if PathExists returns any error. (#75234, @andyzhangx)