# Kubernetes Fury Distribution Release v1.23.1
Welcome to the KFD release v1.23.1. This is a patch release fixing bugs in all the core modules.
The team has been working to make the release upgrade as simple as possible, so carefully read the upgrade path of each core module listed below, along with the upgrade path of the distribution.
⚠️ If upgrading from v1.23.0, you must delete all the objects (StatefulSet, Deployment, DaemonSet, etc.) as specified in the release notes of the modules before upgrading to v1.23.1.
This distribution is maintained with ❤️ by the SIGHUP team, and is battle-tested in production environments.
## New Features

### Core Module Updates

- Removed `commonLabels` from all the `kustomize` katalogs
- networking 📦 core module: v1.8.0 -> v1.8.2
  - No updates on the components of the module
  - `commonLabels` bugfix
- monitoring 📦 core module: v1.14.0 -> v1.14.1
  - No updates on the components of the module
  - `commonLabels` bugfix
- logging 📦 core module: v1.10.0 -> v1.10.2
  - No updates on the components of the module
  - `commonLabels` bugfix
- ingress 📦 core module: v1.12.0 -> v1.12.2
  - Update [forecastle] from version `1.0.73` to `1.0.75`
  - `commonLabels` bugfix
- dr 📦 core module: v1.9.0 -> v1.9.2
  - No updates on the components of the module
  - `commonLabels` bugfix
- OPA 📦 core module: v1.6.0 -> v1.6.2
  - Fixed an issue present only in `v1.6.0` with a missing volume mount that broke the audit process (policy enforcement was unaffected)
  - `commonLabels` bugfix
Please refer to the individual release notes for detailed information.
## Upgrade path

### Katalog Procedure
To upgrade the distribution from v1.23.0 to v1.23.1, please follow the instructions written in the release notes of each core module.

To upgrade this distribution from v1.7.x to v1.23.1, you need to download this new version, vendor the dependencies, and finally apply the `kustomize` project:

```bash
furyctl vendor -H
kustomize build . | kubectl apply -f -
```
NOTE: The upgrade takes a few minutes (depending on the cluster size), and you should expect some downtime during the upgrade process.
## Test it

If you want to test the distribution in a test environment, spin up a `kind` cluster, then deploy all rendered manifests:
```bash
$ kind version
kind v0.11.0 go1.16.4 darwin/amd64
$ curl -Ls https://github.com/sighupio/fury-distribution/releases/download/v1.23.1/katalog/tests/config/kind-config | kind create cluster --image registry.sighup.io/fury/kindest/node:v1.23.1 --config -
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.20.1) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
```
```bash
$ kubectl apply -f https://github.com/sighupio/fury-distribution/releases/download/v1.23.1/fury-distribution-v1.23.1.yml
namespace/cert-manager created
namespace/gatekeeper-system created
namespace/ingress-nginx created
namespace/logging created
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
<TRUNCATED OUTPUT>
```
NOTE: Run `kubectl apply` multiple times until you see no errors in the console.
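The reason repeated applies converge is that each pass registers more CRDs, so custom resources rejected on an earlier pass are admitted on a later one. The pattern can be sketched generically (the `retry` helper below is hypothetical, not a furyctl or kubectl feature):

```shell
#!/usr/bin/env bash
# Hedged sketch: retry a command until it exits cleanly, up to a limit.
retry() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0   # done as soon as the command succeeds
    sleep 1            # give the API server time to register new CRDs
  done
  return 1             # still failing after all attempts
}

# Intended usage against the distribution manifest (not executed here):
#   retry 5 kubectl apply -f fury-distribution-v1.23.1.yml
```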