Install etcd on dedicated nodes
Since versions 1.31.1, 1.30.2, and 1.29.7, SD supports deploying etcd on dedicated nodes, separate from the Kubernetes control plane nodes.
This feature only applies to the OnPremises provider.
By default, in SD etcd runs on the control plane nodes alongside the Kubernetes API server, controller manager, and scheduler. This guide explains how to configure etcd to run on separate nodes instead.
Planning
Before starting, you need to plan your infrastructure carefully. For development or testing environments, etcd typically works well with minimal resources. However, production clusters running etcd on dedicated nodes need proper hardware planning. Refer to the etcd hardware recommendations for detailed guidelines on sizing your production cluster.
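etcd is particularly sensitive to disk write latency, so it is worth benchmarking a candidate node's data disk before committing to hardware. The etcd documentation suggests an fio-based check along these lines (a sketch; fio being installed and /var/lib/etcd as the data directory are assumptions, adjust to your environment):

```shell
# Sketch: measure fdatasync write latency on the disk etcd will use.
# Assumes fio is installed; /var/lib/etcd is a placeholder for your data directory.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd --size=22m --bs=2300 --name=etcd-disk-check
```

Look at the reported fdatasync percentiles in the output; the commonly cited target is a 99th percentile below roughly 10 ms.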
Installation
Create the configuration file
Create an SD cluster furyctl.yaml file with default values using the OnPremises provider. You'll now customize it for this use case.
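One way to scaffold this file is furyctl's config generator (a sketch; the version value below is only an example, use the SD release you are actually installing):

```shell
# Sketch: generate a furyctl.yaml skeleton for the OnPremises provider.
# v1.31.1 is an example value; substitute the SD version you are installing.
furyctl create config --kind OnPremises --version v1.31.1
```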
Configure etcd nodes
The key to deploying etcd on dedicated nodes is the .spec.kubernetes.etcd section. When this section is present, etcd will be deployed on the specified nodes instead of on control plane nodes.
Edit your furyctl.yaml and add the etcd configuration:
spec:
  kubernetes:
    etcd:
      hosts:
        - name: etcd1
          ip: 192.168.1.50
        - name: etcd2
          ip: 192.168.1.51
        - name: etcd3
          ip: 192.168.1.52
Parameters:
- hosts: array of etcd node definitions (minimum 1, recommended 3 or 5)
- name: hostname for the etcd node (used for identification in the cluster)
- ip: IP address of the etcd node
The .spec.kubernetes.etcd.hosts array is immutable after deployment. You cannot add, remove, or change etcd nodes after the cluster is created. Plan your etcd infrastructure carefully before installation.
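The recommendation for 3 or 5 nodes follows from etcd's Raft quorum rule: a write must be acknowledged by a majority of members, so a cluster of n nodes tolerates n − (⌊n/2⌋ + 1) failures, and even sizes add no extra fault tolerance. A quick sketch of the arithmetic:

```shell
# Quorum = floor(n/2) + 1; failures tolerated = n - quorum.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerance=$(( n - quorum ))
  echo "size=$n quorum=$quorum tolerates=$tolerance failure(s)"
done
# size=1 quorum=1 tolerates=0 failure(s)
# size=3 quorum=2 tolerates=1 failure(s)
# size=5 quorum=3 tolerates=2 failure(s)
```

This is why a 4-node cluster is no more resilient than a 3-node one: both tolerate a single failure.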
Configure control plane nodes
Continue configuring your control plane nodes as usual.
spec:
  kubernetes:
    masters:
      hosts:
        - name: master1
          ip: 192.168.1.10
        - name: master2
          ip: 192.168.1.11
        - name: master3
          ip: 192.168.1.12
The .spec.kubernetes.masters.hosts array is also immutable after deployment when using dedicated etcd nodes.
Complete the setup
Continue configuring the rest of your cluster:
- SSH configuration: set up the SSH user and key path in the .spec.kubernetes.ssh section
- Load balancers: configure load balancers, or disable them if managing your own
- Worker nodes: define your worker node groups in .spec.kubernetes.nodes
- Networking: configure Pod and Service CIDRs if needed
- Distribution modules: configure the SD modules you want to install
Example configuration
Here's an example showing the key sections:
apiVersion: kfd.sighup.io/v1alpha2
kind: OnPremises
metadata:
  name: my-cluster
spec:
  kubernetes:
    ssh:
      username: example
      keyPath: ~/.ssh/example-key
    podCidr: 10.244.0.0/16
    svcCidr: 10.96.0.0/16
    etcd:
      hosts:
        - name: etcd1
          ip: 192.168.1.50
        - name: etcd2
          ip: 192.168.1.51
        - name: etcd3
          ip: 192.168.1.52
    masters:
      hosts:
        - name: master1
          ip: 192.168.1.10
        - name: master2
          ip: 192.168.1.11
        - name: master3
          ip: 192.168.1.12
    nodes:
      - name: worker
        hosts:
          - name: worker1
            ip: 192.168.1.20
          - name: worker2
            ip: 192.168.1.21
          - name: worker3
            ip: 192.168.1.22
  distribution:
    modules:
      networking:
        type: calico
      ingress:
        baseDomain: example.com
        nginx:
          type: dual
Important considerations
No migration path
No migration path is supported, either from control plane nodes to dedicated etcd nodes or vice versa.
Once you deploy a cluster with a specific etcd configuration, you cannot change it. If you need to switch between architectures, you must create a new cluster and migrate your workloads.
Apply the configuration
Before applying, make sure you have:
- Created the PKI with furyctl create pki (see the OnPremises provider documentation)
- Configured SSH access to your machines by setting up the .spec.kubernetes.ssh section of the config file
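It can save a failed run to confirm SSH reachability from the machine where you run furyctl (a sketch using the example user, key path, and etcd IPs from this guide; substitute your own values):

```shell
# Sketch: verify passwordless SSH to each etcd node before running furyctl.
# The user, key path, and IPs are the example values from this guide.
for ip in 192.168.1.50 192.168.1.51 192.168.1.52; do
  if ssh -i ~/.ssh/example-key -o BatchMode=yes "example@$ip" true; then
    echo "$ip: ok"
  else
    echo "$ip: SSH failed"
  fi
done
```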
Then apply the configuration:
furyctl apply
After installation completes, verify etcd is running correctly:
sudo ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://192.168.1.50:2379,https://192.168.1.51:2379,https://192.168.1.52:2379 \
  --cacert=/etc/etcd/pki/etcd/ca.crt \
  --cert=/etc/etcd/pki/etcd/server.crt \
  --key=/etc/etcd/pki/etcd/server.key \
  endpoint health
All etcd endpoints should report as healthy.
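Beyond endpoint health, you can also check that all three members joined the cluster, using the same certificate flags (a sketch; the paths assume the layout used in this guide):

```shell
# Sketch: list etcd members in table form, using the same certificates
# and paths as the health check in this guide.
sudo ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://192.168.1.50:2379 \
  --cacert=/etc/etcd/pki/etcd/ca.crt \
  --cert=/etc/etcd/pki/etcd/server.crt \
  --key=/etc/etcd/pki/etcd/server.key \
  member list -w table
```

Each of the three hosts from .spec.kubernetes.etcd.hosts should appear as a started member.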
Follow the full installation guide in the OnPremises provider documentation for complete details on SSH setup, load balancers, and other required configurations.