Fury on EKS
This step-by-step tutorial guides you to deploy the Kubernetes Fury Distribution on an EKS cluster on AWS.
This tutorial covers the following steps:
- Deploy an EKS Kubernetes cluster on AWS with `furyctl`
- Download the latest version of Fury with `furyctl`
- Install the Fury distribution
- Explore some features of the distribution
- (optional) Deploy additional modules of the Fury distribution
- Teardown of the environment
⚠️ AWS charges you to provision the resources used in this tutorial. You should be charged only a few dollars, but we are not responsible for any costs incurred.
⚠️ Remember to stop all the instances by following all the steps listed in the teardown phase.
💻 If you prefer trying Fury in a local environment, check out the Fury on Minikube tutorial.
Prerequisites
This tutorial assumes some basic familiarity with Kubernetes and AWS. Some experience with Terraform is helpful but not required.
To follow this tutorial, you need:
- AWS Access Credentials of an AWS Account with the following IAM permissions.
- Docker - the tutorial uses a Docker image containing `furyctl` and all the necessary tools to follow it.
- OpenVPN Client - Tunnelblick (on macOS) or OpenVPN Connect (for other OSes) are recommended.
- AWS S3 Bucket (optional) to store the Terraform state.
- GitHub account with an SSH key configured (see the quick check after this list).
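If you are unsure whether your SSH key works with GitHub, a quick way to check from your host machine is the standard GitHub connectivity test:
ssh -T git@github.com
A greeting that contains your username means the key is configured correctly.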
Setup and initialize the environment
- Open a terminal
- Clone the fury getting started repository containing the example code used in this tutorial:
git clone https://github.com/sighupio/fury-getting-started/
cd fury-getting-started/fury-on-eks
- Run the `fury-getting-started` docker image:
docker run -ti --rm \
  -v $PWD:/demo \
  registry.sighup.io/delivery/fury-getting-started
- Setup your AWS credentials by exporting the following environment variables:
export AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
export AWS_DEFAULT_REGION=<YOUR_AWS_REGION>
Alternatively, authenticate with AWS by running `aws configure` in your terminal. When prompted, enter your AWS Access Key ID, Secret Access Key, region, and output format.
$ aws configure
AWS Access Key ID [None]: <YOUR_AWS_ACCESS_KEY_ID>
AWS Secret Access Key [None]: <YOUR_AWS_SECRET_ACCESS_KEY>
Default region name [None]: <YOUR_AWS_REGION>
Default output format [None]: json
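To double-check that the credentials are picked up correctly, you can ask AWS which identity is in use:
aws sts get-caller-identity
The command should print the account ID and the ARN of the user associated with the access key.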
You are all set ✌️.
Step 1 - Automatic provisioning of an EKS Cluster with furyctl
`furyctl` is a command-line tool developed by SIGHUP to support:
- the automatic provisioning of Kubernetes clusters in various cloud environments
- the installation of the Fury distribution
The provisioning process is divided into two phases:
- Bootstrap provisioning phase
- Cluster provisioning phase
Bootstrap provisioning phase
In the bootstrap phase, `furyctl` automatically provisions:
- Virtual Private Cloud (VPC) in a specified CIDR range with public and private subnets
- EC2 instance bastion host with an OpenVPN Server
- All the required networking gateways and routes
More details about the bootstrap provisioner can be found here.
Configure the bootstrap provisioner
The bootstrap provisioner takes a `bootstrap.yml` file as input. This file provides the bootstrap provisioner with all the parameters needed to deploy the networking infrastructure.
For this tutorial, use the `bootstrap.yml` template located at `/demo/infrastructure/bootstrap.yml`:
kind: Bootstrap
metadata:
name: fury-eks-demo
spec:
networkCIDR: 10.0.0.0/16
publicSubnetsCIDRs:
- 10.0.1.0/24
- 10.0.2.0/24
- 10.0.3.0/24
privateSubnetsCIDRs:
- 10.0.101.0/24
- 10.0.102.0/24
- 10.0.103.0/24
vpn:
instances: 1
port: 1194
instanceType: t3.micro
diskSize: 50
operatorName: fury
dhParamsBits: 2048
subnetCIDR: 172.16.0.0/16
sshUsers:
- <GITHUB_USER>
executor:
# state:
# backend: s3
# config:
# bucket: <S3_BUCKET>
# key: furyctl/bootstrap
# region: <S3_BUCKET_REGION>
provisioner: aws
Open the `/demo/infrastructure/bootstrap.yml` file with a text editor of your choice and:
- Replace the field `<GITHUB_USER>` with your actual GitHub username
- Ensure that the VPC and subnet ranges are not already in use. If they are, specify different values in the fields: `networkCIDR`, `publicSubnetsCIDRs`, `privateSubnetsCIDRs`
Leave the rest as configured. More details about each field can be found here.
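If you are unsure whether the default ranges overlap with VPCs already present in your account, a quick check with the AWS CLI lists the CIDR blocks currently in use:
aws ec2 describe-vpcs --query "Vpcs[].CidrBlock" --output text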
(optional) Create S3 Bucket to hold the Terraform remote state
Even though this is a tutorial, it is always good practice to use a remote Terraform state rather than a local one. If you are not familiar with Terraform, you can skip this section.
- Choose a unique name and an AWS region for the S3 Bucket:
export S3_BUCKET=fury-demo-eks # Use a different name
export S3_BUCKET_REGION=$AWS_DEFAULT_REGION # You can use the same region as before
- Create the S3 bucket using the AWS CLI:
aws s3api create-bucket \
--bucket $S3_BUCKET \
--region $S3_BUCKET_REGION \
--create-bucket-configuration LocationConstraint=$S3_BUCKET_REGION
ℹ️ You might need to grant the user permissions on S3. See also the optional versioning note at the end of this section.
- Once created, uncomment the `spec.executor.state` block in the `/demo/infrastructure/bootstrap.yml` file:
...
executor:
state:
backend: s3
config:
bucket: <S3_BUCKET>
key: fury/bootstrap
region: <S3_BUCKET_REGION>
- Replace the `<S3_BUCKET>` and `<S3_BUCKET_REGION>` placeholders with the correct values from the previous commands:
...
executor:
state:
backend: s3
config:
bucket: fury-demo-eks # example value
key: fury/bootstrap
region: eu-central-1 # example value
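Optionally, you can also enable versioning on the bucket so that previous versions of the Terraform state are retained. This is not required for the tutorial, just a common safeguard:
aws s3api put-bucket-versioning \
  --bucket $S3_BUCKET \
  --versioning-configuration Status=Enabled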
Provision networking infrastructure
- Initialize the bootstrap provisioner:
cd /demo/infrastructure/
furyctl bootstrap init
In case you run into errors, you can re-initialize the bootstrap provisioner by adding the `--reset` flag:
furyctl bootstrap init --reset
- If the initialization succeeds, apply the bootstrap provisioner:
furyctl bootstrap apply
⏱ This phase may take a few minutes.
Logs are available at `/demo/infrastructure/bootstrap/logs/terraform.logs`.
- When `furyctl bootstrap apply` completes, inspect the output:
...
All the bootstrap components are up to date.
VPC and VPN ready:
VPC: vpc-0d2fd9bcb4f68379e
Public Subnets: [subnet-0bc905beb6622f446, subnet-0c6856acb42edf8f3, subnet-0272dcf88b2f5d12c]
Private Subnets: [subnet-072b1e3405f662c70, subnet-0a23db3b19e5a7ed7, subnet-08f4930148ab5223f]
Your VPN instance IPs are: [34.243.133.186]
...
In particular, take note of:
- VPC - `vpc-0d2fd9bcb4f68379e` in the example output above
- Private Subnets - `[subnet-072b1e3405f662c70, subnet-0a23db3b19e5a7ed7, subnet-08f4930148ab5223f]` in the example output above
These values are used in the cluster provisioning phase.
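If you lose track of these IDs, you can recover them later from the Terraform logs or with the AWS CLI. For example, assuming you kept the default 10.0.0.0/16 network CIDR:
# Find the VPC created by the bootstrap phase
aws ec2 describe-vpcs --filters "Name=cidr,Values=10.0.0.0/16" --query "Vpcs[].VpcId" --output text
# List all subnets (public and private) in that VPC, replacing <VPC_ID> with the ID printed above
aws ec2 describe-subnets --filters "Name=vpc-id,Values=<VPC_ID>" --query "Subnets[].SubnetId" --output text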
Cluster provisioning phase
In the cluster provisioning phase, `furyctl` automatically deploys a battle-tested private EKS cluster. To interact with the private EKS cluster, first connect to the private network via the OpenVPN server running on the bastion host.
Connect to the private network
- Create the `fury.ovpn` OpenVPN credentials file with `furyagent`:
furyagent configure openvpn-client \
--client-name fury \
--config /demo/infrastructure/bootstrap/secrets/furyagent.yml > fury.ovpn
🕵🏻‍♂️ Furyagent is a tool developed by SIGHUP to manage OpenVPN and SSH user access to the bastion host.
- Check that the `fury` user is now listed:
furyagent configure openvpn-client \
--list \
--config /demo/infrastructure/bootstrap/secrets/furyagent.yml
Output:
2022-12-09 15:29:02.853807 I | storage.go:146: Item pki/vpn-client/fury.crt found [size: 1094]
2022-12-09 15:29:02.853961 I | storage.go:147: Saving item pki/vpn-client/fury.crt ...
2022-12-09 15:29:02.975943 I | storage.go:146: Item pki/vpn/ca.crl found [size: 560]
2022-12-09 15:29:02.975991 I | storage.go:147: Saving item pki/vpn/ca.crl ...
+------+------------+------------+---------+--------------------------------+
| USER | VALID FROM | VALID TO | EXPIRED | REVOKED |
+------+------------+------------+---------+--------------------------------+
| fury | 2022-12-09 | 2023-12-09 | false | false 0001-01-01 00:00:00 |
| | | | | +0000 UTC |
+------+------------+------------+---------+--------------------------------+
- Open the `fury.ovpn` file with any OpenVPN Client.
- Connect to the OpenVPN Server via the chosen OpenVPN Client (or from the command line, as shown below).
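If you prefer the terminal over a GUI client, the same profile can usually be used directly with the openvpn binary, assuming it is installed on your host machine:
sudo openvpn --config fury.ovpn
Keep this process running in a dedicated terminal for as long as you need the VPN connection.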
Configure the cluster provisioner
The cluster provisioner takes a `cluster.yml` file as input. This file provides the provisioner with all the parameters needed to deploy the EKS cluster.
In the repository, you can find a template for this file at `/demo/infrastructure/cluster.yml`:
kind: Cluster
metadata:
name: fury-eks-demo
spec:
version: 1.24
network: <VPC_ID>
subnetworks:
- <PRIVATE_SUBNET1_ID>
- <PRIVATE_SUBNET2_ID>
- <PRIVATE_SUBNET3_ID>
dmzCIDRRange:
- 10.0.0.0/16
sshPublicKey: example-ssh-key # put your id_rsa.pub file content here
nodePoolsLaunchKind: "launch_templates"
nodePools:
- name: fury
version: null
minSize: 3
maxSize: 3
instanceType: t3.large
volumeSize: 50
executor:
# state:
# backend: s3
# config:
# bucket: <S3_BUCKET>
# key: furyctl/cluster
# region: <S3_BUCKET_REGION>
provisioner: eks
Open the file with a text editor and replace:
- `<VPC_ID>` with the VPC ID (`vpc-0d2fd9bcb4f68379e` in the example above) created in the previous phase.
- `<PRIVATE_SUBNET1_ID>` with the ID of the first private subnet (`subnet-072b1e3405f662c70` in the example above) created in the previous phase.
- `<PRIVATE_SUBNET2_ID>` with the ID of the second private subnet (`subnet-0a23db3b19e5a7ed7` in the example above) created in the previous phase.
- `<PRIVATE_SUBNET3_ID>` with the ID of the third private subnet (`subnet-08f4930148ab5223f` in the example above) created in the previous phase.
- (optional) As before, add the details of the S3 Bucket that holds the Terraform remote state.
⚠️ If you are using an S3 bucket to store the Terraform state, make sure to use a different key in `executor.state.config.key` than the one used in the bootstrap phase.
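Also note the `sshPublicKey` field: it expects the content of your SSH public key. Assuming your key is in the default location, you can print it from your host machine with:
cat ~/.ssh/id_rsa.pub
Paste the whole single-line output as the value of the field.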
Provision EKS Cluster
- Initialize the cluster provisioner:
furyctl cluster init
- Create EKS cluster:
furyctl cluster apply
⏱ This phase may take a few minutes.
Logs are available at `/demo/infrastructure/cluster/logs/terraform.logs`.
- When `furyctl cluster apply` completes, test the connection with the cluster:
export KUBECONFIG=/demo/infrastructure/cluster/secrets/kubeconfig
kubectl get nodes
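A couple of optional sanity checks with standard kubectl commands; the worker nodes should eventually report a Ready status and the core system pods should be Running:
kubectl get nodes -o wide
kubectl get pods -n kube-system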
Step 2 - Download Fury modules
`furyctl` can do a lot more than deploying infrastructure. In this section, you use `furyctl` to download the monitoring, logging, and ingress modules of the Fury distribution.
Inspect the Furyfile
`furyctl` needs a `Furyfile.yml` to know which modules to download.
For this tutorial, use the `Furyfile.yml` located at `/demo/Furyfile.yml`:
versions:
networking: v1.10.0
monitoring: v2.0.1
logging: v3.0.1
ingress: v1.13.1
dr: v1.10.1
auth: v0.0.2
aws: v2.0.0
bases:
- name: networking
- name: monitoring
- name: logging
- name: ingress
- name: aws
- name: dr
- name: opa
modules:
- name: aws
- name: dr
Download Fury modules
- Download the Fury modules with `furyctl`:
cd /demo/
furyctl vendor -H
- Inspect the downloaded modules in the `vendor` folder:
tree -d /demo/vendor -L 3
Output:
$ tree -d vendor -L 3
vendor
├── katalog
│   ├── aws
│   │   ├── cluster-autoscaler
│   │   ├── ebs-csi-driver
│   │   ├── load-balancer-controller
│   │   └── node-termination-handler
│   ├── dr
│   │   ├── tests
│   │   └── velero
│   ├── ingress
│   │   ├── cert-manager
│   │   ├── dual-nginx
│   │   ├── external-dns
│   │   ├── forecastle
│   │   ├── nginx
│   │   └── tests
│   ├── logging
│   │   ├── cerebro
│   │   ├── configs
│   │   ├── logging-operated
│   │   ├── logging-operator
│   │   ├── loki-configs
│   │   ├── loki-single
│   │   ├── opensearch-dashboards
│   │   ├── opensearch-single
│   │   ├── opensearch-triple
│   │   └── tests
│   ├── monitoring
│   │   ├── aks-sm
│   │   ├── alertmanager-operated
│   │   ├── blackbox-exporter
│   │   ├── configs
│   │   ├── eks-sm
│   │   ├── gke-sm
│   │   ├── grafana
│   │   ├── kube-proxy-metrics
│   │   ├── kube-state-metrics
│   │   ├── kubeadm-sm
│   │   ├── node-exporter
│   │   ├── prometheus-adapter
│   │   ├── prometheus-operated
│   │   ├── prometheus-operator
│   │   ├── tests
│   │   ├── thanos
│   │   └── x509-exporter
│   ├── networking
│   │   ├── calico
│   │   ├── ip-masq
│   │   ├── tests
│   │   └── tigera
│   └── opa
│       ├── gatekeeper
│       └── tests
└── modules
    ├── aws
    │   ├── iam-for-cluster-autoscaler
    │   ├── iam-for-ebs-csi-driver
    │   └── iam-for-load-balancer-controller
    └── dr
        ├── aws-velero
        ├── azure-velero
        └── gcp-velero
Step 3 - Installation
Terraform project
Each module can contain Kustomize bases or Terraform modules.
First of all, we need to initialize the additional Terraform project that creates the resources needed for DR (Velero) and AWS (EBS CSI Driver).
In the repository, you can find the `main.tf` file at `/demo/terraform/main.tf`. In this file, you need to change the values for the S3 bucket that will contain the state:
terraform {
  # backend "s3" {
  #   bucket = "<S3_BUCKET>"
  #   key    = "<MY_KEY>"
  #   region = "<S3_BUCKET_REGION>"
  # }
  required_version = ">= 0.12"
  required_providers {
    aws = "=3.37.0"
  }
}
Then, create a file `terraform.tfvars` with the following content (change the values according to your environment):
cluster_name = "fury-eks-demo"
velero_bucket_name = "velero-demo-sa"
Then apply the Terraform project:
cd /demo/terraform/
make init
make plan
make apply
After everything is applied, extract the kustomize patches we need in the next step with the following command:
make generate-output
Kustomize project
Kustomize allows you to group related Kubernetes resources together and combine them into more complex deployments. Moreover, it is flexible and enables a simple patching mechanism for additional customization.
To deploy the Fury distribution, use the following root `kustomization.yaml` located at `/demo/manifests/kustomization.yaml`:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ingress
- logging
- monitoring
- networking
- dr
- opa
- aws
This `kustomization.yaml` wraps the other `kustomization.yaml` files in the subfolders. For example, in `/demo/manifests/logging/kustomization.yaml`:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../vendor/katalog/logging/cerebro
- ../../vendor/katalog/logging/logging-operator
- ../../vendor/katalog/logging/logging-operated
- ../../vendor/katalog/logging/configs
- ../../vendor/katalog/logging/opensearch-single
- ../../vendor/katalog/logging/opensearch-dashboards
- resources/ingress.yml
patchesStrategicMerge:
- patches/opensearch-resources.yml
- patches/cerebro-resources.yml
Each `kustomization.yaml`:
- references the modules downloaded in the previous section
- patches the upstream modules (e.g. `patches/opensearch-resources.yml` limits the resources requested by OpenSearch)
- deploys some additional custom resources (e.g. `resources/ingress.yml`)
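If you want to preview the manifests that will be generated before applying them, and assuming the kustomize binary is available inside the container alongside the other tooling, you can render them locally:
kustomize build /demo/manifests | less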
Install the modules:
cd /demo/manifests/
make apply
# Due to some chicken-egg 🐔🥚 problem with custom resources you have to apply multiple times
make apply
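Once the second apply completes, you can check that the distribution's components are coming up; it may take a few minutes for every pod to reach the Running state:
kubectl get pods -A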
Step 4 - Explore the distribution
🎉 The distribution is finally deployed! In this section, you explore some of its features.
Setup local DNS
In Step 3, alongside the distribution, you have deployed Kubernetes ingresses to expose underlying services at the following HTTP routes:
- forecastle.fury.info
- grafana.fury.info
- opensearch-dashboards.fury.info
To access the ingresses more easily via the browser, configure your local DNS to resolve the ingresses to the internal load balancer IP:
- Get the address of the internal load balancer:
dig $(kubectl get svc ingress-nginx -n ingress-nginx --no-headers | awk '{print $4}')
Output:
...
;; ANSWER SECTION:
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <FIRST_IP>
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <SECOND_IP>
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <THIRD_IP>
...
- Add the following line to your machine's `/etc/hosts` file (not the container's):
<FIRST_IP> forecastle.fury.info cerebro.fury.info opensearch-dashboards.fury.info grafana.fury.info
Now, you can reach the ingresses directly from your browser.
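With the VPN still connected, you can also verify from your host machine that the hostname resolves and the ingress controller answers before switching to the browser:
curl -I http://forecastle.fury.info
Any HTTP status code in the response (even a redirect) means the request reached the NGINX ingress.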
Forecastle
Forecastle is an open-source control panel where you can access all exposed applications running on Kubernetes.
Navigate to http://forecastle.fury.info to see all the other ingresses deployed, grouped by namespace.
OpenSearch Dashboards
OpenSearch Dashboards is an open-source analytics and visualization platform for OpenSearch. OpenSearch Dashboards lets you perform advanced data analysis and visualize data in various charts, tables, and maps. You can use it to search, view, and interact with data stored in OpenSearch indices.
Navigate to http://opensearch-dashboards.fury.info or click the OpenSearch Dashboards icon from Forecastle.
Read the logs
The Fury Logging module already collects data from the following indices:
- kubernetes-*
- systemd-*
- ingress-controller-*
- events-*
Click on Discover to see the main dashboard. In the top left corner, select one of the indices to explore the logs.
Grafana
Grafana is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on and understand your metrics.
Navigate to http://grafana.fury.info or click the Grafana icon from Forecastle.
Fury provides some pre-configured dashboards to visualize the state of the cluster. Examine an example dashboard:
- Click on the search icon on the left sidebar.
- Write `pods` and press enter.
- Select the `Kubernetes/Pods` dashboard.
This is what you should see:
Step 5 (optional) - Advanced Distribution usage
(optional) Create a backup with Velero
- Create a backup with the `velero` command-line utility:
velero backup create --from-schedule manifests test -n kube-system
- Check the backup status:
velero backup get -n kube-system
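To restore from the backup you just created, velero offers a matching restore command (here `test` is the backup name used above; adjust it if you chose a different one):
velero restore create --from-backup test -n kube-system
velero restore get -n kube-system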
(optional) Enforce a Policy with OPA Gatekeeper
This section is under construction.
Please refer to the OPA module's documentation while we work on this part of the guide. Sorry for the inconvenience.
Step 6 - Teardown
Clean up the demo environment:
- Delete the namespaces containing external resources like volumes and load balancers:
kubectl delete namespace logging monitoring ingress-nginx
Wait until the namespaces are completely deleted, or until:
kubectl get pvc -A
# and
kubectl get svc -A
return no PVCs and no services of type LoadBalancer.
- Destroy the additional Terraform resources used by Velero:
cd /demo/terraform/
terraform destroy
- Destroy EKS cluster:
cd /demo/infrastructure/
furyctl cluster destroy
- Some resources are created outside of Terraform: for example, creating a LoadBalancer service provisions an ELB. You can find a script that uses the AWS CLI to delete the target groups, load balancers, volumes, and snapshots associated with the EKS cluster:
✍🏻 Check that the `TAG_KEY` variable has the right value before running the script. It should end with the cluster name.
bash cleanup.sh
- Destroy network infrastructure (remember to disconnect from the VPN before deleting):
furyctl bootstrap destroy
- (Optional) Destroy the S3 bucket holding the Terraform state
aws s3api delete-objects --bucket $S3_BUCKET \
--delete "$(aws s3api list-object-versions --bucket $S3_BUCKET --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
aws s3api delete-bucket --bucket $S3_BUCKET
- Exit from the docker container:
exit
Conclusions
Congratulations, you made it! 🥳🥳
We hope you enjoyed this tour of Fury!
Issues/Feedback
In case you ran into any problems, feel free to open an issue here on GitHub.
Where to go next?
More tutorials:
More about Fury: