
Getting Started with Kube-Hetzner

Welcome to the Kube-Hetzner project! This guide is designed to help new team members, especially those new to Terraform and Kubernetes, quickly understand the project's infrastructure and get started with deployment and management.

1. Project Overview

1.1 Features

Key Features:

  • Maintenance-free: Automatic upgrades for both MicroOS and k3s.
  • Multi-architecture support: Compatible with various Hetzner Cloud instances (including ARM).
  • Hetzner private network: Minimizes latency for internal communication.
  • CNI Options: Choice between Flannel, Calico, or Cilium.
  • Ingress Controllers: Traefik, Nginx, or HAProxy with Hetzner Load Balancer integration.
  • Automatic HA: Default setup includes three control-plane nodes and two agent nodes for high availability.
  • Autoscaling: Node autoscaling via Kubernetes Cluster Autoscaler.
  • Storage: Optional Longhorn and Hetzner CSI for persistent storage, with encryption at rest.
  • Flexible Configuration: Extensive customization options via Terraform variables and Kustomization.

openSUSE MicroOS

  • Optimized container OS, mostly read-only filesystem for security.
  • Hardened by default (e.g., automatic IP ban for SSH).
  • Evergreen release, leveraging openSUSE Tumbleweed's rolling release.
  • Automatic updates and rollbacks using BTRFS snapshots.
  • Supports Kured for proper node draining and reboots in HA setups.
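
For context on the rollback mechanism: once a cluster is running, if an automatic update ever misbehaves you can roll a node back to its previous BTRFS snapshot by hand. A minimal sketch, assuming SSH access to the node and MicroOS's standard transactional-update tooling:

# On the affected node, roll back to the last known-good snapshot
sudo transactional-update rollback

# A reboot is required for the rollback to take effect
sudo reboot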

k3s

  • Certified Kubernetes Distribution, synced with upstream Kubernetes.
  • Fast deployment due to its single binary nature.
  • Batteries-included with in-cluster helm-controller.
  • Easy automatic updates via the system-upgrade-controller.
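
Once a cluster is up, you can inspect the upgrade plans the system-upgrade-controller acts on. A quick check, assuming the controller runs in its default system-upgrade namespace:

# List the k3s upgrade plans managed by the system-upgrade-controller
kubectl get plans -n system-upgrade

# Watch upgrade jobs as they run
kubectl get jobs -n system-upgrade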

1.2 Project Diagram

[Diagram: Home Lab Kubernetes Architecture. A Hetzner Cloud private network (10.0.0.0/16) hosts one control plane node and two worker nodes (cx21: 2 vCPU, 4 GB RAM) running openSUSE MicroOS. A Hetzner Cloud Load Balancer (lb11) routes HTTP(S) traffic to the Nginx Ingress controller for application routing and TLS termination; cert-manager issues and auto-renews TLS certificates; ArgoCD handles GitOps deployments and application management; Longhorn provides persistent storage with 2 replicas; Prometheus collects metrics and Grafana provides dashboards. Home lab applications are accessible via the homelab.local domain.]

Home Lab Kubernetes Architecture

Infrastructure:

  • 1 Control Plane Node (cx21 - 2vCPU, 4GB RAM)
  • 2 Worker Nodes (cx21 - 2vCPU, 4GB RAM each)
  • openSUSE MicroOS as base operating system
  • Hetzner Cloud Load Balancer (lb11) for ingress traffic

Core Services:

  • Nginx Ingress Controller: Handles external traffic routing
  • Cert-Manager: Manages TLS certificates
  • Longhorn: Distributed persistent storage with 2 replicas
  • ArgoCD: GitOps-based application deployment and management
  • Prometheus & Grafana: Monitoring and visualization

Access: Applications are accessible via the homelab.local domain
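
Since homelab.local is not a public domain, one simple way to make it resolve on your workstation is a hosts-file entry pointing at the load balancer. A minimal sketch (203.0.113.10 is a placeholder; substitute your load balancer's address):

# Look up the load balancer's public IP
hcloud load-balancer list

# Map the domain to it locally
echo "203.0.113.10 homelab.local" | sudo tee -a /etc/hosts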

2. Prerequisites

Before you begin, ensure you have the following:

  1. Hetzner Cloud Account: Sign up for free here.

  2. API Token: Create a Read & Write API token in your Hetzner Cloud Console (Project > Security > API Tokens). Keep this token secure.

  3. SSH Key Pair:

    1. Generate a passphrase-less ed25519 SSH key pair; refer to [[Linux SSH Key Generation]] (a minimal example follows this list).
    2. Note the paths to your private and public keys (e.g., ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub).
    3. For more details on SSH options, refer to docs/ssh.md.
  4. CLI Tools: Install the following command-line tools. The easiest way is using Homebrew (available on Linux, macOS, and Windows Subsystem for Linux):

    brew tap hashicorp/tap
    brew install hashicorp/tap/terraform # Or brew install opentofu
    brew install hashicorp/tap/packer # For initial snapshot creation only
    brew install kubectl
    brew install hcloud
    brew install coreutils # Provides 'timeout' command on macOS
  5. Hetzner Context: Create an hcloud CLI context for your project:

# Create an hcloud CLI context
hcloud context create landing-zone
# You will be prompted to enter your API token.
# You should see a confirmation message: `Context landing-zone created and activated`.

# The active context is marked when you run:
hcloud context list
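
As referenced in step 3, here is a minimal way to generate the key pair and sanity-check the installed tools (the path and comment string are just examples; adjust to taste):

# Generate a passphrase-less ed25519 key pair
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519 -C "kube-hetzner"

# Verify the CLI tools are on your PATH
terraform version
packer version
kubectl version --client
hcloud version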

3. Update kube.tf File and openSUSE MicroOS Snapshot

3.1 Initialize Project

Navigate to the directory where you want your project to live and execute the following command. This script will create a new folder, download the kube.tf.example and hcloud-microos-snapshots.pkr.hcl files, and guide you through creating the initial MicroOS snapshot.

tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"

Create the TF_VAR_hcloud_token environment variable:

export TF_VAR_hcloud_token="YOUR_HETZNER_API_TOKEN"
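
To avoid re-exporting the token in every new shell, you can persist it in your shell profile. A sketch assuming bash (adapt for zsh, or use a tool like direnv); keep in mind this stores the token in plain text, so keep the file private:

# Append to your profile so new shells pick it up
echo 'export TF_VAR_hcloud_token="YOUR_HETZNER_API_TOKEN"' >> ~/.bashrc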

Update the kube.tf file:

# home-lab/kube.tf

locals {
  # Your Hetzner token - consider using environment variables for security
  hcloud_token = "" # Leave empty if using the TF_VAR_hcloud_token environment variable

  # Home lab specific settings
  lab_name = "homelab"
  domain   = "homelab.local"
}

module "kube-hetzner" {
  source = "github.com/kube-hetzner/terraform-hcloud-kube-hetzner"

  # Pass the provider configuration to the module
  providers = {
    hcloud = hcloud
  }

  # Hetzner Cloud Token
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token

  # Basic cluster configuration
  cluster_name = local.lab_name

  # Replace with your SSH keys (the ed25519 pair generated in the prerequisites)
  ssh_public_key  = file("~/.ssh/id_ed25519.pub")
  ssh_private_key = file("~/.ssh/id_ed25519")

  # Home lab networking
  network_region = "eu-central" # Change to your preferred region

  # Control plane - using a single smaller node for home lab
  control_plane_nodepools = [
    {
      name        = "control-plane"
      server_type = "cpx11"
      location    = "nbg1"
      labels      = []
      taints      = []
      count       = 1
    }
  ]

  # Worker nodes - adjust based on your home lab needs
  agent_nodepools = [
    {
      name        = "worker"
      server_type = "cpx11"
      location    = "nbg1"
      labels      = []
      taints      = []
      count       = 2
    }
  ]

  # Load Balancer configuration for home lab
  load_balancer_type     = "lb11"
  load_balancer_location = "nbg1"

  # Enable Longhorn for storage
  enable_longhorn = true

  # Configure cert-manager
  enable_cert_manager = true

  # Use the Nginx ingress controller
  ingress_controller = "nginx"

  # Keep k3s automatically upgraded on the stable channel
  automatically_upgrade_k3s = true
  initial_k3s_channel       = "stable"

  # Enable metrics server for basic monitoring
  enable_metrics_server = true
}

provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.51.0"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

variable "hcloud_token" {
  sensitive = true
  default   = ""
}
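
Server types, locations, and load balancer types vary in availability and price. Before settling on the values above for server_type, location, and load_balancer_type, you can list the valid options with the hcloud CLI:

# List available server types, datacenter locations, and LB types
hcloud server-type list
hcloud location list
hcloud load-balancer-type list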


4. Installation

4.1 Provisioning

Once your kube.tf file is customized and the MicroOS snapshot is created in your Hetzner project, you can proceed with the installation:

# 🏠 Navigate to your home-lab directory
cd home-lab


# 🧹 Clean up previous Terraform state files
rm -rf .terraform .terraform.lock.hcl


####################################################
################πŸš€ Initialization Options ##########
# -----------------------
# πŸ”° First-time initialization
terraform init

# πŸ”„ Re-initialization (changes backends)
terraform init -reconfigure

# ⬆️ Upgrade modules to latest versions
terraform init --upgrade


####################################################
############ πŸ“ Plan Creation Options ##############
# Create and save plan file
terraform plan -out=tfplan

# Export plan as JSON for inspection
terraform show -json tfplan > tfplan.json

# Export plan as text for review
terraform show tfplan > tfplan.txt


#####################################################
############## βš™οΈ Apply Options #####################
# βœ… Apply using saved plan file (safest)
terraform apply "tfplan"

# ⚑ Apply directly with auto-approval (use with caution)
terraform apply -auto-approve

4.2 Deliverables after terraform apply

Based on the configuration we've created, here's what will be installed automatically after running terraform apply:

| Service | Included? | Notes |
| --- | --- | --- |
| Nginx Ingress Controller | ✅ Yes | Manages external access to the K8s cluster; included because we set ingress_controller = "nginx" |
| Cert-Manager | ✅ Yes | Manages TLS certificates; included because we set enable_cert_manager = true |
| Longhorn | ✅ Yes | Distributed block storage for K8s; included because we set enable_longhorn = true |
| ArgoCD | ❌ No | GitOps continuous delivery; not included in our simplified configuration |
| Prometheus & Grafana | ❌ No | Not included in our simplified configuration |

The configuration we ended up with (after fixing compatibility issues) includes:

  1. K3s Kubernetes cluster (1 control plane, 2 worker nodes)
  2. Nginx Ingress Controller
  3. Cert-Manager
  4. Longhorn storage
  5. Metrics Server (basic monitoring)
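
After terraform apply completes, you can confirm these components are actually running. The namespace names below are the usual defaults and may differ slightly in your cluster:

# List every pod in the cluster to see all deployed components
kubectl --kubeconfig homelab_kubeconfig.yaml get pods -A

# Or check individual namespaces (names may vary)
kubectl --kubeconfig homelab_kubeconfig.yaml get pods -n cert-manager
kubectl --kubeconfig homelab_kubeconfig.yaml get pods -n longhorn-system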

5. Basic Usage

After your cluster is deployed, you can interact with it using kubectl.

5.1 Connecting to the Kube API

The module generates a homelab_kubeconfig.yaml file in your project directory after installation.

  • Directly with kubectl:
kubectl --kubeconfig homelab_kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
homelab-control-plane-ppt Ready control-plane,etcd,master 38m v1.33.3+k3s1
homelab-worker-mpz Ready <none> 38m v1.33.3+k3s1
homelab-worker-zky Ready <none> 38m v1.33.3+k3s1
  • Add the homelab_kubeconfig.yaml path to your KUBECONFIG environment variable:
export KUBECONFIG=/<path-to-your-project-folder>/homelab_kubeconfig.yaml
  • Generate the homelab_kubeconfig.yaml file manually: if you set create_kubeconfig = false in your kube.tf (a good security practice), you can generate the file yourself:
    terraform output --raw kubeconfig > homelab_kubeconfig.yaml
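
If you already manage other clusters, you can also merge the new kubeconfig into your main one instead of switching files. A sketch (back up ~/.kube/config first):

# Merge the homelab kubeconfig into your existing one
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:./homelab_kubeconfig.yaml kubectl config view --flatten > /tmp/merged_config
mv /tmp/merged_config ~/.kube/config

# Then pick the context you want to use
kubectl config get-contexts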

5.2 Get the Nodes' IPs


# If you set the KUBECONFIG environment variable
kubectl get nodes -o wide

# If you did not set KUBECONFIG environment variable
kubectl --kubeconfig homelab_kubeconfig.yaml get nodes -o wide

# Show the most important columns for your nodes
kubectl --kubeconfig homelab_kubeconfig.yaml get nodes -o custom-columns="NAME:.metadata.name,STATUS:.status.conditions[?(@.type=='Ready')].status,ROLE:.metadata.labels.node-role\.kubernetes\.io/control-plane,INTERNAL-IP:.status.addresses[?(@.type=='InternalIP')].address,EXTERNAL-IP:.status.addresses[?(@.type=='ExternalIP')].address,VERSION:.status.nodeInfo.kubeletVersion"

Connecting via SSH

You can SSH into any control plane node to manage your workloads directly from there:

ssh root@<control-plane-ip> -i /path/to/private_key -o StrictHostKeyChecking=no

Replace <control-plane-ip> with the public IP of one of your control plane nodes (you can get this from terraform output control_planes_public_ipv4 or hcloud server list).

Security Best Practice: Configure firewall_ssh_source in your kube.tf to restrict SSH access to your own IP address(es) instead of 0.0.0.0/0. Similarly, restrict firewall_kube_api_source for the Kube API.
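
For example, you can look up your current public IP and then restrict both firewall variables to it. A sketch (203.0.113.10 is a placeholder; use your own address):

# Find your public IPv4 address
curl -4 https://ifconfig.me

# Then, in kube.tf (illustrative values):
#   firewall_ssh_source      = ["203.0.113.10/32"]
#   firewall_kube_api_source = ["203.0.113.10/32"]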

6. Key Concepts and Advanced Configuration

This project offers extensive customization. Here are some key areas to explore further:

  • Nodepools: Define control plane and agent nodepools with various server types, locations, labels, and taints. Understand the implications of HA (odd number of control planes) and scaling.
  • CNI (Container Network Interface): Choose between Flannel, Calico, or Cilium. Cilium offers advanced features like Egress Gateway and Hubble observability.
  • Load Balancers: Configure the main application load balancer and an optional dedicated control plane load balancer.
  • Automatic Upgrades: Understand how MicroOS and k3s are automatically upgraded and how to manage or disable this behavior for non-HA setups.
  • Storage: Integrate Longhorn for distributed block storage or use Hetzner CSI for Hetzner Cloud Volumes.
  • Kustomize and Extra Manifests: Extend the cluster with your own Kubernetes manifests or Helm charts using the extra-manifests feature.
  • Firewall Rules: Customize network security with extra_firewall_rules and restrict access to SSH and Kube API.
  • SELinux: Learn how to work with SELinux using udica for container-specific policies instead of disabling it globally.
  • Rancher Integration: Optionally deploy Rancher Manager for multi-cluster management.

For a deep dive into every configuration option, refer to the LLMs and Kubernetes file, which provides a line-by-line explanation of the kube.tf configuration.

7. Takedown

To destroy your cluster and all associated Hetzner Cloud resources:

terraform destroy -auto-approve

If the destroy process hangs (often due to Hetzner LB or autoscaled nodes), you can use the cleanup script:

tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/cleanup.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"

Caution: These commands will delete all resources, including volumes. Use the dry run option if available (cleanup.sh offers this) before a full destroy.

This guide should provide a solid foundation for your journey with Kube-Hetzner. Feel free to explore the codebase and other documentation for more advanced topics.