Merge pull request #152 from kube-hetzner/network-subnet-dissociation

Network subnet dissociation
Karim Naufal 2022-04-13 14:23:04 +02:00 committed by GitHub
commit a6522814a9
7 changed files with 45 additions and 27 deletions


@@ -97,17 +97,13 @@ _Once you start with Terraform, it's best not to change the state manually in He
 ### Scaling Nodes

+To scale the number of nodes up or down, just make sure to properly `kubectl drain` the nodes in question first if scaling down. Then edit your `terraform.tfvars` and re-apply Terraform with `terraform apply -auto-approve`. Two things can be scaled: the number of nodepools, and the count of nodes in each nodepool. There are two lists of nodepools you can add to in terraform.tfvars: the control plane nodepool list and the agent nodepool list. Both combined cannot exceed 255 nodepools (you are extremely unlikely to reach this limit). As for the count of nodes per nodepool, if you raise your limits in Hetzner, you can have up to 64,670 nodes per nodepool (also very unlikely to be needed).
+
+About nodepools, `terraform.tfvars.example` has a clear example of how to configure them.
+
 There are some limitations (to scaling down mainly) that you need to be aware of:

-_Once the cluster is created, you can change nodepool count, and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1), you can also rename a nodepool (if the count is taken to 0), but should not remove a nodepool from the list after the cluster is created. This is due to how IPs are allocated to the nodes, and how the Hetzner API works._
+_Once the cluster is created, you can change the nodepool count and even set it to 0 (for the first control-plane nodepool, the minimum is 1); you can also rename a nodepool (if its count is first taken to 0), but you should not remove a nodepool from the middle of a list after the cluster is created. This is due to how subnets and IPs are allocated. The only nodepools you can remove are the ones at the end of each list of nodepools._

-_However when a cluster is already initialized, you cannot add more control plane nodepools (you can only add nodes to the already created control plane nodepools). As for the agent nodepools, you can freely add other agent nodepools at the end of the list if you want._
-
-_Also, before decreasing the count of any nodepools to 0, it's important to drain and cordon the nodes in question, otherwise it will leave your cluster in a bad state._
+_However, you can freely add other nodepools at the end of each list if you want, and of course increase the node count. You can also decrease the node count, but make sure you drain the nodes in question first, otherwise it will leave your cluster in a bad state. The only nodepool that always needs a count of at least 1 is the first control-plane nodepool, for obvious reasons._

 ## High Availability
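
For illustration only (not part of this diff), scaling an agent nodepool down might look like the following sketch; the nodepool name and the drained node name are hypothetical:

```hcl
# terraform.tfvars -- hypothetical sketch of scaling a nodepool from 3 down to 2 nodes.
# First drain the node that will be removed (node name is illustrative):
#   kubectl drain <cluster-name>-agent-small-2 --ignore-daemonsets --delete-emptydir-data
# then lower the count and run: terraform apply -auto-approve
agent_nodepools = [
  {
    name        = "agent-small",
    server_type = "cpx11",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    count       = 2 # was 3
  }
]
```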


@@ -12,11 +12,9 @@ module "agents" {
   placement_group_id = var.placement_group_disable ? 0 : element(hcloud_placement_group.agent.*.id, ceil(each.value.index / 10))
   location = each.value.location
   server_type = each.value.server_type
-  ipv4_subnet_id = hcloud_network_subnet.subnet[[for i, v in var.agent_nodepools : i if v.name == each.value.nodepool_name][0] + length(var.control_plane_nodepools) + 1].id
-  # We leave some room so 100 eventual Hetzner LBs that can be created perfectly safely
-  # It leaves the subnet with 254 x 254 - 100 = 64416 IPs to use, so probably enough.
-  private_ipv4 = cidrhost(local.network_ipv4_subnets[[for i, v in var.agent_nodepools : i if v.name == each.value.nodepool_name][0] + length(var.control_plane_nodepools) + 1], each.value.index + 101)
+  ipv4_subnet_id = hcloud_network_subnet.agent[[for i, v in var.agent_nodepools : i if v.name == each.value.nodepool_name][0]].id
+  private_ipv4 = cidrhost(hcloud_network_subnet.agent[[for i, v in var.agent_nodepools : i if v.name == each.value.nodepool_name][0]].ip_range, each.value.index + 101)

   labels = {
     "provisioner" = "terraform",
@@ -24,7 +22,7 @@ module "agents" {
   }

   depends_on = [
-    hcloud_network_subnet.subnet
+    hcloud_network_subnet.agent
   ]
 }
@@ -80,6 +78,6 @@ resource "null_resource" "agents" {
   depends_on = [
     null_resource.first_control_plane,
-    hcloud_network_subnet.subnet
+    hcloud_network_subnet.agent
   ]
 }
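
As a reading aid (not part of the diff), the bracketed `for` expression above resolves a nodepool name to its index in the nodepool list, and that index then selects the matching subnet resource. A minimal standalone sketch with hypothetical values:

```hcl
# Hypothetical example of the index-lookup pattern used above.
locals {
  agent_nodepools = [
    { name = "agent-small" }, # index 0
    { name = "agent-big" },   # index 1
  ]
  nodepool_name = "agent-big"

  # The for expression yields [1]; [0] takes the first match, so pool_index == 1,
  # which would select hcloud_network_subnet.agent[1] for nodes of this pool.
  pool_index = [for i, v in local.agent_nodepools : i if v.name == local.nodepool_name][0]
}
```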


@@ -12,11 +12,11 @@ module "control_planes" {
   placement_group_id = var.placement_group_disable ? 0 : element(hcloud_placement_group.control_plane.*.id, ceil(each.value.index / 10))
   location = each.value.location
   server_type = each.value.server_type
-  ipv4_subnet_id = hcloud_network_subnet.subnet[[for i, v in var.control_plane_nodepools : i if v.name == each.value.nodepool_name][0] + 1].id
+  ipv4_subnet_id = hcloud_network_subnet.control_plane[[for i, v in var.control_plane_nodepools : i if v.name == each.value.nodepool_name][0]].id
   # We leave some room so 100 eventual Hetzner LBs that can be created perfectly safely
   # It leaves the subnet with 254 x 254 - 100 = 64416 IPs to use, so probably enough.
-  private_ipv4 = cidrhost(local.network_ipv4_subnets[[for i, v in var.control_plane_nodepools : i if v.name == each.value.nodepool_name][0] + 1], each.value.index + 101)
+  private_ipv4 = cidrhost(hcloud_network_subnet.control_plane[[for i, v in var.control_plane_nodepools : i if v.name == each.value.nodepool_name][0]].ip_range, each.value.index + 101)

   labels = {
     "provisioner" = "terraform",
@@ -24,7 +24,7 @@ module "control_planes" {
   }

   depends_on = [
-    hcloud_network_subnet.subnet
+    hcloud_network_subnet.control_plane
   ]
 }
@@ -83,6 +83,6 @@ resource "null_resource" "control_planes" {
   depends_on = [
     null_resource.first_control_plane,
-    hcloud_network_subnet.subnet
+    hcloud_network_subnet.control_plane
   ]
 }
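
For context (not part of the diff): `cidrhost` maps a host number into a subnet, and the `+ 101` offset keeps host numbers 1-100 free for the eventual Hetzner load balancers mentioned in the comment above. A hypothetical evaluation, assuming a control plane subnet of `10.255.0.0/16`:

```hcl
# Illustrative only; the subnet value below is an assumption.
locals {
  subnet_ip_range = "10.255.0.0/16"
  first_node_ip   = cidrhost(local.subnet_ip_range, 0 + 101) # "10.255.0.101"
  second_node_ip  = cidrhost(local.subnet_ip_range, 1 + 101) # "10.255.0.102"
}
```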


@@ -57,7 +57,7 @@ resource "null_resource" "first_control_plane" {
   }

   depends_on = [
-    hcloud_network_subnet.subnet["control_plane"]
+    hcloud_network_subnet.control_plane
   ]
 }


@@ -205,7 +205,7 @@ locals {
-  # The first two subnets are respectively the default subnet 10.0.0.0/16 use for potientially anything and 10.1.0.0/16 used for control plane nodes.
-  # the rest of the subnets are for agent nodes in each nodepools.
-  network_ipv4_subnets = [for index in range(length(var.control_plane_nodepools) + length(var.agent_nodepools) + 1) : cidrsubnet(local.network_ipv4_cidr, 8, index)]
+  network_ipv4_subnets = [for index in range(256) : cidrsubnet(local.network_ipv4_cidr, 8, index)]

   # disable k3s extras
   disable_extras = concat(["local-storage"], local.is_single_node_cluster ? [] : ["servicelb"], var.traefik_enabled ? [] : ["traefik"], var.metrics_server_enabled ? [] : ["metrics-server"])
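
For reference (not part of the diff), `cidrsubnet(base, 8, index)` carves the base CIDR into 256 fixed /16 children when the base is a /8. A hypothetical evaluation, assuming the module's `network_ipv4_cidr` is `10.0.0.0/8`:

```hcl
# Illustrative only; the base CIDR is an assumption.
locals {
  network_ipv4_cidr    = "10.0.0.0/8"
  network_ipv4_subnets = [for index in range(256) : cidrsubnet(local.network_ipv4_cidr, 8, index)]
  # network_ipv4_subnets[0]   == "10.0.0.0/16"
  # network_ipv4_subnets[1]   == "10.1.0.0/16"
  # network_ipv4_subnets[255] == "10.255.0.0/16"
}
```

Precomputing all 256 subnets makes the list independent of the nodepool counts, so adding or removing nodepools no longer shifts existing subnet indices.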

main.tf

@@ -13,8 +13,19 @@ resource "hcloud_network" "k3s" {
   ip_range = local.network_ipv4_cidr
 }

-resource "hcloud_network_subnet" "subnet" {
-  count = length(local.network_ipv4_subnets)
+# We start from the end of the subnets cidr array,
+# as we will likely have fewer control plane nodepools than agent ones.
+resource "hcloud_network_subnet" "control_plane" {
+  count = length(local.control_plane_nodepools)
+  network_id = hcloud_network.k3s.id
+  type = "cloud"
+  network_zone = var.network_region
+  ip_range = local.network_ipv4_subnets[255 - count.index]
+}
+
+# Here we start at the beginning of the subnets cidr array
+resource "hcloud_network_subnet" "agent" {
+  count = length(local.agent_nodepools)
   network_id = hcloud_network.k3s.id
   type = "cloud"
   network_zone = var.network_region
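
As a reading aid (not part of the diff): with the 10.0.0.0/8 base assumed earlier, control plane subnets are taken from the top of the precomputed array and agent subnets from the bottom, so each list can grow without renumbering the other's subnets:

```hcl
# Hypothetical resulting ranges for two control plane nodepools,
# given ip_range = local.network_ipv4_subnets[255 - count.index]:
#   hcloud_network_subnet.control_plane[0].ip_range == "10.255.0.0/16"
#   hcloud_network_subnet.control_plane[1].ip_range == "10.254.0.0/16"
# Agent subnets come from the low end of the same array, so the two
# resources never compete for indices.
```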
@@ -73,7 +84,8 @@ resource "null_resource" "destroy_traefik_loadbalancer" {
   depends_on = [
     local_sensitive_file.kubeconfig,
     null_resource.control_planes[0],
-    hcloud_network_subnet.subnet,
+    hcloud_network_subnet.control_plane,
+    hcloud_network_subnet.agent,
     hcloud_placement_group.control_plane,
     hcloud_placement_group.agent,
     hcloud_network.k3s,


@@ -23,12 +23,16 @@ network_region = "eu-central" # change to `us-east` if location is ash
 # Of course, you can choose any number of nodepools you want, with the location you want. The only constraint on the location is that you need to stay in the same network region, basically Europe or US, see above.
 # For the server type, the minimum instance supported is cpx11 (just a few cents more than cx11), see https://www.hetzner.com/cloud.
-# IMPORTANT: Once the cluster is created, you can change nodepool count, and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1),
-# You can also rename it (if the count is taken to 0), but do not remove a nodepool from the list after the cluster is created. This is due to how IPs are allocated.
-# Once the cluster is initialized, you cannot add more control plane nodepools. You can freely add other agent nodepools at the end of the list if you want!
+# IMPORTANT: Before your cluster is created, you can do anything you want with the nodepools, but you need at least one of each kind, control plane and agent.
+# Once the cluster is created, you can change the nodepool count, and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1);
+# you can also rename a nodepool (if the count is taken to 0), but do not remove a nodepool from the list after the cluster is created.
+# The only nodepools that are safe to remove from a list are the ones at its end. This is due to how IPs are allocated.
+# You can however freely add other nodepools at the end of each list if you want! The maximum number of nodepools you can create, combined for both lists, is 255.
 # Also, before decreasing the count of any nodepools to 0, it's important to drain and cordon the nodes in question first, otherwise it will leave your cluster in a bad state.
-# Before initializing the cluster, you can change all parameters and add or remove any nodepools.
+# Before initializing the cluster, you can change all parameters and add or remove any nodepools. You just need at least one nodepool of each kind, control plane and agent.
+# The nodepool names are fully arbitrary; you can choose whatever you want, but no special characters or underscores, only alphanumeric characters and dashes are allowed.
 # If you want to have a single node cluster, just have 1 control plane nodepool with a count of 1, and one agent nodepool with a count of 0.
@@ -41,7 +45,7 @@ control_plane_nodepools = [
     location    = "fsn1",
     labels      = [],
     taints      = [],
-    count       = 2
+    count       = 1
   },
   {
     name        = "control-plane-nbg1",
@@ -50,6 +54,14 @@ control_plane_nodepools = [
     labels      = [],
     taints      = [],
     count       = 1
+  },
+  {
+    name        = "control-plane-hel1",
+    server_type = "cpx11",
+    location    = "hel1",
+    labels      = [],
+    taints      = [],
+    count       = 1
   }
 ]
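
To illustrate the single-node case mentioned in the comments above (a sketch, not part of this diff; the agent pool name is arbitrary):

```hcl
# Minimal single-node cluster: one control plane node, zero agents.
control_plane_nodepools = [
  {
    name        = "control-plane-fsn1",
    server_type = "cpx11",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    count       = 1
  }
]

agent_nodepools = [
  {
    name        = "agent-small",
    server_type = "cpx11",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    count       = 0
  }
]
```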