tweaked readme and tfvars.example
This commit is contained in:
parent 5fe0004314
commit 6e47e4c30a
96 README.md
@@ -23,22 +23,22 @@

[Hetzner Cloud](https://hetzner.com) is a good cloud provider that offers very affordable prices for cloud instances, with data center locations in both Europe and the US.

-The goal of this project is to create an optimal and highly optimized Kubernetes installation that is easily maintained, secure, and automatically upgrades. We aimed for functionality as close as possible to GKE's auto-pilot.
+This project aims to create an optimal and highly optimized Kubernetes installation that is easily maintained, secure, and automatically upgraded. We aimed for functionality as close as possible to GKE's auto-pilot.

-In order to achieve this, we built it on the shoulders of giants, by choosing [openSUSE MicroOS](https://en.opensuse.org/Portal:MicroOS) as the base operating system, and [k3s](https://k3s.io/) as the Kubernetes engine.
+To achieve this, we built it on the shoulders of giants by choosing [openSUSE MicroOS](https://en.opensuse.org/Portal:MicroOS) as the base operating system and [k3s](https://k3s.io/) as the Kubernetes engine.

-_Please note that we are not affiliated to Hetzner, this is just an open source project striving to be an optimal solution for deploying and maintaining Kubernetes on Hetzner Cloud._
+_Please note that we are not affiliated with Hetzner; this is just an open-source project striving to be an optimal solution for deploying and maintaining Kubernetes on Hetzner Cloud._

### Features

-- Maintenance free with auto-upgrade to the latest version of MicroOS and k3s.
+- Maintenance-free with auto-upgrade to the latest version of MicroOS and k3s.
- Proper use of the Hetzner private network to minimize latency and remove the need for encryption.
- Automatic HA with the default setting of three control-plane nodes and two agent nodes.
- Super-HA: Nodepools for both control-plane and agent nodes can be in different locations.
- Possibility to have a single node cluster with a proper ingress controller.
-- Ability to add nodes and nodepools when the cluster running.
+- Ability to add nodes and nodepools when the cluster is running.
- Traefik ingress controller attached to a Hetzner load balancer with proxy protocol turned on.
-- Tons of flexible configuration options to suits all needs.
+- Tons of flexible configuration options to suit all needs.

_It uses Terraform to deploy as it's easy to use, and Hetzner provides a great [Hetzner Terraform Provider](https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs)._
@@ -48,7 +48,7 @@ _It uses Terraform to deploy as it's easy to use, and Hetzner provides a great [

## Getting Started

-Follow those simple steps, and your world's cheapest Kube cluster will be up and running in no time.
+Follow these simple steps, and your world's cheapest Kube cluster will be up and running.

### ✔️ Prerequisites
@@ -56,21 +56,22 @@ First and foremost, you need to have a Hetzner Cloud account. You can sign up fo

Then you'll need to have the [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli), [kubectl](https://kubernetes.io/docs/tasks/tools/), and [hcloud](https://github.com/hetznercloud/cli) (the Hetzner CLI) tools. The easiest way is to use the [homebrew](https://brew.sh/) package manager to install them (available on Linux, Mac, and Windows Subsystem for Linux).

```sh
brew install terraform
brew install kubectl
brew install hcloud
```
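
A quick way to confirm the three CLIs are installed and on your PATH (the exact version output will vary):

```sh
terraform -v
kubectl version --client
hcloud version
```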

### 💡 [Do not skip] Creating the terraform.tfvars file

1. Create a project in your [Hetzner Cloud Console](https://console.hetzner.cloud/), and go to **Security > API Tokens** of that project to grab the API key. Take note of the key! ✅
-2. Generate a passphrase-less ed25519 SSH key-pair for your cluster, take note of the respective paths of your private and public keys. Or, see our detailed [SSH options](https://github.com/kube-hetzner/kube-hetzner/blob/master/docs/ssh.md). ✅
+2. Generate a passphrase-less ed25519 SSH key pair for your cluster; take note of the respective paths of your private and public keys (see the example after this list). Or, see our detailed [SSH options](https://github.com/kube-hetzner/kube-hetzner/blob/master/docs/ssh.md). ✅
3. Copy `terraform.tfvars.example` to `terraform.tfvars`, and replace the values from steps 1 and 2. ✅
4. Make sure you have the latest Terraform version, ideally at least 1.1.0. You can check with `terraform -v`. ✅
-5. (Optional) There are other variables in `terraform.tfvars` that could be customized, like Hetzner region, and the node counts and sizes.
+5. (Optional) Other variables in `terraform.tfvars` can be customized, like the Hetzner region and the node counts and sizes.
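
A minimal sketch of steps 2 and 3, assuming the default key location that the example variables file also uses:

```sh
# Step 2: passphrase-less ed25519 key pair
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Step 3: start from the provided example and fill in your own values
cp terraform.tfvars.example terraform.tfvars
```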

-_It can also be used as a Terraform module, see the [examples](#examples) section, but basically you just copy the content of terraform.tfvars to the module body. More on the [Kube-Hetzner Terraform module](https://registry.terraform.io/modules/kube-hetzner/kube-hetzner/hcloud/latest) page._
+_One of the easiest ways to use this project is as a Terraform module; see the [examples](#examples) section or the [Kube-Hetzner Terraform module](https://registry.terraform.io/modules/kube-hetzner/kube-hetzner/hcloud/latest) page._

### 🎯 Installation
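
The install commands themselves are a standard Terraform flow; as a sketch (not necessarily the project's exact invocation):

```sh
# Download the providers and modules, then create the cluster
terraform init
terraform apply -auto-approve
```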
@@ -83,49 +84,50 @@ It will take around 5 minutes to complete, and then you should see a green outpu

## Usage

-When the cluster is up and running, you can do whatever you wish with it! 🎉
+When your brand new cluster is up and running, the sky is your limit! 🎉

-You can immediately kubectl into it (using the kubeconfig.yaml saved to the project's directory after the install). By doing `kubectl --kubeconfig kubeconfig.yaml`, but for more convenience, either create a symlink from `~/.kube/config` to `kubeconfig.yaml`, or add an export statement to your `~/.bashrc` or `~/.zshrc` file, as follows (you can get the path of kubeconfig.yaml by running `pwd`):
+You can immediately kubectl into it, using the `kubeconfig.yaml` saved to the project's directory after the installation, with `kubectl --kubeconfig kubeconfig.yaml`. For more convenience, either create a symlink from `~/.kube/config` to `kubeconfig.yaml` or add an export statement to your `~/.bashrc` or `~/.zshrc` file, as follows (you can get the path of `kubeconfig.yaml` by running `pwd`):

```sh
export KUBECONFIG=/<path-to>/kubeconfig.yaml
```
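
The symlink alternative mentioned above could look like this, with the same path placeholder as the export example:

```sh
mkdir -p ~/.kube
ln -sf /<path-to>/kubeconfig.yaml ~/.kube/config
```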

-_Once you start with Terraform, it's best not to change the state manually in Hetzner, otherwise when you try to scale up or down, or even destroy the cluster, you'll get an error._
+_Once you start with Terraform, it's best not to change the state manually in Hetzner; otherwise, you'll get an error when you try to scale up or down or even destroy the cluster._

### Scaling Nodes

-Two things can be scaled, the number of nodepools or the count of nodes in these nodepools. You have two list of nodepools you can add to in terraform.tfvars, the control plane nodepool list and the agent nodepool list. Both combined cannot exceed 255 nodepools (you extremely unlikely to reach this limit). As for the count of nodes per nodepools, if you raise your limits in Hetzner, you can have up to 64,670 nodes per nodepool (also very unlikely to need that much).
+Two things can be scaled: the number of nodepools or the number of nodes in these nodepools. You have two lists of nodepools you can add to in `terraform.tfvars`, the control plane nodepool list and the agent nodepool list. Combined, they cannot exceed 255 nodepools (you are extremely unlikely to reach this limit). As for the number of nodes per nodepool, if you raise your limits in Hetzner, you can have up to 64,670 nodes per nodepool (also very unlikely to be needed).

There are some limitations (to scaling down mainly) that you need to be aware of:

-_Once the cluster is created, you can change nodepool count, and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1), you can also rename a nodepool (if the count is taken to 0), but should not remove a nodepool from the list after the cluster is created. This is due to how subnets and IPs are allocated. The only nodepools you can remove are the ones at the end of each list of nodepools._
+_Once the cluster is up, you can change any nodepool count and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1); you can also rename a nodepool (if its count is taken to 0), but you should not remove a nodepool from the list once the cluster is up. That is due to how subnets and IPs get allocated. The only nodepools you can remove are those at the end of each list of nodepools._

-_However you can freely add others nodepools the end of the list if you want, and of course increase the node count. You can also decrease the node count, but make sure you drain the node in question before, otherwise it will leave your cluster in a bad state. The only nodepool that needs at least to have a count of 1 always, is the first control-plane nodepool, for obvious reasons._
+_However, you can freely add other nodepools at the end of the list and, of course, increase the node count. You can also decrease the node count, but make sure you drain the node in question beforehand; otherwise, it will leave your cluster in a bad state. For obvious reasons, the only nodepool that must always keep a count of at least 1 is the first control-plane nodepool._
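
A sketch of scaling a nodepool down safely; the node name is a placeholder, and `--delete-emptydir-data` assumes a reasonably recent kubectl:

```sh
# Drain the node first, then lower the count of its nodepool in terraform.tfvars and re-apply
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
terraform apply -auto-approve
```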

## High Availability

-By default, we have 3 control planes and 3 agents configured, with automatic upgrades and reboots of the nodes.
+By default, we have three control planes and three agents configured, with automatic upgrades and reboots of the nodes.

-If you want to remain HA (no downtime), it's important to **keep a number of control planes nodes of at least 3** (2 minimum to maintain quorum when 1 goes down for automated upgrades and reboot), see [Rancher's doc on HA](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/).
+If you want to remain HA (no downtime), it's essential to **keep at least three control-plane nodes** (two minimum to maintain quorum when one goes down for automated upgrades and reboots); see [Rancher's doc on HA](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/).

-Otherwise, it's important to turn off automatic upgrades of the OS only (k3s can continue to update without issue) for the control-plane nodes (when 2 or less control-plane nodes), and do the maintenance yourself.
+Otherwise, it's essential to turn off automatic OS upgrades for the control-plane nodes (k3s can continue to update without issue) when you have two or fewer control-plane nodes, and do the maintenance yourself.

## Automatic Upgrade

-By default, MicroOS gets upgraded automatically on each node, and reboot safely via [Kured](https://github.com/weaveworks/kured) installed in the cluster.
+By default, MicroOS gets upgraded automatically on each node, and nodes are rebooted safely via [Kured](https://github.com/weaveworks/kured), which is installed in the cluster.

-As for k3s, it also automatically upgrades thanks to Rancher's [system upgrade controller](https://github.com/rancher/system-upgrade-controller). By default it follows the k3s `stable` channel, but you can also change to `latest` one if needed, or specify a target version to upgrade to via the upgrade plan.
+As for k3s, it also automatically upgrades thanks to Rancher's [system upgrade controller](https://github.com/rancher/system-upgrade-controller). By default, it follows the k3s `stable` channel, but you can also change to the `latest` one if needed or specify a target version to upgrade to via the upgrade plan.

-You can copy and modify the [one in the templates](https://github.com/kube-hetzner/kube-hetzner/blob/master/templates/plans.yaml.tpl) for that! More on the subject in [k3s upgrades basic](https://rancher.com/docs/k3s/latest/en/upgrades/basic/).
+You can copy and modify the [one in the templates](https://github.com/kube-hetzner/kube-hetzner/blob/master/templates/plans.yaml.tpl) for that! More on the subject in [k3s upgrades](https://rancher.com/docs/k3s/latest/en/upgrades/basic/).
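
To see what the controller is doing, checks along these lines should work, assuming the `system-upgrade` namespace used elsewhere in this README:

```sh
kubectl -n system-upgrade get plans
kubectl -n system-upgrade get jobs
```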

_If you wish to turn off automatic MicroOS upgrades on a specific node, you need to ssh into it and issue the following command:_

```sh
systemctl --now disable transactional-update.timer
```
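
To turn automatic MicroOS upgrades back on for that node later, the inverse should do it:

```sh
systemctl enable --now transactional-update.timer
```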

-_To turn off k3s upgrades, you can either set the `k3s_upgrade=true` label in the node you want, or set it to `false`. To just remove it, apply:_
+_To turn off k3s upgrades, you can either set the `k3s_upgrade=true` label on the node you want or set it to `false`. To remove it, apply:_

```sh
kubectl -n system-upgrade label node <node-name> k3s_upgrade-
```
@@ -139,15 +141,16 @@ kubectl -n system-upgrade label node <node-name> k3s_upgrade-

Here is an example of an ingress to run an application with TLS; change the host to fit your needs in `examples/tls/ingress.yaml` and then deploy the example:

```sh
kubectl apply -f examples/tls/.
```

```yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: le
@@ -175,9 +178,9 @@ spec:

<summary>Single-node cluster</summary>

-Running a development cluster on a single node, without any high-availability is possible as well. You need one control plane nodepool with a count of 1, and one agent nodepool with a count of 0.
+Running a development cluster on a single node without any high availability is also possible. You need one control plane nodepool with a count of 1 and one agent nodepool with a count of 0.

-In this case, we don't deploy an external load-balancer, but use the default [k3s service load balancer](https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer) on the host itself and open up port 80 & 443 in the firewall (done automatically).
+In this case, we don't deploy an external load-balancer but use the default [k3s service load balancer](https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer) on the host itself and open up ports 80 & 443 in the firewall (done automatically).

</details>
@@ -199,28 +202,29 @@ module "kube-hetzner" {

## Debugging

-First and foremost, it depends, but it's always good to have a quick look into Hetzner quickly without having to login to the UI. That is where the `hcloud` cli comes in.
+First and foremost, it depends, but it's always good to have a quick look into Hetzner without logging in to the UI. That is where the `hcloud` cli comes in.

-- Activate it with `hcloud context create kube-hetzner`, it will prompt for your Hetzner API token, paste that and hit `enter`.
-- To check the nodes, if they are running, for instance, use `hcloud server list`.
-- To check the network use `hcloud network describe k3s`.
-- To see a look at the LB, use `hcloud loadbalancer describe traefik`.
+- Activate it with `hcloud context create kube-hetzner`; it will prompt for your Hetzner API token, paste that, and hit `enter`.
+- To check whether the nodes are running, use `hcloud server list`.
+- To check the network, use `hcloud network describe k3s`.
+- To look at the LB, use `hcloud loadbalancer describe traefik`.
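
On the Kubernetes side, a couple of generic checks are often useful too (nothing project-specific here):

```sh
kubectl get nodes -o wide
kubectl get events -A --sort-by=.lastTimestamp
```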

Then, for the rest, you'll often need to log in to your cluster via ssh; to do that, use:

```sh
ssh root@xxx.xxx.xxx.xxx -i ~/.ssh/id_ed25519 -o StrictHostKeyChecking=no
```

Then, for control-plane nodes, use `journalctl -u k3s` to see the k3s logs, and for agents, use `journalctl -u k3s-agent` instead.

-Last but not least, to see when the last reboot took place, you can use both `last reboot`and `uptime`.
+Last but not least, to see when the previous reboot took place, you can use both `last reboot` and `uptime`.

## Takedown

-If you want to takedown the cluster, you can proceed as follows:
+If you want to take down the cluster, you can proceed as follows:

```sh
terraform destroy -auto-approve
```
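
Afterwards, it can be reassuring to confirm that nothing was left behind in the Hetzner project:

```sh
hcloud server list
hcloud network list
hcloud volume list
```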
@@ -238,25 +242,25 @@ There is also a branch where openSUSE MicroOS came preinstalled with the k3s RPM

## Contributing

-🌱 This project currently installs openSUSE MicroOS via the Hetzner rescue mode, which makes things a few minutes slower. If you could **take a few minutes to send a support request to Hetzner, asking them to please add openSUSE MicroOS as a default image**, not just an ISO, it would be wonderful. The more requests they receive the likelier they are to add support for it, and if they do, that would cut the deploy time by half. The official link to openSUSE MicroOS is <https://get.opensuse.org/microos>, and their `OpenStack Cloud` image has full support for Cloud-init, so it's a great option to propose to them!
+🌱 This project currently installs openSUSE MicroOS via the Hetzner rescue mode, making things a few minutes slower. If you could **take a few minutes to send a support request to Hetzner, asking them to please add openSUSE MicroOS as a default image**, not just an ISO, it would be wonderful. The more requests they receive, the likelier they are to add support for it, and if they do, that will cut the deployment time by half. The official link to openSUSE MicroOS is <https://get.opensuse.org/microos>, and their `OpenStack Cloud` image has full support for Cloud-init, which would probably suit the Hetzner Ops team!

-About code contributions, they are **greatly appreciated**.
+Code contributions are very much **welcome**.

1. Fork the Project
2. Create your Branch (`git checkout -b AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin AmazingFeature`)
-5. Open a Pull Request
+5. Open a Pull Request targeting the `staging` branch.

<!-- ACKNOWLEDGEMENTS -->

## Acknowledgements

- [k-andy](https://github.com/StarpTech/k-andy) was the starting point for this project. It wouldn't have been possible without it.
-- [Best-README-Template](https://github.com/othneildrew/Best-README-Template) that made writing this readme a lot easier.
+- [Best-README-Template](https://github.com/othneildrew/Best-README-Template) made writing this readme a lot easier.
- [Hetzner Cloud](https://www.hetzner.com) for providing a solid infrastructure and terraform package.
- [Hashicorp](https://www.hashicorp.com) for the amazing terraform framework that makes all the magic happen.
-- [Rancher](https://www.rancher.com) for k3s, an amazing Kube distribution that is the very core engine of this project.
+- [Rancher](https://www.rancher.com) for k3s, an amazing Kube distribution that is the core engine of this project.
- [openSUSE](https://www.opensuse.org) for MicroOS, which is just next level Container OS technology.

[contributors-shield]: https://img.shields.io/github/contributors/mysticaltech/kube-hetzner.svg?style=for-the-badge
@@ -269,4 +273,4 @@ About code contributions, they are **greatly appreciated**.

[issues-url]: https://github.com/mysticaltech/kube-hetzner/issues
[license-shield]: https://img.shields.io/github/license/mysticaltech/kube-hetzner.svg?style=for-the-badge
[license-url]: https://github.com/mysticaltech/kube-hetzner/blob/master/LICENSE.txt
[product-screenshot]: https://github.com/kube-hetzner/kube-hetzner/raw/master/.images/kubectl-pod-all-17022022.png
terraform.tfvars.example

@@ -1,40 +1,40 @@

-# Only the first values starting with a * are obligatory, the rest can remain with their default values, or you
+# Only the first values starting with a * are obligatory; the rest can remain with their default values, or you
# could adapt them to your needs.
#
-# Note that some values, notably "location" and "public_key" have no effect after the initial cluster has been setup.
-# This is in order to keep terraform from re-provisioning all nodes at once which would loose data. If you want to update,
-# those, you should instead change the value here and then manually re-provision each node one-by-one. Grep for "lifecycle".
+# Note that some values, notably "location" and "public_key", have no effect after initializing the cluster.
+# This is to keep Terraform from re-provisioning all nodes at once, which would lose data. If you want to update
+# those, you should instead change the value here and manually re-provision each node one by one. Grep for "lifecycle".
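
A purely hypothetical sketch of that per-node re-provisioning with Terraform 1.1 or later; the resource address below is a placeholder you would look up first:

```sh
# Find the real address of the node you want to rebuild
terraform state list | grep hcloud_server
# Then recreate just that one node
terraform apply -replace='<address-of-that-node>'
```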

# * Your Hetzner project API token
hcloud_token = "xxxxxxxxxxxxxxxxxxYYYYYYYYYYYYYYYYYYYzzzzzzzzzzzzzzzzzzzzz"
# * Your public key
public_key = "/home/username/.ssh/id_ed25519.pub"
-# * Your private key, must be "private_key = null" when you want to use ssh-agent, for a Yubikey like device auth or an SSH key-pair with passphrase
+# * Your private key; it must be "private_key = null" when you want to use ssh-agent for Yubikey-like device authentication or an SSH key pair with a passphrase.
private_key = "/home/username/.ssh/id_ed25519"

# These can be customized, or left with the default values
# For Hetzner locations see https://docs.hetzner.com/general/others/data-centers-and-connection/
network_region = "eu-central" # change to `us-east` if location is ash

-# For the control-planes, at least 3 nodes is recommended for HA, otherwise you need to turn off automatic upgrade (see ReadMe).
-# As per rancher docs, it must be always an odd number, never even! See https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/
-# For instance, 1 is ok (non-HA), 2 not ok, 3 is ok (becomes HA). It does not matter if they are in the same nodepool or not! So they can be in different locations, and of different types.
+# For the control planes, three nodes are the minimum for HA. Otherwise, you need to turn off the automatic upgrade (see ReadMe).
+# As per rancher docs, it must always be an odd number, never even! See https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/
+# For instance, one is ok (non-HA), two is not ok, and three is ok (becomes HA). It does not matter if they are in the same nodepool or not! So they can be in different locations and of various types.

-# Of course, you can choose any number of nodepools you want, with the location you want. The only contraint on the location is that you need to stay in the same network region, basically Europe or US, see above.
-# For the server type, # The type of control plane nodes, the minimum instance supported is cpx11 (just a few cents more than cx11), see https://www.hetzner.com/cloud.
+# Of course, you can choose any number of nodepools you want, with the location you want. The only constraint on the location is that you need to stay in the same network region, basically Europe or the US (see above).
+# For the server type, the minimum instance supported is cpx11 (just a few cents more than cx11); see https://www.hetzner.com/cloud.

-# IMPORTANT: Before the your cluster is created, you can do anything you want with the nodepools, but you need at least one of each control plane and agent.
-# Once the cluster is created, you can change nodepool count, and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1),
-# you can also rename it (if the count is taken to 0), but do not remove a nodepool from the list after the cluster is created.
+# IMPORTANT: Before you create your cluster, you can do anything you want with the nodepools, but you need at least one of each, control plane and agent.
+# Once the cluster is up and running, you can change the nodepool count and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1);
+# you can also rename it (if the count is taken to 0), but do not remove a nodepool from the list.

-# The only nodepools that are safe to remove from the list when you edit it, are the ones at the end of the lists. This is due to how subnets and IPs are allocated (FILO).
-# You can however freely add others nodepools the end of each list if you want! The maximum number of nodepools you can create, combined for both lists is 255.
-# Also, before decreasing the count of any nodepools to 0, it's important to drain and cordon it the nodes in question, otherwise it will leave your cluster in a bad state.
+# The only nodepools that are safe to remove from the list when you edit it are the ones at the end of the lists. That is due to how subnets and IPs get allocated (FILO).
+# You can, however, freely add other nodepools at the end of each list if you want! The maximum number of nodepools you can create, combined for both lists, is 255.
+# Also, before decreasing the count of any nodepool to 0, it's essential to drain and cordon the nodes in question. Otherwise, it will leave your cluster in a bad state.

-# Before initializing the cluster, you can change all parameters and add or remove any nodepools. You just need at least one nodepool of each kind, control plane and agent.
-# The nodepool names are fully arbitrary, you can choose whatever you want, but no special characters or underscore, only alphanumeric characters and dashes are allowed.
+# Before initializing the cluster, you can change all parameters and add or remove any nodepools. You need at least one nodepool of each kind, control plane and agent.
+# The nodepool names are entirely arbitrary; you can choose whatever you want, but no special characters or underscores; only alphanumeric characters and dashes are allowed.

-# If you want to have a single node cluster, just have 1 control plane nodepools with a count of 1, and one agent nodepool with a count of 0.
+# If you want to have a single node cluster, have one control plane nodepool with a count of 1 and one agent nodepool with a count of 0.

# Example below:
@@ -100,9 +100,9 @@ agent_nodepools = [

load_balancer_type = "lb11"
load_balancer_location = "fsn1"

-### The following values are fully optional
+### The following values are entirely optional

-# If you want to use a specific Hetzner CCM and CSI version, set them below, otherwise leave as is for the latest versions
+# If you want to use a specific Hetzner CCM and CSI version, set them below; otherwise, leave them as-is for the latest versions
# hetzner_ccm_version = ""
# hetzner_csi_version = ""
@@ -111,17 +111,17 @@ load_balancer_location = "fsn1"

# traefik_acme_tls = true
# traefik_acme_email = "mail@example.com"

-# If you want to use disable the traefik ingress controller, you can. By default is it enabled!
+# If you want to disable the Traefik ingress controller, you can. The default is "true".
# traefik_enabled = false

-# If you want to disable the metric server, you can! By defaults it is enabled.
+# If you want to disable the metrics server, you can! The default is "true".
# metrics_server_enabled = false

-# If you want to allow non-control-plane workloads to run on the control-plane nodes set "true" below. The default is "false".
+# If you want to allow non-control-plane workloads to run on the control-plane nodes, set "true" below. The default is "false".
# True by default for single node clusters.
# allow_scheduling_on_control_plane = true

-# If you want to disable automatic upgrade of k3s, you can set this to false, default is "true".
+# If you want to disable the automatic upgrade of k3s, you can set this to false. The default is "true".
# automatically_upgrade_k3s = false

# Allows you to specify either stable, latest, or testing (defaults to stable), see https://rancher.com/docs/k3s/latest/en/upgrades/basic/
@@ -130,11 +130,11 @@ load_balancer_location = "fsn1"

# The cluster name, by default "k3s"
# cluster_name = ""

-# Whether to use the cluster name in the node name, in the form of {cluster_name}-{nodepool_name} the default is "true".
+# Whether to use the cluster name in the node name, in the form of {cluster_name}-{nodepool_name}; the default is "true".
# use_cluster_name_in_node_name = false

# Adding extra firewall rules, like opening a port
-# In this example with allow port TCP 5432 for a Postgres service we will open via a nodeport and allow outgoing SMTP traffic on port TCP 465
+# In this example, we allow port TCP 5432 for a Postgres service that we will open via a nodeport and also allow outgoing SMTP traffic on port TCP 465
# More info on the format here https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/firewall
# extra_firewall_rules = [
#   {