output "controlplanes_public_ip" {
  value       = concat([hcloud_server.first_control_plane.ipv4_address], hcloud_server.control_planes.*.ipv4_address)
  description = "The public IP addresses of the control plane servers."
}

output "agents_public_ip" {
  value       = hcloud_server.agents.*.ipv4_address
  description = "The public IP addresses of the agent servers."
}

/*
output "load_balancer_public_ip" {
  description = "The public IPv4 address of the Hetzner load balancer"
  value       = data.hcloud_load_balancer.traefik.ipv4
}
*/

Expose kubeconfig in outputs

* To do so, we need to ensure that the generated kubeconfig is part of
  Terraform's dependency graph. This has the additional benefit of no
  longer depending on local files, which should enable multi-user setups.
* It also means that we can't deploy CCM, CSI & Traefik from the local
  host, because kubeconfig.yaml is not available locally while the
  control plane is being provisioned, only afterwards.
* So we just run kubectl apply on the control plane itself, once k3s is
  ready (a sketch of that step follows this list).
* To do so, we need to deploy all manifests. I've merged the patches into
  a single kustomization.yaml file, because that makes deploying those
  files to the control-plane server easier.
* We could also put the Traefik config into the same kustomization file,
  which would save one of the file provisioner blocks. I didn't want this
  PR to get any bigger and will consider merging that config later on.
  kustomization.yaml is small enough that we could yamlencode() it and
  store the patches in separate files again, rather than as inline
  strings, which is kind of ugly.
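
A minimal sketch of that post-install step, applying the merged kustomization
from the first control plane. Apart from hcloud_server.first_control_plane,
the resource name, the ssh_private_key variable, the file paths and the
manifest names in the resources list are illustrative assumptions, not the
module's actual provisioner blocks:

# Sketch only: render the merged kustomization.yaml on the first control plane
# and apply it there once k3s has come up. All names except
# hcloud_server.first_control_plane are assumed.
resource "null_resource" "kustomization_apply" {
  depends_on = [hcloud_server.first_control_plane]

  connection {
    host        = hcloud_server.first_control_plane.ipv4_address
    user        = "root"
    private_key = var.ssh_private_key # assumed variable
  }

  # Provisioners run in declaration order, so create the target directory first.
  provisioner "remote-exec" {
    inline = ["mkdir -p /var/post_install"]
  }

  # yamlencode() keeps kustomization.yaml readable while the patches stay in
  # separate files instead of inline strings.
  provisioner "file" {
    content = yamlencode({
      apiVersion = "kustomize.config.k8s.io/v1beta1"
      kind       = "Kustomization"
      resources  = ["ccm.yaml", "csi.yaml", "traefik.yaml"] # assumed manifests
    })
    destination = "/var/post_install/kustomization.yaml"
  }

  # Wait until k3s has written its kubeconfig, then apply everything.
  provisioner "remote-exec" {
    inline = [
      "timeout 180 bash -c 'until [ -f /etc/rancher/k3s/k3s.yaml ]; do sleep 2; done'",
      "kubectl apply -k /var/post_install",
    ]
  }
}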

output "kubeconfig_file" {
  value       = local.kubeconfig_external
  description = "Kubeconfig file content with the external IP address"
  sensitive   = true
}
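
Because the kubeconfig now lives in Terraform state rather than on disk,
callers who still want a local file can materialize this output themselves,
for example with "terraform output -raw kubeconfig_file > kubeconfig.yaml", or
from a wrapping configuration via the hashicorp/local provider. A sketch,
assuming the module is consumed under the illustrative name module.kube_hetzner:

# Sketch: write the kubeconfig output to disk from a wrapping configuration.
# "module.kube_hetzner" is an assumed module name; requires hashicorp/local >= 2.2.
resource "local_sensitive_file" "kubeconfig" {
  content         = module.kube_hetzner.kubeconfig_file
  filename        = "${path.root}/kubeconfig.yaml"
  file_permission = "0600"
}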

output "kubeconfig" {
  description = "Structured kubeconfig data to supply to other providers"
  value       = local.kubeconfig_data
  sensitive   = true
}
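
The structured variant is meant to be wired straight into other providers. A
sketch of what that consumption could look like, again using the illustrative
module name module.kube_hetzner; the field names are assumptions and not
verified against local.kubeconfig_data:

# Sketch: configure the kubernetes provider from the structured output.
# The field names below (host, client_certificate, ...) are assumed.
provider "kubernetes" {
  host                   = module.kube_hetzner.kubeconfig.host
  client_certificate     = module.kube_hetzner.kubeconfig.client_certificate
  client_key             = module.kube_hetzner.kubeconfig.client_key
  cluster_ca_certificate = module.kube_hetzner.kubeconfig.cluster_ca_certificate
}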