After deploying the app on a bucket provisioned from the web interface, let's see how Dagger can be leveraged to extend our deployment pipeline using the Cloudformation relay.
## Prerequisites
### Reminder
#### Guidelines
The provisioning strategy detailed below follows S3 best practices. To remain agnostic of your current AWS proficiency, it relies heavily on the S3 and Cloudformation documentation.
#### Relays
When developing a plan based on relays, the first thing to do is read their universe reference: it summarizes the expected inputs and their corresponding formats. [<u>Here</u>](https://dagger.io/aws/cloudformation) is the Cloudformation one.
### Setup
1. Initialize a new folder and a new workspace
```bash
mkdir infra-provisioning
cd ./infra-provisioning
dagger init
```
2. Create a new environment
```bash
dagger new s3-provisioning
cd ./.dagger/env/s3-provisioning/plan/ # Personal preference: working directly inside the plan
```
3. Create a `main.cue` file with its corresponding `main` package
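At this stage, a minimal sketch of `main.cue` only declares the package (the path below assumes the layout created in step 2):

```cue
// .dagger/env/s3-provisioning/plan/main.cue
package main
```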
Now that a plan has been set, let's implement the Cloudformation template and convert it to a Cue definition for further flexibility.
### Template creation
The idea here is to follow best practices for [<u>S3 bucket</u>](https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html) provisioning. Thankfully, the AWS documentation contains a working [<u>Cloudformation template</u>](https://docs.aws.amazon.com/fr_fr/AWSCloudFormation/latest/UserGuide/quickref-s3.html#scenario-s3-bucket-website) that fits 95% of our needs.
1. Tweaking the template: removing some of the outputs
The [<u>template</u>](https://docs.aws.amazon.com/fr_fr/AWSCloudFormation/latest/UserGuide/quickref-s3.html#scenario-s3-bucket-website) has far more outputs than necessary, as we just want to retrieve the bucket name:
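As a sketch, the trimmed `Outputs` section could look like the JSON fragment below: a single `Name` output returning the bucket ARN, which we will later read back as `cfnStack.outputs.Name`. The `S3Bucket` logical ID is an assumption, matching the resource name used in the AWS quickref template:

```json
"Outputs": {
  "Name": {
    "Value": { "Fn::GetAtt": ["S3Bucket", "Arn"] },
    "Description": "Bucket ARN; the bucket name is this value without its arn:aws:s3::: prefix"
  }
}
```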
Once you get used to Cue, you might write Cloudformation templates directly in this language. As most of the current examples are written either in JSON or in YAML, let's see how to lazily convert them to Cue (optional but recommended).
###### 1. Modify main.cue
We will temporarily modify `main.cue` to perform the conversion.
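One lazy way to do it, as a sketch: embed the JSON template as a raw string, unmarshal it with Cue's `encoding/json` package, and print the evaluated value with `dagger query` (the `template` and `s3Template` field names are just placeholders):

```cue
package main

import "encoding/json"

// Paste the full JSON template between the raw-string markers
template: #"""
    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {}
    }
    """#

// Unmarshal the JSON document into a Cue value.
// `dagger query s3Template` then prints the evaluated value; since JSON is
// valid Cue, the output can be pasted straight back into a Cue file.
s3Template: json.Unmarshal(template)
```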
This Cue version of the JSON template is going to be integrated inside our provisioning plan. Save the output for the next steps of the guide.
## Personal plan
With the Cloudformation template now finished, tested, and converted to Cue, we can enter the last part of our guide: piping everything together inside our personal plan.
As our plan relies on [<u>Cloudformation's relay</u>](https://dagger.io/aws/cloudformation), let's dissect the expected inputs by gradually incorporating them into our plan.
As seen before in the documentation, values starting with `*` are default values. However, as plan developers, we may need to add default values to relay inputs that don't have one: Cue gives you this flexibility (cf. the `config` value detailed below).
The config values are all part of the `aws` relay. As you can see in its reference, none of the 3 required inputs has a default option.
For the sake of the exercise, let's say that our company's policy is to mainly deploy in the `us-east-2` region. Having this value set as a default option could be a smart and efficient decision for our dev teams. Let's see how to implement it:
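A minimal sketch of the plan at this stage, with only the `aws.#Config` relay imported and the company default region set (the secret inputs are intentionally left without defaults):

```cue
package main

import (
    "dagger.io/aws" // AWS relay, provides aws.#Config
)

// AWS account: credentials and region
awsConfig: aws.#Config & {
    // Default to the company region, while still accepting an override as an input
    region: *"us-east-2" | string @dagger(input)
}
```

Running the plan at this point still fails, because the two secret inputs are missing: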
```bash
dagger up # Try to run the plan. As expected, we encounter a failure
# Output:
# 9:07PM ERR system | required input is missing input=awsConfig.accessKey
# 9:07PM ERR system | required input is missing input=awsConfig.secretKey
# 9:07PM FTL system | some required inputs are not set, please re-run with `--force` if you think it's a mistake missing=0s
```
In the snippet above, the `awsConfig.region` key now has a default value set, which wasn't the case when we first imported the base relay.
Furthermore, as the failed execution shows, the `dagger up` command still fails because of the unspecified secret inputs.
3. Integrating the Cloudformation relay
Now that the `config` definition is properly set, we can import the Cloudformation relay and fill it in:
"dagger.io/aws" // <-- Import AWS relay to instanciate aws.#Config
"dagger.io/random" // <-- Import Random relay to instanciate random.#String
"dagger.io/aws/cloudformation" // <-- Import Cloudformation relay to instanciate aws.#Cloudformation
)
// AWS account: credentials and region
awsConfig: aws.#Config & { // Assign an aws.#Config definition to a field named `awsConfig`
// awsConfig will be a directly requestable key : `dagger query awsConfig`
// awsConfig sets the region to either an input, or a default string: "us-east-2"
region: *"us-east-2" | string @dagger(input)
// As we declare an aws.#Config, Dagger/Cue will automatically know that some others values inside this definition
// are inputs, especially secrets (AccessKey, secretKey). Due to the confidential nature of secrets, we won't declare default values to them
}
// AWS Cloudformation stdlib
cfnStack: cloudformation.#Stack & { // Assign an aws.#Cloudformation definition to a field named `cfnStack`
// This definition is the stdlib package to use in order to deploy AWS instances programmatically
config: awsConfig // As seen in the relay doc, 3 config fields have to be provided : `config.region`, `config.accessKey` and `config.secretKey`
// As their names contain a `.`, it means that the value `config` expects 3 fields `region`, `accessKey` and `secretKey`, included in a `aws.#Config` parent definition
stackName: cfnStackName // We assign to the `stackName` the `cfnStackName` declared below.
// `stackName` expects a string type. However, as a plan developer, we wanted to give the developer a choice : either a default random value, or an input
// The default random value *"stack-\(suffix.out)" uses the random.#String relay to generate a random value. We append it's result inside `"\(append_happening_here)"`
Finally! We now have a working template ready to be used to provision S3 infrastructures. Let's add the missing inputs (AWS credentials) and deploy it:
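A sketch of those last commands, assuming the `dagger input secret` subcommand is used to provide the credentials (the values below are placeholders):

```bash
dagger input secret awsConfig.accessKey yourAccessKeyId     # AWS access key ID
dagger input secret awsConfig.secretKey yourSecretAccessKey # AWS secret access key

dagger up # Deploy the plan; this time all required inputs are set
```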
In case of a failure, re-running the deployment with a more verbose log level gives more information for debugging.
The name of the provisioned S3 bucket lies in the `cfnStack.outputs.Name` output key, stripped of its `arn:aws:s3:::` prefix.
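For instance, a sketch of how to read that output back with `dagger query`:

```bash
# Prints the raw output value, an ARN of the form arn:aws:s3:::<bucket-name>
dagger query cfnStack.outputs.Name
```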
> With this provisioning infrastructure, your dev team will easily be able to instantiate AWS infrastructures: all they need to know is `dagger input list` and `dagger up`. Isn't that awesome? :-D
PS: This plan could be further extended with the AWS S3 example : it could not only provision an infrastructure but also easily deploy on it.
PS1: As it could make a nice first exercise for you, this won't be detailed here. However, we're interested in your imagination: let us know about your implementations :-)