Docs: fix dup navbar, pt 2

Signed-off-by: Solomon Hykes <solomon@dagger.io>
Solomon Hykes
2021-07-21 15:21:19 +00:00
committed by Solomon Hykes
parent c3139cbd35
commit 4a9a355156
10 changed files with 0 additions and 0 deletions

@@ -0,0 +1,124 @@
---
slug: /1003/get-started/
---
# Get started with Dagger
In this guide, you will learn the basics of Dagger by interacting with a pre-configured environment.
Then you will move on to creating your environment from scratch.
Our pre-configured environment deploys a simple [React](https://reactjs.org/)
application to a unique hosting environment created and managed by us, the Dagger team, for this tutorial.
This will allow you to deploy something "real" right away without configuring your infrastructure first.
In later guides, you will learn how to configure Dagger to deploy to your own infrastructure and, for advanced users,
how to share access to that infrastructure in the same way that we share access to ours now.
## Initial setup
### Install Dagger
First, make sure [you have installed Dagger on your local machine](/install).
### Setup example app
You will need a local copy of the [Dagger examples repository](https://github.com/dagger/examples).
NOTE: you may use the same local copy across all tutorials.
```shell
git clone https://github.com/dagger/examples
```
Make sure that all commands are run from the `todoapp` directory:
```shell
cd examples/todoapp
```
### Import the tutorial key
Dagger natively supports encrypted secrets: when a user inputs a value marked as secret
(for example, a password, API token, or ssh key) it is automatically encrypted with that user's key,
and no other user can access that value unless they are explicitly given access.
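For the curious, here is a minimal sketch (with a hypothetical `apiToken` field) of how a plan declares a secret input; the CUE syntax is covered in later guides:
```cue
package example

import (
    "alpha.dagger.io/dagger"
)

// A hypothetical API token: because it is declared as a secret,
// Dagger encrypts whatever value the user supplies for it
apiToken: dagger.#Secret & dagger.#Input
```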
In the interest of security, Dagger offers no way to _skip_ encryption for a secret value.
But this causes a dilemma for this tutorial: how do we give unrestricted, public access to our
(carefully sandboxed) infrastructure so that anyone can deploy to it?
To solve this dilemma, the examples repository includes the key used to encrypt this tutorial's secret inputs.
Import the key to your Dagger installation, and you're good to go:
```shell
./import-tutorial-key.sh
```
## First deployment
Now that your environment is set up, you are ready to deploy:
```shell
dagger up
```
That's it! You have just made your first deployment with Dagger.
The URL of your newly deployed app should be visible towards the end of the command output.
If you visit that URL, you should see your application live!
## Code, deploy, repeat
This environment is pre-configured to deploy from the `./todoapp` directory,
so you can make any change you want to that directory, then deploy it with `dagger up`.
You can even replace our example React code with any React application!
NOTE: you don't have to commit your changes to the git repository before deploying them.
## Under the hood
This example showed you how to deploy and develop an application that is already configured with Dagger. Now, let's learn a few concepts to help you understand how this was put together.
### The Environment
An Environment holds the entire deployment configuration.
You can list the existing environments from the `./todoapp` directory:
```shell
dagger list
```
You should see an environment named `s3`. You can have many environments within your app: for instance, one for `staging`, one for `dev`, and so on.
Each environment can have a different kind of deployment code. For example, a `dev` environment can deploy locally; a `staging` environment can deploy to a remote infrastructure, and so on.
### The plan
The plan is the deployment code that includes the logic to deploy the local application to an AWS S3 bucket. From the `todoapp` directory, you can list the code of the plan:
```shell
ls -l ./s3
```
Any code change to the plan will be applied during the next `dagger up`.
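As an illustration, here is a simplified, hypothetical sketch of what the beginning of such a plan might look like (the real `./s3` plan contains more than this):
```cue
package s3

import (
    "alpha.dagger.io/dagger"
)

// The directory to deploy, provided by the user as an input
src: dagger.#Artifact & dagger.#Input
```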
### The inputs
The plan can define one or several `inputs`. Inputs may be configuration values, artifacts, or encrypted secrets provided by the user. Here is how to list the current inputs:
```shell
dagger input list
```
The inputs are persisted inside the `.dagger` directory and pushed to your git repository. That's why this example application worked out of the box.
### The outputs
The plan defines one or several `outputs`. They can expose helpful information at the end of the deployment; that is how we read the deploy `url` earlier. Here is the command to list all outputs:
```shell
dagger output list
```
## What's next?
At this point, you have deployed your first application using Dagger and learned some Dagger commands. You are now ready to [learn more about how to program Dagger](/learn/102-dev).

@@ -0,0 +1,285 @@
---
slug: /1004/dev-first-env/
---
# Create your first Dagger environment
## Overview
In this guide, you will create your first Dagger environment from scratch,
and use it to deploy a React application to two locations in parallel:
a dedicated [Amazon S3](https://wikipedia.org/wiki/Amazon_S3) bucket, and a
[Netlify](https://en.wikipedia.org/wiki/Netlify) site.
### Anatomy of a Dagger environment
A Dagger environment contains all the code and data necessary to deliver a particular application in a specific way.
For example, the same application might be delivered to a production and staging environment, each with its own configuration.
An environment is made of 3 parts:
- A _plan_, authored by the environment's _developer_, using the [Cue](https://cuelang.org) language.
- _Inputs_, supplied by the environment's _user_ via the `dagger input` command and written to a particular file. Inputs may be configuration values, artifacts, or encrypted secrets.
- _Outputs_, computed by the Dagger engine via the `dagger up` command and recorded to a particular directory.
We will first develop our environment's _plan_, configure its initial inputs, then finally run it to verify that it works.
### Anatomy of a plan
A _plan_ specifies, in code, how to deliver a particular application in a specific way.
It is your environment's source code.
Unlike regular imperative programs, which specify a sequence of instructions to execute,
a Dagger plan is _declarative_: it lays out your application's supply chain as a graph
of interconnected nodes.
Each node in the graph represents a component of the supply chain, for example:
- Development tools: source control, CI, build systems, testing systems
- Hosting infrastructure: compute, storage, networking, databases, CDNs
- Software dependencies: operating systems, languages, libraries, frameworks, etc.
Each link in the graph represents a flow of data between nodes. For example:
- source code flows from a git repository to a build system
- system dependencies are combined in a docker image, then uploaded to a registry
- configuration files are generated then sent to a compute cluster or load balancer
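As a preview, here is a minimal sketch of such a graph in Cue: two nodes, and one link carrying the source code from the first to the second (the exact syntax is explained later in this guide):
```cue
package example

import (
    "alpha.dagger.io/dagger"
    "alpha.dagger.io/js/yarn"
)

// Node 1: the application source code, supplied by the user
src: dagger.#Artifact & dagger.#Input

// Node 2: a build system; the `source: src` line is the link between the nodes
app: yarn.#Package & {
    source: src
}
```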
### Introduction to Cue development
Dagger delivery plans are developed in Cue.
Cue is a powerful declarative language created by Marcel van Lohuizen, who co-created the Borg Configuration Language (BCL), the [language used to deploy all applications at Google](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/43438.pdf). Cue is a superset of JSON, with additional features to make declarative, data-driven programming as pleasant and productive as regular imperative programming.
If you are new to Cue development, don't worry: this tutorial will walk you through the basic
steps to get started, and give you resources to learn more.
In technical terms, our plan is a [Cue Package](https://cuelang.org/docs/concepts/packages/#packages). This tutorial will develop a new Cue package from scratch for our plan, but you can use any Cue package as a plan.
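As a quick taste of the language, here is a small standalone snippet (not Dagger-specific) showing the three Cue features we will rely on most: concrete values, type constraints, and defaults:
```cue
// Concrete values are plain JSON
name: "todoapp"

// Values can be typed and constrained
replicas: int & >=1

// `*` marks a default inside a disjunction; a user may override it
region: *"us-east-1" | string
```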
## Initial setup
### Install Cue
Although not strictly necessary, for an optimal development experience, we recommend
[installing a recent version of Cue](https://github.com/cuelang/cue/releases/).
### Prepare Cue learning resources
If you are new to Cue, we recommend keeping the following resources open in browser tabs:
- The unofficial but excellent [Cuetorials](https://cuetorials.com/overview/foundations/), to look up Cue concepts as they appear.
- The official [Cue interactive sandbox](https://cuelang.org/play) for easy experimentation.
### Setup example app
You will need a local copy of the [Dagger examples repository](https://github.com/dagger/examples).
NOTE: you may use the same local copy across all tutorials.
```shell
git clone https://github.com/dagger/examples
```
Make sure that all commands are run from the `todoapp` directory:
```shell
cd examples/todoapp
```
## Develop the plan
### Initialize a Cue module
Developing for Dagger takes place in a [Cue module](https://cuelang.org/docs/concepts/packages/#modules).
If you are familiar with Go, Cue modules are directly inspired by Go modules.
Otherwise, don't worry: a Cue module is simply a directory with one or more Cue packages in it, identified by a `cue.mod` directory at its root.
This guide will use the same directory as the root of the Dagger workspace and the Cue module, but you can create your Cue module anywhere inside the Dagger workspace.
```shell
cue mod init
```
### Create a Cue package
Now we start developing our Cue package at the root of our Cue module.
In this guide, we will split our package into multiple files, one per component. However, this is not a requirement: you can organize your package any way you want. The Cue evaluator merges all files from the same package, as long as they are in the same directory and start with the same `package` clause.
See the [Cue documentation](https://cuelang.org/docs/concepts/packages/#files-belonging-to-a-package) for more details.
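For example, here is a sketch (with made-up field names) of two files that the evaluator would merge into a single package:
```cue
// file: a.cue
package multibucket

greeting: "hello"

// file: b.cue -- same directory, same `package` clause, shown here as a comment:
//
//   package multibucket
//   audience: "world"
//
// The evaluated package is equivalent to {greeting: "hello", audience: "world"}
```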
We will call our package `multibucket` because it sounds badass and vaguely explains what it does.
But you can call your packages anything you want.
Let's create a new directory for our Cue package:
```shell
mkdir multibucket
```
### Component 1: app source code
The first component of our plan is the source code of our React application.
In Dagger terms, this component has two essential properties:
1. It is an _artifact_: something that can be represented as a directory.
2. It is an _input_: something that is provided by the end-user.
Let's write the corresponding Cue code to a new file in our package:
```cue title="todoapp/multibucket/source.cue"
package multibucket
import (
"alpha.dagger.io/dagger"
)
// Source code of the sample application
src: dagger.#Artifact & dagger.#Input
```
This code defines a component at the key `src` and specifies that it is both an artifact and an input.
### Component 2: yarn package
The second component of our plan is the Yarn package built from the app source code:
```cue title="todoapp/multibucket/yarn.cue"
package multibucket
import (
"alpha.dagger.io/js/yarn"
)
// Build the source code using Yarn
app: yarn.#Package & {
source: src
}
```
Let's break it down:
- `package multibucket`: this file is part of the multibucket package
- `import ( "alpha.dagger.io/js/yarn" )`: import a package from the [Dagger Universe](../reference/universe/README.md).
- `app: yarn.#Package`: apply the `#Package` definition at the key `app`
- `&`: also merge the following values at the same key...
- `{ source: src }`: set the key `app.source` to the value of `src`. This snippet of code connects our two components, forming the first link in our DAG
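If the `&` operator is new to you, here is a tiny standalone example of unification, outside of any Dagger context:
```cue
// `&` merges two values; conflicting fields would be an error
a: {name: "app"} & {version: "1.0"}

// a evaluates to {name: "app", version: "1.0"}
```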
### Component 3: dedicated S3 bucket
_FIXME_: this section is not yet available because the [Amazon S3 package](https://github.com/dagger/dagger/tree/main/stdlib/aws/s3) does [not yet support bucket creation](https://github.com/dagger/dagger/issues/623). We welcome external contributions :)
### Component 4: deploy to Netlify
The next component of our plan is the Netlify site to which the app will be deployed:
```cue title="todoapp/multibucket/netlify.cue"
package multibucket
import (
"alpha.dagger.io/netlify"
)
// Netlify site
site: "netlify": netlify.#Site & {
contents: app.build
}
```
This component is very similar to the previous one:
- We use the same package name as the other files
- We import another package from the [Dagger Universe](../reference/universe/README.md).
- `site: "netlify": netlify.#Site`: apply the `#Site` definition at the key `site.netlify`. Note the use of quotes to protect the key from a name conflict with the imported `netlify` package.
- `&`: also merge the following values at the same key...
- `{ contents: app.build }`: set the key `site.netlify.contents` to the value of `app.build`. This line connects our components 2 and 3, forming the second link in our DAG.
### Exploring a package documentation
But wait: how did we know what fields were available in `yarn.#Package` and `netlify.#Site`?
Answer: thanks to the `dagger doc` command, which prints the documentation of any package from [Dagger Universe](../reference/universe/README.md).
```shell
dagger doc alpha.dagger.io/netlify
dagger doc alpha.dagger.io/js/yarn
```
You can also browse the [Dagger Universe](../reference/universe/README.md) reference in the documentation.
## Setup the environment
### Create a new environment
Now that your Cue package is ready, let's create an environment to run it:
```shell
dagger new 'multibucket' -p ./multibucket
```
### Configure user inputs
You can inspect the list of inputs (both required and optional) using `dagger input list`:
```shell
dagger input list -e multibucket
# Input Value Set by user Description
# site.netlify.account.name *"" | string false Use this Netlify account name (also referred to as "team" in the Netlify docs)
# site.netlify.account.token dagger.#Secret false Netlify authentication token
# site.netlify.name string false Deploy to this Netlify site
# site.netlify.create *true | bool false Create the Netlify site if it doesn't exist?
# src dagger.#Artifact false Source code of the sample application
# app.cwd *"." | string false working directory to use
# app.writeEnvFile *"" | string false Write the contents of `environment` to this file, in the "envfile" format
# app.buildDir *"build" | string false Read build output from this directory (path must be relative to working directory)
# app.script *"build" | string false Run this yarn script
# app.args *[] | [] false Optional arguments for the script
```
All the values without default values (without `*`) have to be specified by the user. Here, required fields are:
- `site.netlify.account.token`, your access token
- `site.netlify.name`, name of the published website
- `src`, source code of the app
Please note the types of the user inputs: a string, a `dagger.#Secret`, and an artifact. Let's see how to provide them:
```shell
# As a string input is expected for `site.netlify.name`, we set a `text` input
dagger input text site.netlify.name <GLOBALLY-UNIQUE-NAME> -e multibucket
# As a secret input is expected for `site.netlify.account.token`, we set a `secret` input
dagger input secret site.netlify.account.token <PERSONAL-ACCESS-TOKEN> -e multibucket
# As an Artifact is expected for `src`, we set a `dir` input (dagger input list for alternatives)
dagger input dir src . -e multibucket
```
### Deploy
Now that everything is appropriately set, let's deploy on Netlify:
```shell
dagger up -e multibucket
```
### Using the environment
[This section is not yet written](https://github.com/dagger/dagger/blob/main/CONTRIBUTING.md)
## Share your environment
### Introduction to gitops
[This section is not yet written](https://github.com/dagger/dagger/blob/main/CONTRIBUTING.md)
### Review changes
[This section is not yet written](https://github.com/dagger/dagger/blob/main/CONTRIBUTING.md)
### Commit changes
[This section is not yet written](https://github.com/dagger/dagger/blob/main/CONTRIBUTING.md)

@@ -0,0 +1,11 @@
---
slug: /1005/custom-script/
---
# Integrate custom shell scripts
In this guide, you will learn how to incorporate your custom shell scripts into a Dagger environment. For example, run integration tests before deployment; call a custom processing step; or any other custom task which you have already automated and want to incorporate into your Dagger environment with minimal effort.
This section is not yet written. Help Dagger grow by suggesting content improvements.
[![github-contribute](https://user-images.githubusercontent.com/1186424/122426439-8cd5e380-cf90-11eb-944b-c75fadecaefe.png)](https://github.com/dagger/dagger/blob/main/CONTRIBUTING.md)

@@ -0,0 +1,115 @@
---
slug: /1006/google-cloud-run/
---
# Deploy to Google Cloud Run with Dagger
This tutorial illustrates how to use Dagger to build, push and deploy Docker images to Cloud Run.
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## Initialize a Dagger Workspace and Environment
### (optional) Setup example app
You will need the local copy of the [Dagger examples repository](https://github.com/dagger/examples) used in previous guides:
```shell
git clone https://github.com/dagger/examples
```
Make sure that all commands are run from the `todoapp` directory:
```shell
cd examples/todoapp
```
### (optional) Initialize a Cue module
This guide will use the same directory as the root of the Dagger workspace and the root of the Cue module, but you can create your Cue module anywhere inside the Dagger workspace.
```shell
cue mod init
```
### Organize your package
Let's create a new directory for our Cue package:
```shell
mkdir gcpcloudrun
```
### Create a basic plan
```cue title="todoapp/gcpcloudrun/source.cue"
package gcpcloudrun
import (
"alpha.dagger.io/dagger"
"alpha.dagger.io/docker"
"alpha.dagger.io/gcp"
"alpha.dagger.io/gcp/cloudrun"
"alpha.dagger.io/gcp/gcr"
)
// Source code of the sample application
src: dagger.#Artifact & dagger.#Input
// GCR full image name
imageRef: string & dagger.#Input
image: docker.#Build & {
source: src
}
gcpConfig: gcp.#Config
creds: gcr.#Credentials & {
config: gcpConfig
}
push: docker.#Push & {
target: imageRef
source: image
auth: {
username: creds.username
secret: creds.secret
}
}
deploy: cloudrun.#Service & {
config: gcpConfig
image: push.ref
}
```
## Set up the environment
### Create a new environment
Now that your Cue package is ready, let's create an environment to run it:
```shell
dagger new 'gcpcloudrun' -p ./gcpcloudrun
```
### Configure user inputs
```shell
dagger input dir src . -e gcpcloudrun
dagger input text deploy.name todoapp -e gcpcloudrun
dagger input text imageRef gcr.io/<your-project>/todoapp -e gcpcloudrun
dagger input text gcpConfig.region us-west2 -e gcpcloudrun
dagger input text gcpConfig.project <your-project> -e gcpcloudrun
dagger input secret gcpConfig.serviceKey -f ./gcp-sa-key.json -e gcpcloudrun
```
## Deploy
Now that everything is set correctly, let's deploy on Cloud Run:
```shell
dagger up -e gcpcloudrun
```

File diff suppressed because it is too large

@@ -0,0 +1,675 @@
---
slug: /1008/aws-cloudformation/
---
# Provision infrastructure with Dagger and AWS CloudFormation
In this guide, you will learn how to automatically [provision infrastructure](https://dzone.com/articles/infrastructure-provisioning-) on AWS by integrating [Amazon Cloudformation](https://aws.amazon.com/cloudformation/) in your Dagger environment.
We will start with something simple: provisioning a new bucket on [Amazon S3](https://en.wikipedia.org/wiki/Amazon_S3). But Cloudformation can provision almost any AWS resource, and Dagger can integrate with the full Cloudformation API.
## Prerequisites
### Reminder
#### Guidelines
The provisioning strategy detailed below follows S3 best practices. However, to stay accessible regardless of your current AWS experience, it leans heavily on the S3 and Cloudformation documentation.
#### Relays
The first thing to consider when developing a plan based on relays is to read their universe reference: it summarizes the expected inputs and their corresponding formats. [Here](/reference/universe/aws/cloudformation) is the Cloudformation one.
## Initialize a Dagger Workspace and Environment
### (optional) Setup example app
You will need the local copy of the [Dagger examples repository](https://github.com/dagger/examples) used in previous guides:
```shell
git clone https://github.com/dagger/examples
```
Make sure to run all commands from the `todoapp` directory:
```shell
cd examples/todoapp
```
### (optional) Initialize a Cue module
This guide will use the same directory as the root of the Dagger workspace and the root of the Cue module, but you can create your Cue module anywhere inside the Dagger workspace.
```shell
cue mod init
```
### Organize your package
Let's create a new directory for our Cue package:
```shell
mkdir cloudformation
```
## Create a basic plan
Let's implement the Cloudformation template and convert it to a Cue definition for further flexibility.
### Setup the template and the environment
#### Setup the template
The idea here is to follow best practices in [S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html) provisioning. Thankfully, the AWS documentation contains a working [Cloudformation template](https://docs.aws.amazon.com/fr_fr/AWSCloudFormation/latest/UserGuide/quickref-s3.html#scenario-s3-bucket-website) that fits 95% of our needs.
##### 1. Tweaking the template: output bucket name only
Create a file named `template.cue` and add the following configuration to it.
```cue title="todoapp/cloudformation/template.cue"
package cloudformation
// inlined s3 cloudformation template as a string
template: """
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"AccessControl": "PublicRead",
"WebsiteConfiguration": {
"IndexDocument": "index.html",
"ErrorDocument": "error.html"
}
},
"DeletionPolicy": "Retain"
},
"BucketPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"PolicyDocument": {
"Id": "MyPolicy",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadForGetBucketObjects",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3Bucket"
},
"/*"
]
]
}
}
]
},
"Bucket": {
"Ref": "S3Bucket"
}
}
}
},
"Outputs": {
"Name": {
"Value": {
"Fn::GetAtt": ["S3Bucket", "Arn"]
},
"Description": "Name S3 Bucket"
}
}
}
"""
```
##### 2. Cloudformation relay
As our plan relies on [Cloudformation's relay](/reference/universe/aws/cloudformation), let's dissect the expected inputs by gradually incorporating them into our plan.
```shell
dagger doc alpha.dagger.io/aws/cloudformation
# Inputs:
# config.region string AWS region
# config.accessKey dagger.#Secret AWS access key
# config.secretKey dagger.#Secret AWS secret key
# source string Source is the Cloudformation template (JSON/YAML…
# stackName string Stackname is the cloudformation stack
# parameters struct Stack parameters
# onFailure *"DO_NOTHING" | "ROLLBACK" | "DELETE" Behavior when failure to create/update the Stack
# timeout *10 | >=0 & int Maximum waiting time until stack creation/update…
# neverUpdate *false | true Never update the stack if already exists
```
###### 1. General insights
As seen in the documentation above, values starting with `*` are defaults. However, as plan developers, we may need to add default values to relay inputs that lack one: Cue gives us this flexibility.
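For example, here is a sketch (with an assumed default name) of how a plan could wrap a relay input that has no default, the same pattern this guide applies to `stackName` below:
```cue
package cloudformation

import "alpha.dagger.io/dagger"

// `stackName` has no default in the relay; the plan can supply one,
// while still letting the user override it with their own value
cfnStackName: *"my-default-stack" | string & dagger.#Input
```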
###### 2. The config value
The config values are all part of the `aws` relay. As you can see above, five of the required inputs lack default values (the `parameters` field is optional):
> - _config.region_
> - _config.accessKey_
> - _config.secretKey_
> - _source_
> - _stackName_
Let's implement the first step: use the `aws.#Config` relay and request its first inputs, the region to deploy to and the AWS credentials.
```cue title="todoapp/cloudformation/source.cue"
package cloudformation
import (
"alpha.dagger.io/aws"
)
// AWS account: credentials and region
awsConfig: aws.#Config
```
This defines:
- `awsConfig`: AWS CLI Configuration step using the package `alpha.dagger.io/aws`. It takes three user inputs: a `region`, an `accessKey`, and a `secretKey`
#### Setup the environment
##### 1. Create a new environment
Now that the Cue package is ready, let's create an environment to run it:
```shell
dagger new 'cloudformation' -p ./cloudformation
```
##### 2. Check plan
_Pro tip_: to check whether it worked or not, these three commands can help
```shell
dagger input list -e cloudformation # List our personal plan's inputs
# Input Value Set by user Description
# awsConfig.region string false AWS region
# awsConfig.accessKey dagger.#Secret false AWS access key
# awsConfig.secretKey dagger.#Secret false AWS secret key
dagger query -e cloudformation # Query values / inspect default values (Instrumental in case of conflict)
# {}
dagger up -e cloudformation # Try to run the plan. As expected, we encounter a failure because some user inputs haven't been set
# 4:11PM ERR system | required input is missing input=awsConfig.region
# 4:11PM ERR system | required input is missing input=awsConfig.accessKey
# 4:11PM ERR system | required input is missing input=awsConfig.secretKey
# 4:11PM FTL system | some required inputs are not set, please re-run with `--force` if you think it's a mistake missing=0s
```
#### Finish template setup
Now that we have the `config` definition properly set up, let's add the Cloudformation stack itself:
```cue title="todoapp/cloudformation/source.cue"
package cloudformation
import (
"alpha.dagger.io/aws"
"alpha.dagger.io/dagger"
"alpha.dagger.io/random"
"alpha.dagger.io/aws/cloudformation"
)
// AWS account: credentials and region
awsConfig: aws.#Config
// Create a random suffix
suffix: random.#String & {
seed: ""
}
// Query the Cloudformation stack name, or generate a default one with a random suffix to keep it unique
cfnStackName: *"stack-\(suffix.out)" | string & dagger.#Input
// AWS Cloudformation stdlib
cfnStack: cloudformation.#Stack & {
config: awsConfig
stackName: cfnStackName
source: template
}
```
This defines:
- `suffix`: random suffix leveraging the `random` relay. It doesn't have a seed because we don't care about predictability
- `cfnStackName`: Name of the stack, either a default value `stack-suffix` or user input
- `cfnStack`: Cloudformation relay with `AWS config`, `stackName` and `JSON template` as inputs
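One subtlety worth noting in `cfnStackName`: in Cue, `&` binds tighter than `|`, so the disjunction above is equivalent to the following (parentheses added for clarity):
```cue
// The default on the left, or a user-supplied string input on the right
cfnStackName: *"stack-\(suffix.out)" | (string & dagger.#Input)
```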
### Configure the environment
Before bringing up the deployment, we need to provide the `cfnStack` inputs declared in the configuration. Otherwise, Dagger will complain about missing inputs.
```shell
dagger up -e cloudformation
# 3:34PM ERR system | required input is missing input=awsConfig.region
# 3:34PM ERR system | required input is missing input=awsConfig.accessKey
# 3:34PM ERR system | required input is missing input=awsConfig.secretKey
# 3:34PM FTL system | some required inputs are not set, please re-run with `--force` if you think it's a mistake missing=0s
```
You can inspect the list of inputs (both required and optional) using `dagger input list`:
```shell
dagger input list -e cloudformation
# Input Value Set by user Description
# awsConfig.region string false AWS region
# awsConfig.accessKey dagger.#Secret false AWS access key
# awsConfig.secretKey dagger.#Secret false AWS secret key
# suffix.length *12 | number false length of the string
# cfnStack.onFailure *"DO_NOTHING" | "ROLLBACK" | "DELETE" false Behavior when failure to create/update the Stack
# cfnStack.timeout *10 | >=0 & int false Maximum waiting time until stack creation/update (in minutes)
# cfnStack.neverUpdate *false | true false Never update the stack if already exists
```
Let's provide the missing inputs:
```shell
dagger input text awsConfig.region us-east-2 -e cloudformation
dagger input secret awsConfig.accessKey yourAccessKey -e cloudformation
dagger input secret awsConfig.secretKey yourSecretKey -e cloudformation
```
### Deploying
Finally! We now have a working template ready to be used to provision S3 infrastructure. Let's deploy it:
<Tabs
defaultValue="nd"
values={[
{ label: 'Normal deploy', value: 'nd', },
{ label: 'Debug deploy', value: 'dd', },
]
}>
<TabItem value="nd">
```shell
dagger up -e cloudformation
#2:22PM INF suffix.out | computing
#2:22PM INF suffix.out | completed duration=200ms
#2:22PM INF cfnStack.outputs | computing
#2:22PM INF cfnStack.outputs | #15 1.304 {
#2:22PM INF cfnStack.outputs | #15 1.304 "Parameters": []
#2:22PM INF cfnStack.outputs | #15 1.304 }
#2:22PM INF cfnStack.outputs | #15 2.948 {
#2:22PM INF cfnStack.outputs | #15 2.948 "StackId": "arn:aws:cloudformation:us-east-2:817126022176:stack/stack-emktqcfwksng/207d29a0-cd0b-11eb-aafd-0a6bae5481b4"
#2:22PM INF cfnStack.outputs | #15 2.948 }
#2:22PM INF cfnStack.outputs | completed duration=35s
dagger output list -e cloudformation
# Output Value Description
# suffix.out "emktqcfwksng" generated random string
# cfnStack.outputs.Name "arn:aws:s3:::stack-emktqcfwksng-s3bucket-9eiowjs1jab4" -
```
</TabItem>
<TabItem value="dd">
```shell
dagger up -l debug -e cloudformation
#Output:
# 3:50PM DBG system | detected buildkit version version=v0.8.3
# 3:50PM DBG system | spawning buildkit job localdirs={
# "/tmp/infra-provisioning/.dagger/env/infra/plan": "/tmp/infra-provisioning/.dagger/env/infra/plan"
# } attrs=null
# 3:50PM DBG system | loading configuration
# ... Lots of logs ... :-D
# Output Value Description
# suffix.out "abnyiemsoqbm" generated random string
# cfnStack.outputs.Name "arn:aws:s3:::stack-abnyiemsoqbm-s3bucket-9eiowjs1jab4" -
dagger output list -e cloudformation
# Output Value Description
# suffix.out "abnyiemsoqbm" generated random string
# cfnStack.outputs.Name "arn:aws:s3:::stack-abnyiemsoqbm-s3bucket-9eiowjs1jab4" -
```
</TabItem>
</Tabs>
The deployment went well!
In case of a failure, the `Debug deploy` tab shows the command to get more information.
The name of the provisioned S3 instance lies in the `cfnStack.outputs.Name` output key, without the `arn:aws:s3:::` prefix.
> With this provisioning infrastructure in place, your dev team can easily instantiate AWS infrastructure: all they need to know is `dagger input list -e cloudformation` and `dagger up -e cloudformation`. Isn't that awesome? :-D
## Cue Cloudformation template
This section will convert the inlined JSON template to CUE to take advantage of the language features.
To do so quickly, we will first transform the template from JSON format to Cue format, then optimize it to leverage Cue's strengths.
### 1. Create convert.cue
We will create a new `convert.cue` file to perform the conversion:
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
<Tabs
defaultValue="sv"
values={[
{ label: 'JSON Generic Code', value: 'sv', },
{ label: 'YAML Generic Code', value: 'yv', },
]
}>
<TabItem value="sv">
```cue title="todoapp/cloudformation/convert.cue"
package cloudformation
import "encoding/json"
s3Template: json.Unmarshal(template)
```
</TabItem>
<TabItem value="yv">
```cue title="todoapp/cloudformation/convert.cue"
package cloudformation
import "encoding/yaml"
s3Template: yaml.Unmarshal(template)
```
</TabItem>
</Tabs>
This defines:
- `s3Template`: contains the unmarshalled template.
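To make what `json.Unmarshal` does concrete, here is a tiny standalone example (hypothetical data):
```cue
package example

import "encoding/json"

// Unmarshal parses a JSON string into a structured Cue value
data: json.Unmarshal("{\"a\": 1}")

// data is now equivalent to the struct {a: 1}
```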
For Dagger to evaluate only the template and the conversion code, temporarily move the rest of the plan out of the package:
```shell
mv cloudformation/source.cue ~/tmp/
```
### 2. Retrieve the Unmarshalled JSON
Then, still in the same folder, query the `s3Template` value to retrieve the unmarshalled result:
```shell
dagger query s3Template -e cloudformation
# {
# "AWSTemplateFormatVersion": "2010-09-09",
# "Outputs": {
# "Name": {
# "Description": "Name S3 Bucket",
# "Value": {
# "Fn::GetAtt": [
# "S3Bucket",
# "Arn"
# ...
```
The commented output above is the Cue version of the JSON template. Copy it for the next step.
### 3. Remove convert.cue
```shell
rm cloudformation/convert.cue
```
### 4. Store the output
Open `cloudformation/template.cue` and replace its contents with the copied Cue definition of the JSON, wrapped as shown below:
```cue title="todoapp/cloudformation/template.cue"
// Add this line, to make it part of the cloudformation package
package cloudformation
import "encoding/json"
// Wrap the Cue exported in the previous step inside the `s3` value
s3: {
"AWSTemplateFormatVersion": "2010-09-09",
"Outputs": {
"Name": {
"Description": "Name S3 Bucket",
"Value": {
"Fn::GetAtt": [
"S3Bucket",
"Arn"
]
}
}
},
"Resources": {
"BucketPolicy": {
"Properties": {
"Bucket": {
"Ref": "S3Bucket"
},
"PolicyDocument": {
"Id": "MyPolicy",
"Statement": [
{
"Action": "s3:GetObject",
"Effect": "Allow",
"Principal": "*",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3Bucket"
},
"/*"
]
]
},
"Sid": "PublicReadForGetBucketObjects"
}
],
"Version": "2012-10-17"
}
},
"Type": "AWS::S3::BucketPolicy"
},
"S3Bucket": {
"DeletionPolicy": "Retain",
"Properties": {
"AccessControl": "PublicRead",
"WebsiteConfiguration": {
"ErrorDocument": "error.html",
"IndexDocument": "index.html"
}
},
"Type": "AWS::S3::Bucket"
}
}
}
// Template contains the marshalled value of the s3 template
template: json.Marshal(s3)
```
We're using the built-in `json.Marshal` function to convert CUE back to JSON, so Cloudformation still receives the same template.
You can inspect the configuration using `dagger query -e cloudformation` to verify it produces the same manifest:
```shell
dagger query template -f text -e cloudformation
```
Now that the template is defined in CUE, we can use the language to add more flexibility to our template.
Let's define a re-usable `#Deployment` definition in `todoapp/cloudformation/deployment.cue`:
```cue title="todoapp/cloudformation/deployment.cue"
package cloudformation
#Deployment: {
// Bucket's output description
description: string
// index file
indexDocument: *"index.html" | string
// error file
errorDocument: *"error.html" | string
// Bucket policy version
version: *"2012-10-17" | string
// Retain as default deletion policy. Delete is also accepted but requires the s3 bucket to be empty
deletionPolicy: *"Retain" | "Delete"
// Canned access control list (ACL) that grants predefined permissions to the bucket
accessControl: *"PublicRead" | "Private" | "PublicReadWrite" | "AuthenticatedRead" | "LogDeliveryWrite" | "BucketOwnerRead" | "BucketOwnerFullControl" | "AwsExecRead"
// Modified copy of s3 value in `todoapp/cloudformation/template.cue`
template: {
"AWSTemplateFormatVersion": "2010-09-09",
"Outputs": {
"Name": {
"Description": description,
"Value": {
"Fn::GetAtt": [
"S3Bucket",
"Arn"
]
}
}
},
"Resources": {
"BucketPolicy": {
"Properties": {
"Bucket": {
"Ref": "S3Bucket"
},
"PolicyDocument": {
"Id": "MyPolicy",
"Statement": [
{
"Action": "s3:GetObject",
"Effect": "Allow",
"Principal": "*",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3Bucket"
},
"/*"
]
]
},
"Sid": "PublicReadForGetBucketObjects"
}
],
"Version": version
}
},
"Type": "AWS::S3::BucketPolicy"
},
"S3Bucket": {
"DeletionPolicy": deletionPolicy,
"Properties": {
"AccessControl": "PublicRead",
"WebsiteConfiguration": {
"ErrorDocument": errorDocument,
"IndexDocument": indexDocument
}
},
"Type": "AWS::S3::Bucket"
}
}
}
}
```
`template.cue` can be rewritten as follows:
```cue title="todoapp/cloudformation/template.cue"
package cloudformation
import "encoding/json"
s3: #Deployment & {
description: "Name S3 Bucket"
}
// Template contains the marshalled value of the s3 template
template: json.Marshal(s3.template)
```
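Because every field of `#Deployment` except `description` carries a default, `s3` stays concise; any default can still be overridden at unification time. Here is a hypothetical sketch:
```cue
package cloudformation

// Hypothetical variant: an ephemeral bucket whose resources may be deleted
s3Ephemeral: #Deployment & {
    description:    "Ephemeral S3 Bucket"
    deletionPolicy: "Delete" // overrides the *"Retain" default
}
```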
### 5. Verify the template
Double-checks at the template level can be done with manual uploads to Cloudformation's web interface, or by executing the command below locally:
```shell
tmpfile=$(mktemp ./tmp.XXXXXX) && dagger query template -f text -e cloudformation > "$tmpfile" && aws cloudformation validate-template --template-body file://"$tmpfile" ; rm "$tmpfile"
```
Let's make sure it yields the same result:
```shell
dagger query template -f text -e cloudformation
# {
# "description": "Name S3 Bucket",
# "indexDocument": "index.html",
# "errorDocument": "error.html",
# "version": "2012-10-17",
# "deletionPolicy": "Retain",
# "accessControl": "PublicRead",
# "template": {
# "AWSTemplateFormatVersion": "2010-09-09",
# "Outputs": {
# "Name": {
# "Description": "Name S3 Bucket",
# "Value": {
```
You need to move `source.cue` back for Dagger to instantiate a bucket:
```shell
mv ~/tmp/source.cue cloudformation/source.cue
```
And we can now deploy it:
```shell
dagger up -e cloudformation
#2:22PM INF suffix.out | computing
#2:22PM INF suffix.out | completed duration=200ms
#2:22PM INF cfnStack.outputs | computing
#2:22PM INF cfnStack.outputs | #15 1.304 {
#2:22PM INF cfnStack.outputs | #15 1.304 "Parameters": []
#2:22PM INF cfnStack.outputs | #15 1.304 }
#2:22PM INF cfnStack.outputs | #15 2.948 {
#2:22PM INF cfnStack.outputs | #15 2.948 "StackId": "arn:aws:cloudformation:us-east-2:817126022176:stack/stack-emktqcfwksng/207d29a0-cd0b-11eb-aafd-0a6bae5481b4"
#2:22PM INF cfnStack.outputs | #15 2.948 }
#2:22PM INF cfnStack.outputs | completed duration=35s
```
Name of the deployed bucket:
```shell
dagger output list -e cloudformation
# Output Value Description
# suffix.out "ucwcecwwshdl" generated random string
# cfnStack.outputs.Name "arn:aws:s3:::stack-ucwcecwwshdl-s3bucket-gaqmj8rzsl08" -
```
The name of the provisioned S3 instance lies in the `cfnStack.outputs.Name` output key, without the `arn:aws:s3:::` prefix.
PS: This plan could be further extended with the AWS S3 example: it could both provision the infrastructure and deploy the app to it.
PPS: As this makes an excellent first exercise, we won't detail it here. However, we're interested in your imagination: let us know your implementations :-)

@@ -0,0 +1,51 @@
---
slug: /1009/github-actions/
---
# Integrate Dagger with Github Actions
This tutorial illustrates how to use Github Actions and Dagger to build, push and deploy Docker images to Cloud Run.
## Prerequisites
We assume that you've finished our 106-cloudrun tutorial, as this one continues right after it.
## Setup new Github repo
Push the existing `examples/todoapp` directory to your new Github repo (public or private). It should contain all the code
from `https://github.com/dagger/examples/tree/main/todoapp`, plus the `gcpcloudrun` and `.dagger/env/gcpcloudrun/` directories.
### Add Github Actions Secret
Dagger encrypts all input secrets using your key stored at `~/.config/dagger/keys.txt`. Copy the entire line starting
with `AGE-SECRET-KEY-` and save it to a Github secret named `DAGGER_AGE_KEY`. If you don't know how to create
secrets on Github, take a look at [this tutorial](https://docs.github.com/en/actions/reference/encrypted-secrets).
## Create a Github Actions Workflow
Create `.github/workflows/gcpcloudrun.yml` file and paste the following code into it:
```yaml title=".github/workflows/gcpcloudrun.yml"
name: CloudRun
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Dagger
uses: dagger/dagger-action@v1
with:
age-key: ${{ secrets.DAGGER_AGE_KEY }}
args: up -e gcpcloudrun
```
## Run
On any push to the `main` branch, this workflow will run and deploy `todoapp` to your GCP Cloud Run instance.

@@ -0,0 +1,165 @@
---
slug: /1010/dev-cue-package/
---
# Develop a new CUE package for Dagger
This tutorial illustrates how to create new packages, distribute them manually among your applications, and contribute to
the Dagger stdlib packages.
## Creating your own package
### Initializing workspace
Create an empty directory for your new Dagger workspace:
```shell
mkdir workspace
cd workspace
```
As described in the previous tutorials, initialize your Dagger workspace:
```shell
dagger init
```
That will create 2 directories: `.dagger` and `cue.mod` where our package will reside:
```shell
.
├── cue.mod
│ ├── module.cue
│ ├── pkg
│ └── usr
├── .dagger
│ └── env
```
### Writing the package
Now that you've initialized your workspace, it's time to write a simple package. A package name usually starts with a
domain name (as in Go), followed by a descriptive name. In this example, we reuse the Cloud Run example and create a
package from it.
```shell
mkdir -p cue.mod/pkg/github.com/tjovicic/gcpcloudrun
```
Let's write the package logic. It is basically what we've seen in the 106-cloudrun example:
```shell
touch cue.mod/pkg/github.com/tjovicic/gcpcloudrun/source.cue
```
```cue title="cue.mod/pkg/github.com/tjovicic/gcpcloudrun/source.cue"
package gcpcloudrun
import (
"alpha.dagger.io/dagger"
"alpha.dagger.io/docker"
"alpha.dagger.io/gcp"
"alpha.dagger.io/gcp/cloudrun"
"alpha.dagger.io/gcp/gcr"
)
#Run: {
// Source code of the sample application
src: dagger.#Artifact & dagger.#Input
// GCR full image name
imageRef: string & dagger.#Input
image: docker.#Build & {
source: src
}
gcpConfig: gcp.#Config
creds: gcr.#Credentials & {
config: gcpConfig
}
push: docker.#Push & {
target: imageRef
source: image
auth: {
username: creds.username
secret: creds.secret
}
}
deploy: cloudrun.#Service & {
config: gcpConfig
image: push.ref
}
}
```
### Running the package
Now that you've successfully created a package, let's run it in a new environment. Create a new test package using
our reusable `gcpcloudrun`:
```shell
mkdir test
cat > test/source.cue << EOF
package test
import (
"github.com/tjovicic/gcpcloudrun"
)
run: gcpcloudrun.#Run
EOF
dagger new staging -p ./test
```
Run it:
```shell
dagger up -e staging
```
You should see a familiar output:
```shell
9:32AM ERR system | required input is missing input=run.src
9:32AM ERR system | required input is missing input=run.imageRef
9:32AM ERR system | required input is missing input=run.gcpConfig.region
9:32AM ERR system | required input is missing input=run.gcpConfig.project
9:32AM ERR system | required input is missing input=run.gcpConfig.serviceKey
9:32AM ERR system | required input is missing input=run.deploy.name
9:32AM FTL system | some required inputs are not set, please re-run with `--force` if you think it's a mistake missing=0s
```
## Manually distributing packages
You've probably guessed that this package isn't tied to just your workspace. You can easily copy/paste it into any number
of different workspaces and use it as we've shown above.
```shell
mkdir -p /my-new-workspace/cue.mod/pkg/github.com/tjovicic/gcpcloudrun
cp ./cue.mod/pkg/github.com/tjovicic/gcpcloudrun/source.cue /my-new-workspace/cue.mod/pkg/github.com/tjovicic/gcpcloudrun
```
## Contributing to Dagger stdlib
Our [stdlib](https://github.com/dagger/dagger/tree/main/stdlib) has many useful packages that you can use.
You've probably seen it when you've initialized your workspace:
```shell
.
├── cue.mod
│ ├── module.cue
│ ├── pkg
│ │ ├── alpha.dagger.io
│ │ └── .gitignore
│ └── usr
```
We are still a small community and are constantly looking for new contributors who will work with us to improve this
amazing project. If you feel like we are missing a package or want to improve an existing one, please start with our
[contributing docs](https://github.com/dagger/dagger/blob/main/CONTRIBUTING.md) and open a PR.