Co-authored-by: Vasek - Tom C. <tom.chauveau@epitech.eu>
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
Signed-off-by: Sam Alba <samalba@users.noreply.github.com>
The Docker daemon that I am using runs remotely, and I connect to
it via a Tailscale tunnel, which means that my `DOCKER_HOST` is set to
`tcp://100.113.182.91:2375`.
This change makes Dagger work with my setup. It has been running well for
me for a few weeks now; this started as https://github.com/thechangelog/changelog.com/pull/395
I think there is an opportunity to add support for `ssh://` & `file://`,
but I am keeping this first addition small on purpose.
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
Signed-off-by: Sam Alba <samalba@users.noreply.github.com>
This change helps the transition between `dagger input` and `#Plan.context`.
In summary, the codebase now relies on a *context* for execution, with mappings to *IDs*.
In the future, *context* will come from a `#Plan.context`.
In the meantime, a bridge converts `dagger input` to a plan context. This allows both *old* and *new* style configurations to co-exist with the same underlying engine.
- Implement `plancontext`. Context holds the execution context for a plan. Currently this includes the platform, local directories, secrets and services (e.g. unix/npipe).
- Contextual data can be registered at any point. In the future, this will be done by `#Plan.context`
- Migrated the `dagger input` codebase to register inputs in a `plancontext`
- Migrated low-level types/operations to the *Context ID* pattern.
- `dagger.#Stream` now only includes an `id` (instead of `unix` path)
- `dagger.#Secret` still includes only an `id`, but now it's based on `plancontext`
- `op.#Local` now only includes an `id` (instead of `path`, `include`, `exclude`); see the sketch below
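To make the *Context ID* pattern concrete, here is a minimal, illustrative sketch of the migrated shapes (simplified on purpose; the real definitions live in the engine's CUE packages and may differ):
```cue
package dagger

// Illustrative only: after the migration, values no longer embed host
// details (unix socket paths, local directory paths). Instead they carry
// an opaque ID that the engine resolves against the plancontext.
#Stream: {
    id: string // resolved to a unix/npipe service registered in the context
}

#Secret: {
    id: string // resolved to a secret registered in the context
}

// op.#Local follows the same pattern: `path`, `include` and `exclude`
// are replaced by an `id` pointing at a registered local directory.
```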
Signed-off-by: Andrea Luzzardi <aluzzardi@gmail.com>
This adds support for loading artifacts (e.g. `docker.#Build`,
`os.#Container`, ...) into any arbitrary Docker engine (through a
`dagger.#Stream` for UNIX sockets, or SSH for a remote engine)
Implementation:
- Add `op.#SaveImage`, which serializes an artifact to an arbitrary path
(docker tarball format)
- Add `docker.#Load`, which uses `op.#SaveImage` to serialize to disk and
executes `docker load` to load it back
Caveats: because we're doing this in userspace rather than letting
dagger itself load the image, the performance is pretty bad.
The buildkit API is meant for streaming (get a stream of a docker image,
pipe it into `docker load`). Because we're in userspace, we have to load
the entire docker image into memory, then serialize it in a single
`WriteFile` LLB operation.
Example:
```cue
package main

import (
    "alpha.dagger.io/dagger"
    "alpha.dagger.io/docker"
)

source:       dagger.#Input & dagger.#Artifact
dockersocket: dagger.#Input & dagger.#Stream

build: docker.#Build & {
    "source": source
}

load: docker.#Load & {
    source: build
    tag:    "testimage"
    socket: dockersocket
}
```
Signed-off-by: Andrea Luzzardi <aluzzardi@gmail.com>
I found an issue during test execution: there were orphan containers.
It's because `#App` doesn't give a way to specify the compose project
name; by default it's the directory where you launch your app, but in our
definition it will always be `source`.
The problem is that if we launch two different docker-compose applications
on the same server, the project name will be `source` for both, and it
will create orphan problems on cleanup (by `docker-compose down`).
This case is exactly what we do in tests, so I've added the field `name`
to specify the project name and avoid that issue.
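Here is a hedged sketch of how the new field might be used. The import path and the surrounding fields are assumptions for illustration; check the actual package for the real schema:
```cue
package main

import (
    "alpha.dagger.io/dagger"
    // Assumed import path for the compose package; adjust to the real one.
    compose "alpha.dagger.io/docker/compose"
)

source: dagger.#Input & dagger.#Artifact

app: compose.#App & {
    // The new field: an explicit project name, so that two apps deployed
    // on the same server no longer both default to "source" and trample
    // each other on `docker-compose down`.
    name:     "myproject"
    "source": source
}
```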
Signed-off-by: Tom Chauveau <tom.chauveau@epitech.eu>
Add some features to `docker.#Command` (sketched below):
- Copy artifacts into the container
- Write files in the container
- Log in to registries
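A hedged sketch of how these features might look in use. The field names (`copy`, `files`, `registries`) and their shapes are assumptions for illustration; check the actual `docker.#Command` definition for the real schema:
```cue
package main

import (
    "alpha.dagger.io/dagger"
    "alpha.dagger.io/docker"
)

source:       dagger.#Input & dagger.#Artifact
dockersocket: dagger.#Input & dagger.#Stream
registrypass: dagger.#Input & dagger.#Secret

run: docker.#Command & {
    socket:  dockersocket
    command: "ls -la /src && cat /hello.txt"
    // Copy an artifact into the container (field name is an assumption)
    copy: "/src": from: source
    // Write a file in the container (field name is an assumption)
    files: "/hello.txt": "hello from dagger"
    // Log in to a registry before running (shape is an assumption)
    registries: [{
        target:   "registry.example.com"
        username: "user"
        secret:   registrypass
    }]
}
```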
Signed-off-by: Tom Chauveau <tom.chauveau@epitech.eu>