a bite-size introduction to bazel

building, testing, and publishing containers with rules_docker

Unless you’re already familiar with Java, C++, or mobile development, the official Bazel tutorials aren’t going to be very useful. Beyond that, the official docs throw you into the deep end and leave you to sort out your own mental model of how this pile of new abstractions fits together.

This example makes a basic change to an existing open source project whose container is hosted on Docker Hub, tests the result using GoogleContainerTools/container-structure-test, and publishes the image to Docker Hub under my own repository.

Example source:

digital-plumbers-union/containers@60920ed263485c5da06a019dff3936b714d8e957

setup

The official Bazel installation docs are fine, but since you are a progressive-minded individual, you will want to use Bazelisk. I recommend following their advice and either aliasing it to bazel or installing it to /usr/local/bin/bazel.

Configure it for our repository:

echo "2.2.0" > .bazelversion

Don’t commit Bazel output:

echo 'bazel-*' > .gitignore

Create an empty BUILD.bazel in the workspace root; we don’t need to define anything there yet.
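For example:

touch BUILD.bazel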

wire up dependencies in WORKSPACE

We will need:

  • rules_docker to build, test, and push the images.
  • rules_pkg to create tars of the files I want to add to the containers at specific paths.
  • Base container images for both ARM and AMD.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# setup rules_docker

http_archive(
name = "io_bazel_rules_docker",
sha256 = "dc97fccceacd4c6be14e800b2a00693d5e8d07f69ee187babfd04a80a9f8e250",
strip_prefix = "rules_docker-0.14.1",
urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.14.1/rules_docker-v0.14.1.tar.gz"],
)

load(
"@io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
)

container_repositories()

# This is NOT needed when going through the language lang_image
# "repositories" function(s).
load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")

container_deps()

# load container_pull so we can pull in base
# images we depend on
load(
"@io_bazel_rules_docker//container:container.bzl",
"container_pull",
)

# setup rules_pkg
load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

# pull our containers
container_pull(
name = "nfs_server_arm_base",
digest = "sha256:fa1f27cefb54f26df59f4fb8efce02ab0e45f1fe34cfb818dadbf8ddac6b2bc3",
registry = "index.docker.io",
repository = "itsthenetwork/nfs-server-alpine",
)

container_pull(
name = "nfs_server_amd_base",
digest = "sha256:7fa99ae65c23c5af87dd4300e543a86b119ed15ba61422444207efc7abd0ba20",
registry = "index.docker.io",
repository = "itsthenetwork/nfs-server-alpine",
)

extend itsthenetwork/nfs-server-alpine to enable crossmnt

I want to make a simple change by adding crossmnt to the exports file that configures the mounts that the NFS server will expose:

{{SHARED_DIRECTORY}} {{PERMITTED}}({{READ_ONLY}},fsid=0,{{SYNC}},no_subtree_check,no_auth_nlm,insecure,no_root_squash,crossmnt)

The modified exports file lives in a new containers/ package:

mkdir containers/
touch containers/BUILD

I need to build these customized images for both ARM and AMD so that I can run the servers on a mixed-arch k3s cluster without too much fuss. In containers/BUILD, I define the architectures as a list I can iterate over when defining targets:

architectures = [
    "amd",
    "arm",
]

Before using container_image, I need to package up the modified exports file so that when it is extracted into the container it lands at the right path with the right permissions. pkg_tar does this:

pkg_tar(
    name = "exports-file",
    srcs = ["exports"],
    extension = "tar.gz",
    mode = "644",
    # we want the final resting place to be /etc/exports
    package_dir = "/etc",
    # have to set strip_prefix = "." because of this bug:
    # https://github.com/bazelbuild/rules_pkg/issues/82
    strip_prefix = ".",
)
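To sanity-check the layout before it gets baked into an image, you can build the tar and list its contents (the output path below assumes pkg_tar’s default name-plus-extension naming under bazel-bin/containers):

bazel build //containers:exports-file
tar -tzf bazel-bin/containers/exports-file.tar.gz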

Now I can iterate over the list declared above to create targets for building the containers (via container_image):

[container_image(
    name = a,
    base = "@nfs_server_" + a + "_base//image",
    tars = [":exports-file"],
) for a in architectures]

When containers/BUILD is evaluated, I get two targets, //containers:arm and //containers:amd, and each uses the correct base image because the base label is computed from the architecture name.
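The comprehension is effectively shorthand for writing both rules out by hand:

# what the list comprehension above expands to (illustrative)
container_image(
    name = "amd",
    base = "@nfs_server_amd_base//image",
    tars = [":exports-file"],
)

container_image(
    name = "arm",
    base = "@nfs_server_arm_base//image",
    tars = [":exports-file"],
)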

testing via container_test

[container_test(
    name = a + "_test",
    configs = [":tests.yaml"],
    # previously created targets were simply the
    # architecture name, so we can now reference them
    # as we would a non-generated target
    image = ":" + a,
) for a in architectures]

The tests themselves are simple container structure tests and not particularly relevant:

##
# See https://github.com/GoogleContainerTools/container-structure-test
# for schema definition
##
schemaVersion: 2.0.0

fileExistenceTests:
- name: 'exports-file-exists'
  path: '/etc/exports'
  shouldExist: true
  permissions: '-rw-r--r--'

fileContentTests:
- name: 'exports-file-contains-crossmnt'
  path: '/etc/exports'
  expectedContents: ['.*crossmnt.*']
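Running them is an ordinary bazel test invocation, using the target names generated above:

bazel test //containers:amd_test //containers:arm_test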

publishing them individually via container_push

[container_push(
    name = "push_" + a + "_container",
    format = "Docker",
    image = ":" + a,
    registry = "index.docker.io",
    repository = "shimmerjs/nfs-alpine-server",
    tag = a,
) for a in architectures]

Now I can bazel query //... and see:

//nfs-server-alpine:push_arm_container
//nfs-server-alpine:push_amd_container
//nfs-server-alpine:arm_test
//nfs-server-alpine:arm_test.image
//nfs-server-alpine:arm
//nfs-server-alpine:amd_test
//nfs-server-alpine:amd_test.image
//nfs-server-alpine:amd
//nfs-server-alpine:exports-file
Loading: 1 packages loaded

At this point I have accomplished everything I set out to do, functionally.

eat a snickers

Still hungry?

setting up a single push target for both containers

I use container_bundle to create a single target for both images:

container_bundle(
    name = "bundle",
    images = {image_name(a): ":" + a for a in architectures},
)
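image_name comes from a small image-name.bzl in the same package (loaded in the full BUILD file below). The real file isn’t reproduced in this post; a minimal sketch would simply map an architecture to the fully qualified tag used as the bundle key, with the registry and repository mirroring the container_push targets above:

# image-name.bzl -- hypothetical sketch, not the repository's actual helper
def image_name(arch):
    # e.g. "index.docker.io/shimmerjs/nfs-alpine-server:arm"
    return "index.docker.io/shimmerjs/nfs-alpine-server:" + arch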

There is an alternative container_push implementation at @io_bazel_rules_docker//contrib:push-all.bzl, and since it has the same name, I alias it to push_all when I load it:

load("@io_bazel_rules_docker//contrib:push-all.bzl", push_all = "container_push")

# push our bundle of containers
push_all(
name = "push_images",
bundle = ":bundle",
format = "Docker",
)
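Pushing both images is now a single command:

bazel run //containers:push_images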

Now my entire containers/BUILD file is:

load("@rules_pkg//:pkg.bzl", "pkg_tar")
load("@io_bazel_rules_docker//container:container.bzl", "container_bundle", "container_image", "container_push")
load("@io_bazel_rules_docker//contrib:push-all.bzl", push_all = "container_push")
load("@io_bazel_rules_docker//contrib:test.bzl", "container_test")
load(":image-name.bzl", "image_name")

architectures = [
"amd",
"arm",
]

# create tar file out of our exports file so that it is placed
# in the correct location in our container
pkg_tar(
name = "exports-file",
srcs = ["exports"],
extension = "tar.gz",
mode = "644",
package_dir = "/etc",
strip_prefix = ".",
)

# define image build, test, and publish targets for both architectures using
# list comprehensions
[container_image(
name = a,
base = "@nfs_server_" + a + "_base//image",
tars = [":exports-file"],
) for a in architectures]

[container_test(
name = a + "_test",
configs = [":tests.yaml"],
image = ":" + a,
) for a in architectures]

[container_push(
name = "push_" + a + "_container",
format = "Docker",
image = ":" + a,
registry = "index.docker.io",
repository = "shimmerjs/nfs-alpine-server",
tag = a,
) for a in architectures]

# create a container bundle containing both architectures
container_bundle(
name = "bundle",
images = {image_name(a): ":" + a for a in architectures},
)

# push our bundle of containers
push_all(
name = "push_images",
bundle = ":bundle",
format = "Docker",
)

adding buildifier to lint and reformat your Bazel files

Follow the stupidly long chain of dependencies for buildifier (Go, Gazelle, and protobuf).

When done, my WORKSPACE was:

workspace(name = "containers")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

#########################################
# BETTY DOCKER
#########################################

http_archive(
    name = "io_bazel_rules_docker",
    sha256 = "dc97fccceacd4c6be14e800b2a00693d5e8d07f69ee187babfd04a80a9f8e250",
    strip_prefix = "rules_docker-0.14.1",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.14.1/rules_docker-v0.14.1.tar.gz"],
)

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

# This is NOT needed when going through the language lang_image
# "repositories" function(s).
load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")

container_deps()

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_pull",
)

#########################################
# NOLANG
#########################################

http_archive(
    name = "io_bazel_rules_go",
    sha256 = "e6a6c016b0663e06fa5fccf1cd8152eab8aa8180c583ec20c872f4f9953a7ac5",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.22.1/rules_go-v0.22.1.tar.gz",
        "https://github.com/bazelbuild/rules_go/releases/download/v0.22.1/rules_go-v0.22.1.tar.gz",
    ],
)

load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")

go_rules_dependencies()

go_register_toolchains()

# gazelle generates BUILD files for go/protobuf
http_archive(
    name = "bazel_gazelle",
    sha256 = "be9296bfd64882e3c08e3283c58fcb461fa6dd3c171764fcc4cf322f60615a9b",
    urls = [
        "https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/bazel-gazelle/releases/download/0.18.1/bazel-gazelle-0.18.1.tar.gz",
        "https://github.com/bazelbuild/bazel-gazelle/releases/download/0.18.1/bazel-gazelle-0.18.1.tar.gz",
    ],
)

load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")

gazelle_dependencies()

# protobuf
http_archive(
    name = "com_google_protobuf",
    sha256 = "06aba70e1e918b70dbbae9ccd36ba94f5f20874fcf9d02c57e971036d387880b",
    strip_prefix = "protobuf-master",
    urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)

load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")

protobuf_deps()

# pull buildtools now that all dependencies are loaded/installed:
# - go
# - gazelle
# - protobuf
http_archive(
    name = "com_github_bazelbuild_buildtools",
    strip_prefix = "buildtools-master",
    url = "https://github.com/bazelbuild/buildtools/archive/master.zip",
)

# enables packaging of various formats (tar, etc)
http_archive(
    name = "rules_pkg",
    sha256 = "352c090cc3d3f9a6b4e676cf42a6047c16824959b438895a76c2989c6d7c246a",
    urls = [
        "https://github.com/bazelbuild/rules_pkg/releases/download/0.2.5/rules_pkg-0.2.5.tar.gz",
        "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.2.5/rules_pkg-0.2.5.tar.gz",
    ],
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

#########################################
# EXTERNAL DEPENDENCIES
#########################################

#########################################
# BASE IMAGES
#########################################

# for jjj
container_pull(
    name = "alpine_base",
    digest = "sha256:ddba4d27a7ffc3f86dd6c2f92041af252a1f23a8e742c90e6e1297bfa1bc0c45",
    registry = "index.docker.io",
    repository = "library/alpine",
)

container_pull(
    name = "nfs_server_arm_base",
    digest = "sha256:fa1f27cefb54f26df59f4fb8efce02ab0e45f1fe34cfb818dadbf8ddac6b2bc3",
    registry = "index.docker.io",
    repository = "itsthenetwork/nfs-server-alpine",
)

container_pull(
    name = "nfs_server_amd_base",
    digest = "sha256:7fa99ae65c23c5af87dd4300e543a86b119ed15ba61422444207efc7abd0ba20",
    registry = "index.docker.io",
    repository = "itsthenetwork/nfs-server-alpine",
)
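With those dependencies wired up, the previously empty BUILD.bazel in the workspace root can get a buildifier target; this is a minimal sketch following the buildtools README:

# BUILD.bazel (workspace root)
load("@com_github_bazelbuild_buildtools//buildifier:def.bzl", "buildifier")

buildifier(
    name = "buildifier",
)

Running bazel run //:buildifier then reformats the BUILD and WORKSPACE files in place.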

using bazel query to make sure you always push containers in CI

container_test and container_image are rules that produce test and build targets, so they will be executed when you run bazel test //... or bazel build //....

container_push is different. If you bazel build //containers:push_arm_container, it will produce the image digest:

cat bazel-bin/nfs-server-alpine/push_arm_container.digest
sha256:0fedfd2a4402ead0085bd5bf7c217e3fa027d98e1a10aec726febfe7e0a4a7e5

That is pretty neat, but it doesn’t actually push your container. You will need to bazel run the container_push targets in CI to publish the images. To avoid hardcoding a set of targets, you can use bazel query to list every target of kind container_push and bazel run each one:

#!/usr/bin/env bash
PUSH_TARGETS=$(bazel query 'kind(container_push, //...)')
for target in $PUSH_TARGETS; do bazel run "$target"; done