Docker Private Registry 102: Use Amazon S3 for data storage

In a real-world setup, ephemeral or unreliable data storage is most often a no-go. Fortunately, the registry container provides a nice and easy way to use a "cloud" storage backend.

A few different options are supported out of the box by the registry container image, such as Amazon S3, Google Cloud Storage, and OpenStack Swift.

There are a few differences between the storage engines, but mostly it's a matter of changing the SETTINGS_FLAVOR environment variable and setting the few required environment variables that depend on your choice. We'll see how to use Amazon S3 as our storage backend.

Create an S3 bucket and IAM permissions

Create an S3 bucket, and write down its name and the AWS region it lives in.

Now, you need write permissions for your container. I advise creating a new IAM (Amazon's identity and access management service) user dedicated to storing and retrieving docker images, so you don't mix it up with any other users / credentials you may have on Amazon Web Services.
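As an illustration, a minimal policy for that dedicated IAM user could look like the following sketch. The bucket name mybucket is a placeholder (the same one used in the run command below); adjust it to yours:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
```

The first statement lets the registry list the bucket; the second lets it read, write, and delete the image layers stored under it.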

Run a container backed by S3

docker run \
         -e SETTINGS_FLAVOR=s3 \
         -e AWS_BUCKET=mybucket \
         -e STORAGE_PATH=/registry \
         -e AWS_KEY=myawskey \
         -e AWS_SECRET=myawssecret \
         -e SEARCH_BACKEND=sqlalchemy \
         -p 5000:5000 \
         registry

According to the README, that does the job. There are a few things you need to know, though...

The S3 flavor comes with DEBUG=false by default

Unlike the first container we ran with local storage, the default behavior is not to output anything; if you need debug output, you'll have to add

-e DEBUG=True

If you need to tune the logging verbosity, you can also use

-e LOGLEVEL=debug

Specify AWS region

You should provide the Amazon Web Services region in the environment (this depends on your S3 bucket's location).

-e AWS_REGION="eu-west-1"

Make sure that your host time is correct

One of the hardest problems to understand when I first set this up was that the boot2docker virtual machine's clock is set from the host time at boot, then never synced again. Every time my laptop was suspended, the clock kept running on the main computer but not on the docker host virtual machine, which made the two drift apart. Not a big deal, unless you try to authenticate with Amazon Web Services...

If you use boot2docker (and run into AWS authentication problems, freeze on container start, boto connection problems, timeouts ...), then make sure that you sync time before you try to run the registry:

boot2docker ssh -- sudo ntpclient -s -h pool.ntp.org


boto is a Python library that handles connections and exchanges with Amazon Web Services; it is the library the registry container uses to connect to, read from, and write to S3.
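To make the sync step harder to forget, you can compare the two clocks before starting the registry. This is only a sketch: the 30-second tolerance is an arbitrary choice of mine, and pool.ntp.org is just one public NTP server.

```shell
# Return 0 (success) when two Unix timestamps are within $3 seconds of each other.
clock_in_sync() {
  a=$1; b=$2; max=$3
  d=$(( a - b ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  [ "$d" -le "$max" ]
}

# Usage sketch (assumes boot2docker is on the PATH):
# host=$(date -u +%s)
# vm=$(boot2docker ssh date -u +%s)
# clock_in_sync "$host" "$vm" 30 || boot2docker ssh -- sudo ntpclient -s -h pool.ntp.org
```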

Running container on Digital Ocean with IPv6 enabled

Apparently, docker does not behave well with the DigitalOcean IPv6 setup. You have to use IPv4 name servers, either by setting them globally in /etc/default/docker, or on the docker run command line, for example with Google's public resolvers:

--dns 8.8.8.8 --dns 8.8.4.4
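For the global variant, the same flags go into /etc/default/docker (read by Debian/Ubuntu-style init scripts; the resolver addresses below are just an example, use whichever IPv4 name servers you trust):

```shell
# /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
```

Restart the docker daemon after editing the file so the option is picked up.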

Wrapping it up

Here is my complete script that runs the registry container on S3:

# Placeholder values — replace with your own.
NAME=registry
IMAGE=registry
BUCKET=mybucket
AWS_KEY=myawskey
AWS_SECRET=myawssecret
AWS_REGION=eu-west-1
docker stop $NAME
docker kill $NAME
docker rm $NAME

docker run \
         -e SETTINGS_FLAVOR=s3 \
         -e AWS_BUCKET=$BUCKET \
         -e AWS_KEY=$AWS_KEY \
         -e AWS_SECRET=$AWS_SECRET \
         -e AWS_REGION=$AWS_REGION \
         -e SEARCH_BACKEND=sqlalchemy \
         -e GUNICORN_OPTS=[--preload] \
         --dns 8.8.8.8 --dns 8.8.4.4 \
         -p 5000:5000 \
         --name $NAME \
         -d $IMAGE
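Once the script has run, it's worth checking that the registry actually answers before pushing anything to it. The v1 registry exposes a ping endpoint at /v1/_ping; this helper is a sketch around it, assuming curl is available:

```shell
# Poll a URL until it answers, up to $2 attempts one second apart.
wait_for_registry() {
  url=$1
  tries=${2:-10}
  i=1
  while [ "$i" -le "$tries" ]; do
    curl -fsS "$url" >/dev/null 2>&1 && return 0
    i=$(( i + 1 ))
    sleep 1
  done
  return 1
}

# Usage sketch:
# wait_for_registry "http://localhost:5000/v1/_ping" 30 && echo "registry is up"
```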

Docker Private Registry 103 — Nginx front container →
