Building an Ubuntu Repository Mirror with Docker and Docker Compose

September 8, 2019
docker linux ubuntu


Have you ever noticed that some days your Ubuntu server updates take a really long time to download? Or maybe you run a small datacenter’s worth of Ubuntu servers and want to update them regularly but not use 30x more of your bandwidth to download packages. Either way, a local Ubuntu repository is a good solution to both of these problems.

We’ll set up our mirror to run in a Docker stack using Docker Compose. This is a nice way of encapsulating all of the configuration that goes into a setup like this in an easily committable package.

Building an apt-mirror Docker image

Start by setting up a directory for the project:

$ mkdir aptmirror-docker
$ cd aptmirror-docker

We’re going to use a tool called apt-mirror to do the heavy lifting of cloning the official repos for us, so we need to make a container for it to run in. We’ll use Alpine Linux as a base image to reduce image size. This means we won’t be able to use apt-get to install apt-mirror, but we can just clone the GitHub repo instead.

Here’s what my Dockerfile looks like:

FROM alpine:3.10
RUN apk --no-cache add perl wget git gzip && \
    git clone https://github.com/apt-mirror/apt-mirror /tmp && \
    cp /tmp/apt-mirror /usr/bin
WORKDIR /aptmirror
COPY . .
VOLUME ["/aptmirror"]
CMD ["/bin/sh", "start.sh"]

We’re using a common technique to reduce image size here by consolidating everything into one RUN command. This creates fewer layers in the final image, but comes at the cost of reduced readability.
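For contrast, a hypothetical unconsolidated version of the same steps would look like this. Each RUN line becomes its own layer, so even files deleted in a later step still occupy space in the earlier layers:

```dockerfile
FROM alpine:3.10
# Each RUN below produces a separate layer in the final image.
RUN apk --no-cache add perl wget git gzip
RUN git clone https://github.com/apt-mirror/apt-mirror /tmp
RUN cp /tmp/apt-mirror /usr/bin
```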

You’ll also need a script to start apt-mirror and run it again every 6 hours. Here it’s saved as start.sh in the project directory, which is the script the Dockerfile’s CMD refers to:


#!/bin/sh
while true; do
  apt-mirror
  echo "Start sleep 6h..."
  sleep 6h
done

Now let’s build the image and see what happens.

$ docker build -t aptmirror .
$ docker run --rm -it aptmirror

apt-mirror: can't open config file (/etc/apt/mirror.list) at /usr/bin/apt-mirror line 318.
Start sleep 6h...

Well, there is no /etc/apt directory and there certainly isn’t a mirror.list. The repo contains a sample mirror.list so we can start from there.

Make your own copy of mirror.list in the project directory and edit it to suit your preferences. The syntax for specifying repos is the same as what you’d find in sources.list on an Ubuntu system, which is nice.

Here’s what mine looks like:

set base_path    /aptmirror
set cleanscript  $var_path/clean.sh
set defaultarch  amd64
set nthreads     20
set _tilde 0

deb-i386 http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb-i386 http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse


If you support more than one release of Ubuntu, feel free to add more deb lines, but be aware that the more repositories you add to your mirror, the more disk space it’s going to take up. At the time of this writing, the mirror that I set up using this procedure takes up about 280GB.

You can point these deb lines at an Ubuntu mirror of your choosing should you prefer to use a mirror closer to you for speed. Just know that some mirrors are more up to date than others. You can find a list of official Ubuntu mirrors here along with information about how up to date they are.

Even though my system is amd64 I found I had to pull the i386 arch repos using deb-i386. YMMV.

Now that we have our config file the way we want it, we can revise our run command to look like this:

$ docker run --rm -v aptmirror:/aptmirror -v $PWD/mirror.list:/etc/apt/mirror.list aptmirror

You should now see apt-mirror happily begin constructing your mirror for you. Let this command run overnight (add -d to the command above to run it in the background) and when you get up you should see a brand new mirror in the volume or directory.

Serving the mirror

While you can serve Ubuntu repositories over FTP, I’ve always found HTTP-based mirrors to be faster and the configuration is easier too. We’ll use nginx for this but you can just as easily use Apache2 if it suits your fancy.

Here’s my nginx.conf file:

server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html/mirror/archive.ubuntu.com/;
        autoindex on;
    }
}

Note that the root points inside the mirror tree at archive.ubuntu.com, so the packages end up being served from /ubuntu. This matches the client configuration we’ll set up later.

And finally the docker-compose.yml file to tie everything together:

version: '3'

services:
  aptmirror:
    build: .
    image: aptmirror
    restart: always
    volumes:
      - aptmirror:/aptmirror
      - ./mirror.list:/etc/apt/mirror.list
  web:
    image: nginx:1.17-alpine
    restart: always
    volumes:
      - aptmirror:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"

volumes:
  aptmirror:


We can see our new aptmirror service, along with a build: directive so Compose knows to build this image instead of fetching it. There’s a shared volume so nginx can see our mirror files to serve them, and a volume to get mirror.list where we need it to be.

For the web service, I’ve added the same volume as above but made it read-only. Also note that you need to specify shared volumes at the bottom of the file so Compose knows about them.

Let’s fire this up!

$ docker-compose build
$ docker-compose up

If all goes well, you should see a directory index with a single folder in it when you browse to your Docker host.


Congratulations! You’re now the proud owner of an Ubuntu repository mirror.

Configuring clients

Our mirror isn’t much use to us unless we configure apt on our Ubuntu machines to point to it. To do this, we need to edit /etc/apt/sources.list to include our mirror and comment out the official lines. You’ll probably need to redo this if you ever do a release upgrade, since sources.list will likely be overwritten during that process.

I added the following two lines to my sources.list:

deb http://<your_ip_here>/ubuntu bionic main restricted universe multiverse
deb http://<your_ip_here>/ubuntu bionic-updates main restricted universe multiverse

I also went through and commented out all of the official deb lines except the security ones. Make sure the release (bionic in this case) matches what you see in your sources.list!
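The comment-out step can be scripted with sed. Here’s a sketch against a scratch copy of sources.list; on a real client you’d run the sed line (with sudo) against /etc/apt/sources.list, and the hostname to match is whatever your old deb lines pointed at:

```shell
# Build a scratch sources.list so the edit is easy to see.
cat > /tmp/sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
EOF

# Comment out the official archive lines, leaving the security lines active.
sed -i '/archive\.ubuntu\.com/ s/^deb /# deb /' /tmp/sources.list
cat /tmp/sources.list
```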

Now we can run an index update and see if it worked:

$ apt-get update

You should see successful updates from your new mirror. If you don’t, make sure you’ve got the correct IP address. Also make sure there isn’t a firewall running on your Docker host.

I would definitely test this out with a VM before you mess with any production servers, but this is working pretty okay for me and I enjoyed building it.

You can find the code and configuration for this setup here.