Create a Docker Swarm on Alpine Linux 3.9.0
Introduction
This guide will show you how to create and configure a Docker swarm using multiple Alpine Linux 3.9.0 servers and Portainer. Please be aware that Vultr offers a One-Click Docker App that currently supports both CentOS 7 x64 and Ubuntu 16.04 x64.
Prerequisites
To begin, you will need at least two VC2 servers running Alpine Linux 3.9.0. Within your Docker swarm, one of these servers will act as a manager node, interfacing with external networks and delegating jobs to worker nodes. The other server will act as a worker node, executing jobs delegated to it by the manager node.
Note that you can launch more than two servers if your application requires redundancy and/or more computing power, and the steps provided in this guide will still apply.
Deployment
Visit the Vultr server deployment interface.
Ensure that the Vultr Cloud (VC2) tab is selected at the top of the page.
You can select any location from the Server Location section; however, all servers must be in the same location, otherwise it will not be possible to deploy a Docker swarm to them.
Select the ISO Library tab of the Server Type section and choose the Alpine Linux 3.9.0 x86_64 image.
Select an appropriate option from the Server Size section. This guide will use the 25 GB SSD server size, but this may be insufficient to meet your application's resource requirements. While Vultr makes it easy to upgrade a server's size after it has already been launched, you should still carefully consider which server size your application needs to perform optimally.
In the Additional Features section, you must select the Enable Private Networking option. While the other options are not required to follow this guide, you should consider whether or not each one makes sense in the context of your application.
If you've previously enabled the Multiple Private Networks option on your account, you will then need to either select an existing private network or create a new one for your servers. If you have not enabled it, then you can ignore this section. For information on manually configuring private networks, see this guide.
Skip the Firewall Group section for now. Only the server acting as a manager node in the Docker swarm will need exposed ports, and this should be configured after server deployment.
At the very bottom of the page, you must enter a Server Qty of at least two. As mentioned previously, you may need more than two servers, but two is sufficient to follow this guide.
Finally, in the Server Hostname & Label section, enter meaningful and memorable hostnames and labels for each server. For the purposes of this guide, the hostname and label of the first server will be docker-manager and Docker Manager, respectively, and docker-worker and Docker Worker for the second.
After double-checking all your configurations, click the Deploy Now button at the bottom of the page to launch your servers.
Install Alpine Linux 3.9.0 on the servers
Because you chose an OS from Vultr's ISO library, you'll need to manually install and configure Alpine Linux 3.9.0 on each server.
After giving Vultr a minute or two to allocate your servers, click the triple-dot more options icon for the Docker Manager server on the server management interface, and then choose the View Console option.
You should be redirected to a console with a login prompt. If not, please wait another minute for Vultr to finish deploying your servers.
At that login prompt, enter root as the username. The live version of Alpine Linux 3.9.0 (which is what your servers are currently running) does not require the superuser to enter a password when logging in.
Once you have successfully logged into the root account, you will see a welcome message followed by a shell prompt that looks like the following:
localhost:~#
To start the Alpine Linux installer, enter the following command:
# setup-alpine
First, choose an appropriate keyboard layout. This guide will use the us layout and variant.
When setting the hostname, choose the same hostname that you set for this server during deployment. If you've been following this guide exactly, the hostname should be docker-manager.
Two network interfaces should be available: eth0 and eth1. If you only see eth0, that means you did not configure your servers' private network correctly. Initialize eth0 using dhcp, and initialize eth1 using the private IP address, netmask, and gateway this server was assigned during deployment. You can access these details from the settings interface of your server. When prompted, do not perform any manual network configuration.
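For reference, the configuration the installer writes to /etc/network/interfaces will look roughly like the following. The address and netmask shown here are placeholder values; use the private network details assigned to your server, along with a gateway line if one was assigned:

```
auto lo
iface lo inet loopback

# Public interface, configured via DHCP
auto eth0
iface eth0 inet dhcp

# Private network interface, configured with the static details
# assigned during deployment (placeholder values shown)
auto eth1
iface eth1 inet static
    address 10.1.96.5
    netmask 255.255.240.0
```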
Enter a new password for the root account, and then select a timezone appropriate for the location you chose to deploy these servers to.
If you intend to use an HTTP/FTP proxy, enter its URL; otherwise, do not set a proxy URL.
Choose an NTP client to manage system clock synchronization. This guide will use busybox.
When asked which package repository mirror to use, either pick one explicitly by entering its number; automatically detect and select the fastest one by entering f; or manually edit the repository configuration file by entering e, which is not recommended unless you're familiar with Alpine Linux. This guide will use the first mirror.
If you plan to use SSH to access your servers or to host an SSH-based file system, select an SSH server to use. This guide will use openssh.
When prompted for a disk to use, choose the vda disk and use it as the sys disk type.
Alpine Linux 3.9.0 should now be installed on your server. Repeat this process for all other servers you deployed earlier, ensuring you substitute the correct values for the hostname and the eth1 network interface.
Post-installation server configuration
At this point, your servers are still running the live ISO version of Alpine Linux 3.9.0. To boot from the SSD installation, visit the settings interface of your server, navigate to the Custom ISO side menu entry, and click the Remove ISO button. This should reboot the server. If it does not, reboot manually.
Once the server has finished rebooting, navigate back to the web console of the Docker Manager server.
Log into the root account using the password you set earlier during the installation process.
Enable the community package repository by uncommenting the third line of /etc/apk/repositories using vi. You can enable the edge and testing repositories in a similar manner, but they are not required to follow this guide.
Synchronize the server's local package index with the remote repository you selected earlier by entering the following shell command:
# apk update
Then upgrade outdated packages:
# apk upgrade
As before, repeat this configuration process for each server you deployed earlier.
Install Docker on your servers
Before installing the Docker package itself, you may want to create a separate docker user. You can do this using the following command:
# adduser docker
Note: This new user and any users added to the new docker group will have root privileges once the Docker package has been installed. See the following issue from the Moby GitHub repository:
Due to the --privileged in docker, anyone added to the 'docker' group is root equivalent. Anyone in the docker group has a back door around all privilege escalation policy and auditing on the system. This is different from someone being able to run sudo to root, where they have policy and audit applied to them.
If you'd like to give sudo permission to the docker user, first install the sudo package:
# apk add sudo
Then create a sudo group:
# addgroup sudo
Finally, add the docker user to the sudo group:
# adduser docker sudo
Now you can follow step 4 of this guide to finish configuring sudo.
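In short, finishing the sudo configuration amounts to granting the sudo group permission in /etc/sudoers. A minimal sketch (always edit this file with visudo as root rather than directly):

```
# /etc/sudoers (edit via `visudo`): allow members of the sudo group
# to run any command as any user
%sudo ALL=(ALL) ALL
```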
At this point, you're ready to install the Docker package. Note that it is not strictly necessary to have a separate, sudo-capable docker user to install and configure Docker, but this guide follows that convention.
Install the Docker package with the following command:
# apk add docker
Then enable the Docker init script:
# rc-update add docker
Finally, start the Docker daemon:
# rc-service docker start
You can verify that Docker is running with this command:
# docker info
As with last time, repeat this Docker installation process for each server you deployed at the start.
Initialize a Docker swarm with one manager node and one worker node
With all of that setup dealt with, you're finally ready to create the Docker swarm.
Create a swarm and add a manager node
Navigate back to the web console of your Docker Manager server. You will configure this server as a manager node in your swarm. If you chose to create the docker user earlier, log in using that account rather than the superuser.
Enter the following command, replacing 192.0.2.1 with the private (not public) IP address your Docker Manager server was assigned:
$ docker swarm init --advertise-addr 192.0.2.1
Docker will display a command you can execute on other servers in the private network to add them as worker nodes to this new swarm. Save this command.
Add a worker node
Now navigate to the web console of your Docker Worker server, signing in with the docker user if you created it.
To add this server as a worker node to the swarm you just created, execute the command you saved from the output of the swarm creation command. It will look similar to the following:
$ docker swarm join --token SWMTKN-1-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXXXXXXXXXXX 192.0.2.1:2377
Docker will output whether the node was able to join the swarm. If you encounter issues adding worker nodes to the swarm, double check your private network configuration and refer to this guide for troubleshooting.
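Once a worker has joined, you can confirm the swarm's membership by running docker node ls on the Docker Manager server. Its output will resemble the following, with the node IDs redacted here; the exact columns vary by Docker version:

```
$ docker node ls
ID                           HOSTNAME         STATUS   AVAILABILITY   MANAGER STATUS
XXXXXXXXXXXXXXXXXXXXXXXXX *  docker-manager   Ready    Active         Leader
XXXXXXXXXXXXXXXXXXXXXXXXX    docker-worker    Ready    Active
```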
If you deployed more than two servers at the beginning, you can add the rest as worker nodes to your swarm using the command above, increasing the amount of resources available to your application. Alternatively, you can add additional manager nodes, but that's beyond the scope of this guide.
Deploy Portainer with SSL to manage your Docker swarm
At this point your Docker swarm is ready for use. You may, however, optionally launch a Portainer stack on the manager node in your swarm. Portainer offers a convenient web interface for managing your swarm and the nodes therein.
It's now time to create a firewall group for your swarm. Unless your application specifically requires it, only expose ports on your manager nodes. Exposing ports on your worker nodes without careful consideration can introduce vulnerabilities.
Navigate to the firewall management interface and create a new firewall group. Your application should dictate which ports to expose, but you must, at the very least, expose port 9000 for Portainer. Apply this firewall group to the Docker Manager server.
While it isn't required, securing Portainer with SSL is strongly recommended. For the sake of this guide, you'll only be using a self-signed OpenSSL certificate, but you should consider using Let's Encrypt in production.
Navigate to the web console of the Docker Manager server, log in using the docker user, and use the following commands to generate a self-signed OpenSSL certificate:
$ mkdir ~/certs
$ openssl genrsa -out ~/certs/portainer.key 2048
$ openssl req -new -x509 -sha256 -key ~/certs/portainer.key -out ~/certs/portainer.pem -days 3650
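You can optionally inspect the certificate you just generated to confirm it was created correctly. The path below matches the one used above:

```shell
# Print the subject and expiration date of the self-signed certificate
# generated above; the expiry should be roughly ten years out (-days 3650).
openssl x509 -in ~/certs/portainer.pem -noout -subject -enddate
```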
Create a new file, ~/portainer-agent-stack.yml, with the following contents:
version: '3.2'

services:
  agent:
    image: portainer/agent
    environment:
      AGENT_CLUSTER_ADDR: tasks.agent
      CAP_HOST_MANAGEMENT: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
      - /:/host
    networks:
      - agent_network
    deploy:
      mode: global

  portainer:
    image: portainer/portainer
    command: -H tcp://tasks.agent:9001 --tlsskipverify --ssl --sslcert /certs/portainer.pem --sslkey /certs/portainer.key
    ports:
      - target: 9000
        published: 9000
        protocol: tcp
        mode: host
    volumes:
      - portainer_data:/data
      - /home/docker/certs:/certs
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true

volumes:
  portainer_data:
After modifying this Docker stack configuration file to conform to your requirements, you can deploy it:
$ docker stack deploy --compose-file ~/portainer-agent-stack.yml portainer
To verify that Portainer is working, give Docker a minute or two to deploy the stack, then execute the following command:
$ docker ps
You should see two containers using the portainer/portainer:latest and portainer/agent:latest images, confirming that Portainer started correctly.
You can now configure and manage your Docker swarm by visiting the public IP address of your Docker Manager server on port 9000 using HTTPS.