How to Configure a Debian 11 Bullseye Server with Ansible
Introduction
This guide will show you how to automate the initial Debian 11 Bullseye server configuration with Ansible. Ansible is a software tool that automates the configuration of one or more remote nodes from a local control node. The local node can be another Linux system, a Mac, or a Windows PC. If you are using a Windows PC, you can install Linux using the Windows Subsystem for Linux. This guide will focus on using a Mac as the control node to set up a fresh Vultr Debian 11 server.
The Debian 11 Ansible setup playbook is listed at the end of this guide. In addition, instructions are provided on how to install and use it.
It takes a little work to set up and start using Ansible, but once it is set up and you become familiar with it, using Ansible will save a lot of time and effort. For example, you may want to experiment with different applications. Using the Ansible setup playbook described in this guide, you can quickly reinstall your Debian 11 instance and then run the playbook to configure your base server. This playbook is a good base for installing web servers, database servers, or email servers.
We will use a single Ansible playbook that will do the following:
- Replace the UFW firewall with firewalld & nftables.
- Upgrade installed apt packages.
- Install a base set of useful software packages.
- Set a fully qualified domain name (FQDN).
- Set the timezone.
- Set the SSH port number.
- Set sudo password timeout.
- Create a regular user with sudo privileges.
- Install SSH Keys for the new regular user.
- Ensure authorized key for root user is installed.
- Update/Change the root user password.
- Disable password authentication for root.
- Disable tunneled clear-text passwords.
- Create a .vimrc resource file that disables vi visual mode for root and the user.
- Create a 2-line prompt and bash ls aliases for root and the user.
- Set up a local DNS resolver, using Unbound.
- Configure a firewall, using firewalld and nftables.
- Configure brute force mitigation using fail2ban.
- Optionally configure static IP networking.
- Reboot or restart services as needed.
Nftables is the default and recommended firewall framework since Debian Buster. Legacy iptables syntax is still supported via an iptables-nft compatibility layer, and under the hood, UFW still uses the iptables syntax. This Ansible playbook replaces UFW with an nftables ruleset for the server firewall. At the user level, the nftables ruleset is created and managed with firewalld.
A local DNS resolver is created by installing the unbound package using the default installed configuration. It provides a local DNS recursive resolver, and the results are cached. This is important if you want to run an email server with DNS blacklist (DNSBL) lookups. Some DNSBL services will not work with a public DNS resolver because they limit the number of queries from a server IP.
If you have configured additional IPs in the Vultr control panel, you can use this playbook to install an updated network interfaces file (/etc/network/interfaces). By default, the Configure static networking playbook task is disabled.
Prerequisites
- A Vultr server with a freshly installed Debian 11 instance.
- A local Mac, Windows with WSL, or Linux system. This guide focuses on Mac, but the procedures are similar for any Linux control node.
- If using a Mac, Homebrew should be installed.
- A previously generated SSH key for the Vultr host, with the SSH public key installed for the root user.
- Ansible 2.9.x, or later stable version. This guide is tested with Ansible version 2.9.26 on a Mac, installed via Homebrew.
1. Install Ansible on the Local System
For this guide, we are using the Ansible 2.9.x release.
Using a Mac with Homebrew installed:
$ brew install ansible@2.9
$ brew link --force --overwrite ansible@2.9
This will install Ansible along with all the required dependencies, including python version 3.9.x. You can quickly test your installation by doing:
$ ansible --version
ansible 2.9.26
config file = /Users/george/.ansible.cfg
configured module search path = ['/Users/george/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible@2.9/2.9.26/libexec/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 3 2021, 12:37:55) [Clang 12.0.5 (clang-1205.0.22.9)]
Create a Simple Ansible Configuration
Create the .ansible.cfg configuration file in your home directory. This tells Ansible how to locate the hosts inventory file.
Add the following content to ~/.ansible.cfg:
[defaults]
inventory = /Users/user/ansible/hosts.yml
interpreter_python = auto
Be sure to replace user with your actual user name.
Create the folder to store the hosts.yml hosts inventory file:
$ mkdir ~/ansible
$ cd ~/ansible
Of course, you can put it anywhere you want and give it any name. Just make sure that your .ansible.cfg file points to the correct location.
Add the following content to ~/ansible/hosts.yml:
all:
vars:
ansible_python_interpreter: /usr/bin/python3
ansible_become: yes
ansible_become_method: sudo
children:
vultr:
hosts:
host.example.com:
user: user
user_passwd: "{{ host_user_passwd }}"
root_passwd: "{{ host_root_passwd }}"
ssh_pub_key: "{{ lookup('file', '~/.ssh/host_ed25519.pub') }}"
ansible_become_pass: "{{ host_user_passwd }}"
cfg_static_network: false
vmware:
hosts:
debian1.local:
user: george
user_passwd: "{{ db1_user_passwd }}"
root_passwd: "{{ db1_root_passwd }}"
ssh_pub_key: "{{ lookup('file', '~/.ssh/db1_ed25519.pub') }}"
ansible_become_pass: "{{ db1_user_passwd }}"
cfg_static_network: true
The first block defines Ansible variables that are global to the hosts inventory file. Hosts are listed under children groups.
Replace host with your actual host name. The vmware group shows a working example for setting up a VMware host on my Mac.
The user is the regular user to be created. The host_user_passwd and host_root_passwd are the user and root passwords, stored in an Ansible vault described below. ssh_pub_key points to the SSH public key for the Vultr host. The ansible_become lines give the newly created user the ability to execute sudo commands in future Ansible playbooks.
cfg_static_network is a boolean variable; set it to true if you are configuring static networking in /etc/network/interfaces. Unless you have specifically created a static networking configuration, leave it set to false. Configuring a static network is beyond the scope of this guide; I will describe how to create one in a future guide.
Using the Ansible Vault
Create the directory for the Ansible password vault and setup playbook:
$ mkdir -p ~/ansible/debian
$ cd ~/ansible/debian
Create the Ansible password vault:
$ ansible-vault create passwd.yml
New Vault password:
Confirm New Vault password:
This will start up your default system editor. Add the following content:
host_user_passwd: ELqZ9L70SSOTjnE0Jq
host_root_passwd: tgM2Q5h8WCeibIdJtd
Replace host with your actual hostname and generate your own secure passwords, then save and exit your editor. This creates an encrypted file that only Ansible can read. You can add other host passwords to the same file.
pwgen is a very handy tool for generating secure passwords. Install it on a Mac via Homebrew: brew install pwgen. Use it as follows:
$ pwgen -s 18 2
ELqZ9L70SSOTjnE0Jq tgM2Q5h8WCeibIdJtd
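If you would rather not install an extra tool, openssl (already present on macOS and most Linux systems) can generate a comparable random password. A quick sketch:

```shell
# Generate one 18-character random password with openssl
# (base64 of 18 random bytes is 24 characters; trim to 18)
openssl rand -base64 18 | cut -c1-18
```

Run it once per password you need; the output may contain + and / characters, which are fine for passwords.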
You can view the contents of the ansible-vault file with:
$ ansible-vault view passwd.yml
Vault password:
You can edit the file with:
$ ansible-vault edit passwd.yml
Vault password:
2. Create an SSH Config File for the Vultr Host
Next, we need to define the Vultr hostname and SSH port number that Ansible will use to connect to the remote host.
The SSH configuration for the server host is stored in ~/.ssh/config. An example configuration on a Mac looks like:
Host *
AddKeysToAgent yes
UseKeychain yes
IdentitiesOnly yes
AddressFamily inet
Host host.example.com host
Hostname host.example.com
Port 22
User user
IdentityFile ~/.ssh/host_ed25519
The playbook always connects on SSH port 22 the first time it runs. If the playbook changes the SSH port, you need to update the Port value in the SSH config file after the playbook runs, or during the server reboot initiated by the playbook.
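As a sketch of that edit, sed can bump the Port line in the Host block. It is shown here on a sample copy so nothing is touched until you are ready, and 2222 is just an example port:

```shell
# Work on a sample copy of the Host block; apply the same sed to
# ~/.ssh/config once the playbook has actually changed the port.
cat > /tmp/ssh_config_sample <<'EOF'
Host host.example.com host
    Hostname host.example.com
    Port 22
    User user
    IdentityFile ~/.ssh/host_ed25519
EOF
# Rewrite "Port 22" to the new port number (2222 is a placeholder)
sed -i.bak 's/^[[:space:]]*Port 22$/    Port 2222/' /tmp/ssh_config_sample
grep 'Port' /tmp/ssh_config_sample
```

The -i.bak form works with both the macOS (BSD) and GNU versions of sed, leaving a .bak backup of the original file.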
With this SSH configuration file, you can use a shorthand host name to log into the server.
For the user login:
$ ssh host
For the root login:
$ ssh root@host
UseKeychain is specific to macOS; it stores the SSH key passphrase in the macOS keychain.
host.example.com is your Vultr server FQDN (Fully Qualified Domain Name), which needs to be defined in your DNS or in the /etc/hosts file on your local system. Port 22 is optional, but required if you define a non-standard SSH port.
Important: Install your SSH Key for the root user if you have not done so already:
$ ssh-copy-id -i ~/.ssh/host_ed25519 root@host
Verify that you can log in without using a password.
Note: If you reinstall your Vultr instance, be sure to delete your Vultr hostname from ~/.ssh/known_hosts on your local control node. Otherwise, you will see an SSH error when you try to log into your reinstalled host. The hostname is added to this file during the first login attempt:
$ ssh root@ap1
The authenticity of host 'ap1.example.com (192.0.2.22)' can't be established.
ECDSA key fingerprint is SHA256:oNczYD+xuXx0L6CM17Ciy+DWu3jOEbfVclIj9wUT7Y8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Answer yes to the question. If you don't delete the hostname from this file after reinstalling your instance, you will see an error like:
$ ssh root@ap1
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
o o o
If this happens, just delete the line entered for your hostname in the known_hosts file and run the ssh command again.
3. Test Your SSH/Ansible Configuration
Before running the setup Ansible playbook, we need to verify that Ansible is working correctly, that you can access your Ansible vault, and connect to your Vultr host. First, verify that Ansible is installed correctly on a Mac:
$ ansible --version
ansible 2.9.26
config file = /Users/user/.ansible.cfg
o o o
This is the latest version of Ansible on a Mac/Homebrew when this guide was written.
Run this command to test your Ansible and SSH configuration:
$ cd ~/ansible/debian
$ ansible -m ping --ask-vault-pass --extra-vars '@passwd.yml' vultr -u root
Vault password:
host.example.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
If you see the above output, then everything is working fine. If not, go back and double-check all your SSH and Ansible configuration settings. Start by verifying that you can execute:
$ ssh root@host
You should be able to log in without a password, because you have installed your SSH key for root.
4. Running the Ansible Debian Server Configuration Playbook
You are ready to run the playbook. When you execute it, you will be prompted for your vault password. The playbook will execute a number of tasks, with a PLAY RECAP at the end. You can rerun the playbook multiple times; for example, you may want to change something like the SSH port number. It only executes tasks when needed. Be sure to update the variables at the beginning of the playbook, such as your SSH port number and your local client IP address, before running the playbook. Setting your local client IP address prevents you from being accidentally locked out by fail2ban.
You can easily determine your client IP address by logging into your host and executing the who command:
root@host:~# who
root pts/1 2021-10-11 20:24 (192.0.2.22)
Your client IP address, 192.0.2.22, will be listed in the output.
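The same value can be pulled out programmatically. A small sketch, run here against a sample who line (on the server, feed it the real output of who instead of the echo):

```shell
# Extract the parenthesized client IP from a `who` output line:
# the IP is the last whitespace-separated field, wrapped in parentheses.
echo 'root     pts/1        2021-10-11 20:24 (192.0.2.22)' \
  | awk '{print $NF}' | tr -d '()'
# → 192.0.2.22
```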
We are finally ready to run the Ansible playbook, which is listed below. Be sure that you are in the ~/ansible/debian directory. This is the command to run:
$ ansible-playbook --ask-vault-pass --extra-vars '@passwd.yml' setup-pb.yml -l vultr -u root
Vault password:
Depending on the speed of your Mac, it may take a few seconds to start up. If it completes successfully, you will see a PLAY RECAP like:
PLAY RECAP *************************************************************************************************************************
ap1.example.com : ok=38 changed=27 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
The most important thing to note is that there should be no failed tasks.
Next, I will describe some basic tests that you can run to verify your server setup.
5. Debian 11 Bullseye Server Verification
After successfully executing the Ansible setup playbook, here are some basic tests that you can run to verify your server setup. I will show some real-life examples with the server host that I used to test the setup playbook (my local hostname is ap1 and my user name is george).
Verify your user login
Verify that you can log into your new user account using your host's public SSH key:
╭─george@imac1 ~/ansible/debian
╰─$ ssh ap1
Linux ap1 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
george@ap1:~
$
Note the two-line prompt. The first line shows user@host and the current directory.
Now, note how the l, la, and ll aliases work, and note the presence of the .vimrc file:
george@ap1:~
$ touch tmpfile
george@ap1:~
$ l
tmpfile
george@ap1:~
$ la
.bash_logout .bashrc .profile .ssh/ tmpfile .vimrc
george@ap1:~
$ ll
total 28
drwxr-xr-x 3 george george 4096 Oct 11 15:50 ./
drwxr-xr-x 3 root root 4096 Oct 11 15:34 ../
-rw-r--r-- 1 george george 220 Jun 21 21:26 .bash_logout
-rw-r--r-- 1 george george 3746 Oct 11 15:34 .bashrc
-rw-r--r-- 1 george george 807 Jun 21 21:26 .profile
drwx------ 2 george george 4096 Oct 11 15:34 .ssh/
-rw-r--r-- 1 george george 0 Oct 11 15:50 tmpfile
-rw-r--r-- 1 george george 13 Oct 11 15:34 .vimrc
george@ap1:~
$ cat .vimrc
set mouse-=a
The .vimrc option set mouse-=a disables vi visual mode, which makes it possible to use your mouse to select and copy a block of text in a vi window.
Verify your user password
Even though you use an SSH public key to log in to your user account, you still need your user password with the sudo command. For example, use sudo to change to the root account. Enter your user password when prompted:
george@ap1:~
$ sudo -i
We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for george:
root@ap1:~
#
root@ap1:~
# exit
logout
george@ap1:~
$
That sudo message appears the first time the sudo command is used.
Verify the root password
While in your user account, you can also use su - to change to the root account. One difference is that you will have to enter your root password:
george@ap1:~
$ su -
Password:
root@ap1:~
#
Verify your hostname
While we are in the root account, let's verify our hostname and some other features that the playbook set up for us:
root@ap1:~
# hostname
ap1
root@ap1:~
# hostname -f
ap1.example.com
root@ap1:~
# date
Mon 11 Oct 2021 04:44:09 PM CDT
Here we verified both the short and FQDN hostnames. With the date command, verify that the timezone is set correctly.
Verify the Unbound local DNS caching resolver
An in-depth discussion of Unbound is beyond the scope of this guide; however, here are a few quick tests to verify that the default Unbound local DNS caching resolver configuration is working. We will use the dig command.
To verify that the resolver is working, do, for example:
root@ap1:~
# dig +noall +answer +stats example.com
example.com. 3600 IN A 192.0.2.22
;; Query time: 40 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Oct 11 16:56:48 CDT 2021
;; MSG SIZE rcvd: 58
Note that the server address is 127.0.0.1. Also, note the TTL (Time To Live). For this example, the TTL is 3600 seconds. Also, note the Query time, 40 msec. Now execute the same command again:
root@ap1:~
# dig +noall +answer +stats example.com
example.com. 3426 IN A 192.0.2.22
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Oct 11 16:59:42 CDT 2021
;; MSG SIZE rcvd: 58
The Query time should be at or near 0 msec because the second query result came from our local cache. The cached result will remain active for the time-to-live interval, which, as you can see, is counting down.
Some email blacklist (DNSBL) servers rate-limit queries arriving from shared public DNS resolvers, which can cause lookups to fail. For example, when executing the following dig command with a local DNS resolver, you should see "permanent testpoint":
root@ap1:~
# dig test.uribl.com.multi.uribl.com txt +short
"permanent testpoint"
If you were using a public DNS resolver, for example on a freshly created Vultr instance where the setup playbook has not yet been run, you would see a failure like this:
root@ap1:~# dig test.uribl.com.multi.uribl.com txt +short
"127.0.0.1 -> Query Refused. See http://uribl.com/refused.shtml for more information [Your DNS IP: 192.0.2.22]"
You can visit that URL to read more about this topic.
Verify fail2ban and firewalld SSH port protection
This set of tests will verify that fail2ban and firewalld are integrated together to protect your SSH port. If you are using the default port 22, it will not take long for attackers to attempt to log in to your server. Their login attempt will fail, and fail2ban will take note of the failure. If there are multiple failed attempts in a short period of time as noted in your fail2ban configuration, fail2ban will ban the IP for the time that you configured in your fail2ban configuration. Fail2ban will notify firewalld to block the IP for the duration of the ban.
To see the current fail2ban status, you can execute fail2ban-client status sshd, or f2bst sshd to save some typing:
root@ap1:~
# f2bst sshd
Status for the jail: sshd
|- Filter
| |- Currently failed: 1
| |- Total failed: 4
| `- File list: /var/log/auth.log
`- Actions
|- Currently banned: 1
|- Total banned: 3
`- Banned IP list: 192.0.2.123
This output shows that there is currently one failed login attempt. There have been a total of four failures. Of those failed attempts, three IP addresses met the criteria to be banned. There is currently one IP that is being actively banned.
You can execute the following firewalld command to get a quick view of the firewalld status:
root@ap1:~
# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: enp1s0
sources:
services:
ports: 22/tcp
protocols:
forward: no
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="192.0.2.22" port port="22" protocol="tcp" reject type="icmp-port-unreachable"
A complete discussion of firewalld is beyond the scope of this guide. However, from the above output, it's important to note the "public zone" is active. Our SSH TCP port 22 is allowed and all other ports are blocked. Also, a "rich rule" has been created to block the IP that fail2ban marked to be banned. This rule will be deleted when fail2ban clears the IP ban.
To just list the rich rules, you can execute:
root@ap1:~
# firewall-cmd --list-rich-rules
rule family="ipv4" source address="192.0.2.22" port port="22" protocol="tcp" reject type="icmp-port-unreachable"
6. Debian 11 Bullseye Ansible Set-Up Playbook Listing
This is the setup-pb.yml playbook to create in the ~/ansible/debian directory:
# Initial server setup
#
---
- hosts: all
become: true
vars:
ssh_port: "22"
my_client_ip: 192.0.2.22
tmzone: America/Chicago
sudo_timeout: 20
vimrc: |
set mouse-=a
f2b_jail_local: |
[DEFAULT]
banaction = firewallcmd-rich-rules[actiontype=]
banaction_allports = firewallcmd-rich-rules[actiontype=]
ignoreip = 127.0.0.1/8 ::1 {{ my_client_ip }}
findtime = 15m
bantime = 2h
maxretry = 5
[sshd]
enabled = true
maxretry = 3
port = {{ ssh_port }}
tasks:
# Stop and disable ufw before installing firewalld ...
- name: Check if ufw is installed.
stat: path="/usr/sbin/ufw"
register: ufw_installed
- name: Check if ufw status is active.
command: ufw status
changed_when: False
register: ufw_status
when: ufw_installed.stat.exists
- name: Disable ufw ruleset if ufw is installed and active.
ufw:
state: reset
when: ufw_installed.stat.exists and 'inactive' not in ufw_status.stdout
- name: Flush any existing (ufw) nftables ruleset.
command: nft flush ruleset
when: ufw_installed.stat.exists and 'inactive' not in ufw_status.stdout
- name: Stop and disable the ufw service.
service:
name: ufw
state: stopped
enabled: no
when: ufw_installed.stat.exists
# Update and install the base software
- name: Update apt package cache.
apt:
update_cache: yes
cache_valid_time: 600
- name: Upgrade installed apt packages.
apt:
upgrade: dist
register: upgrade
- name: Ensure that a base set of software packages are installed.
apt:
pkg:
- build-essential
- curl
- fail2ban
- firewalld
- git
- htop
- needrestart
- net-tools
- pwgen
- resolvconf
- rsync
- sudo
- unbound
- unzip
- vim-nox
state: latest
- name: Check if a reboot is needed for Debian-based systems
stat:
path: /var/run/reboot-required
register: reboot_required
# Host Setup
- name: Set static hostname
hostname:
name: "{{ inventory_hostname_short }}"
- name: Add FQDN to /etc/hosts
lineinfile:
dest: /etc/hosts
regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ inventory_hostname }} {{ inventory_hostname_short }}"
state: present
- name: Check if cloud init is installed.
stat: path="/etc/cloud/templates/hosts.debian.tmpl"
register: cloud_installed
- name: Add FQDN to /etc/cloud/templates/hosts.debian.tmpl
lineinfile:
dest: /etc/cloud/templates/hosts.debian.tmpl
regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ inventory_hostname }} {{ inventory_hostname_short }}"
state: present
when: cloud_installed.stat.exists
- name: Set timezone.
timezone:
name: "{{ tmzone }}"
notify:
- restart cron
- name: Set ssh port port number
lineinfile:
dest: /etc/ssh/sshd_config
regexp: 'Port '
line: 'Port {{ ssh_port }}'
state: present
notify:
- restart sshd
# Set sudo password timeout (default is 15 minutes)
- name: Set sudo password timeout.
lineinfile:
path: /etc/sudoers
regexp: '^Defaults\tenv_reset'
line: 'Defaults env_reset, timestamp_timeout={{ sudo_timeout }}'
validate: '/usr/sbin/visudo -cf %s'
- name: Create/update regular user with sudo privileges.
user:
name: "{{ user }}"
password: "{{ user_passwd | password_hash('sha512') }}"
groups: sudo
append: true
shell: /bin/bash
- name: Ensure authorized keys for remote user is installed.
authorized_key:
user: "{{ user }}"
key: "{{ ssh_pub_key }}"
- name: Ensure authorized key for root user is installed.
authorized_key:
user: root
key: "{{ ssh_pub_key }}"
- name: Update root user password.
user:
name: root
password: "{{ root_passwd | password_hash('sha512') }}"
- name: Disable root password login via SSH.
lineinfile:
path: /etc/ssh/sshd_config
regexp: '^#?PermitRootLogin'
line: 'PermitRootLogin prohibit-password'
notify:
- restart sshd
- name: Disable tunneled clear-text passwords.
lineinfile:
path: /etc/ssh/sshd_config
regexp: '^#?PasswordAuthentication'
line: 'PasswordAuthentication no'
notify:
- restart sshd
- name: Configure user .vimrc.
copy:
dest: /home/{{ user }}/.vimrc
content: "{{ vimrc }}"
owner: "{{ user }}"
group: "{{ user }}"
mode: 0644
- name: Configure root .vimrc.
copy:
dest: /root/.vimrc
content: "{{ vimrc }}"
owner: root
group: root
mode: 0644
- name: Configure user 2-line prompt and .bashrc aliases.
blockinfile:
path: /home/{{ user }}/.bashrc
block: |
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\n\$ '
alias l='ls -CF'
alias la='ls -AF'
alias ll='ls -alF'
- name: Configure root 2-line prompt and .bashrc aliases.
blockinfile:
path: /root/.bashrc
block: |
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\n\$ '
export LS_OPTIONS='--color=auto'
eval "`dircolors`"
alias ls='ls $LS_OPTIONS'
alias l='ls -CF'
alias la='ls -AF'
alias ll='ls -alF'
# Configure a firewall, using firewalld
- name: "Check if the firewalld public zone is active for interface: {{ ansible_default_ipv4.interface }}."
command: firewall-cmd --get-zone-of-interface={{ ansible_default_ipv4.interface }}
register: zone_status
failed_when: zone_status.rc != 0 and zone_status.rc != 2
changed_when: zone_status.rc == 2
- name: Set the default firewalld public zone to active if not already active.
command: firewall-cmd --permanent --zone=public --change-interface={{ ansible_default_ipv4.interface }}
when: '"public" not in zone_status.stdout'
notify:
- restart firewalld
- name: Enable the firewalld ssh port (which may be different than port 22).
firewalld:
zone: public
port: "{{ ssh_port }}/tcp"
state: enabled
permanent: yes
notify:
- restart firewalld
- name: Disable firewalld dhcpv6-client and ssh service.
firewalld:
zone: public
service: "{{ item }}"
state: disabled
permanent: yes
with_items:
- dhcpv6-client
- ssh
notify:
- restart firewalld
- name: Configure fail2ban local jail.
copy:
dest: /etc/fail2ban/jail.local
content: "{{ f2b_jail_local }}"
owner: root
group: root
mode: 0644
notify:
- restart fail2ban
# simple shell script to display fail2ban-client status info;
# example usage:
# f2bst
# f2bst sshd
- name: Create f2bst shell script.
copy:
dest: /usr/local/bin/f2bst
content: |
#!/usr/bin/sh
fail2ban-client status $*
owner: root
group: root
mode: 0755
- name: Check if any services needs to be restarted.
command: needrestart -r a
when: upgrade.changed and reboot_required.stat.exists == false
- name: Configure static networking
copy:
src: etc/network/interfaces
dest: /etc/network/interfaces
owner: root
group: root
mode: 0644
when: cfg_static_network == true
notify:
- restart networking
- name: Reboot the server if needed.
reboot:
msg: "Reboot initiated by Ansible because of reboot required file."
connect_timeout: 5
reboot_timeout: 600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
when: reboot_required.stat.exists
- name: Remove old packages from the cache.
apt:
autoclean: yes
- name: Remove dependencies that are no longer needed.
apt:
autoremove: yes
purge: yes
handlers:
- name: restart cron
service:
name: cron
state: restarted
when: reboot_required.stat.exists == false
- name: restart fail2ban
service:
name: fail2ban
state: restarted
when: reboot_required.stat.exists == false
- name: restart sshd
service:
name: sshd
state: restarted
when: reboot_required.stat.exists == false
- name: restart firewalld
service:
name: firewalld
state: restarted
- name: restart networking
service:
name: networking
state: restarted
when: reboot_required.stat.exists == false
You can read the Ansible Documentation to learn more about Ansible.
You should only have to update the vars: section to change the settings for your specific situation. Most likely, you will want to set the client IP and timezone.
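If you script your deployments, those vars can also be edited non-interactively. A sketch using sed, demonstrated on a sample vars block (the 2222 port and Europe/Berlin timezone are placeholder values; apply the same sed to setup-pb.yml for the real playbook):

```shell
# Demonstrate the edit on a sample vars block first.
cat > /tmp/vars_sample.yml <<'EOF'
  vars:
    ssh_port: "22"
    my_client_ip: 192.0.2.22
    tmzone: America/Chicago
EOF
# Change the SSH port and timezone in place (leaves a .bak backup).
sed -i.bak \
  -e 's/ssh_port: "22"/ssh_port: "2222"/' \
  -e 's#tmzone: America/Chicago#tmzone: Europe/Berlin#' \
  /tmp/vars_sample.yml
cat /tmp/vars_sample.yml
```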
Conclusion
In this guide, we have introduced Ansible for automating the initial Debian server setup. This is very useful for deploying or redeploying a server after testing an application. It also creates a solid foundation for creating a web, database, or email server.