Getting started with Ansible

In this post we will talk about Ansible, an automation tool for provisioning IT infrastructure: configuring Linux VMs, Kubernetes clusters, and more. We will focus on Linux configuration and provisioning in this post.

We will not cover the Ansible Automation Platform; we will talk about Ansible Core and its modules. The Automation Platform is an enterprise tool that offers many features for controlling and executing Ansible scripts in any environment you have. It's good for big teams that need more control over, and contribution to, the playbooks they run. It is also a far more complex piece of software.

The official Ansible site is a little confusing for beginners, because its documentation focuses primarily on the Red Hat Ansible Automation Platform, and as I mentioned, that software is aimed at enterprise customers. Ansible Core is open source, free to use, and much simpler to start with.

Core Concepts

How does Ansible automate the tasks necessary for Linux provisioning? To answer that question, let's talk about Ansible Core.

Ansible Core is a collection of CLI tools, the Ansible language, and an architectural framework that allows extensions through Ansible collections.

The Ansible language is based on YAML and expresses the desired state of the machines you are controlling. This YAML is interpreted under the covers by Python, which implements the functionality with built-in modules or extra modules installed with ansible-galaxy.

Ansible connects to targets through SSH and works agentless. This has advantages over agent-based systems because you don't have to deal with daemons installed on the machines, which leads to a more straightforward approach: Ansible only requires SSH and Python on the target machine, which most Linux distros ship by default. The disadvantage I can bring to the table is a less performant operation, because Ansible has to connect to each host; parallelization is somewhat limited but possible (check the serial playbook keyword for running plays in batches).
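
As a preview, here is a minimal sketch of a play that uses the serial keyword to process hosts in small batches (the webservers group and the task are illustrative):

- name: Rolling update in batches
  hosts: webservers
  serial: 2                     # run the whole play on at most 2 hosts at a time
  tasks:
    - name: Check connectivity in each batch
      ansible.builtin.ping: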

To understand Ansible, at a minimum we need to cover these topics:

  • The ansible inventory
  • Ad Hoc Commands
  • Ansible Playbooks and Tasks
  • Modules and Collections
  • Handlers
  • Roles and the recommended layout for playbooks

We will talk about them in the sections below.

Installing Ansible

Ansible works best on a Linux host; Windows is not supported as a control node. If you are on Windows, I recommend installing it in your WSL2 distro.

You need at least Python 3.8 installed. On Ubuntu 20.04.4 it is already installed.
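
You can verify the interpreter version with:

python3 --version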

I use Arch Linux, so I will cover how to install it on Arch as well.

On Ubuntu, do not install the distro package ansible, because it is outdated.

Run the following commands:

python3 -m pip -V
# If pip is not installed
sudo apt install python3-pip
python3 -m pip install --user ansible # install ansible for your current user

Ansible will be installed for your current user. You need ~/.local/bin on your PATH. To do this, edit ~/.bashrc (or ~/.zshrc if you use zsh) and add the line:

export PATH=~/.local/bin:$PATH

Reopen your terminal (or source the file) for the change to take effect, and test the installation with:

ansible --version

On Arch Linux the community package is up to date so you can just:

sudo pacman -Sy ansible

Upgrading Ansible

To upgrade on Ubuntu, run the command:

python3 -m pip install --upgrade --user ansible

On Arch Linux:

sudo pacman -Syu ansible

Getting Started

Building an Inventory

An inventory is where you list your Linux machines and group them so you can run commands and playbooks against them. It provides connection information, like the username and the network IP address, reducing the number of command-line options you need to specify.

Managed nodes (the remote systems or hosts that Ansible controls) can be specified in inventory files or in the default file /etc/ansible/hosts. For an example of the hosts file, see below:

[myvirtualmachines]
192.0.2.50
192.0.2.51
192.0.2.52

It creates a group of machines called myvirtualmachines and adds three IP addresses to it.

When connecting to a host you need to specify the -u option for the username (if it is different from the currently logged-in user). To check the hosts, run the command:

ansible all --list-hosts

You should see the output:

hosts (3):
    192.0.2.50
    192.0.2.51
    192.0.2.52

To ping the hosts and check connectivity, run the command:

ansible all -m ping

Let's see how to use inventory files, which are a more powerful way to catalog your managed nodes.

Create a folder named ansible-test to hold the files for this article; all files will be created inside it.

Create a file named inventory.yaml (you can use INI files as well) and add the following example:

virtualmachines:
  hosts:
    vm01:
      ansible_user: vagrant
      ansible_host: 192.168.10.4
    vm02:
      ansible_user: vagrant
      ansible_host: 192.168.10.5
    vm03:
      ansible_user: vagrant
      ansible_host: 192.168.10.6

Verify your inventory:

ansible-inventory -i inventory.yaml --list
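
You can also print the group hierarchy as a tree with:

ansible-inventory -i inventory.yaml --graph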

I'm using a few options here because I'm using Hyper-V and Vagrant to spin up the machines. Let's explain them:

  • ansible_ssh_private_key_file: use a private key file generated by Vagrant at machine creation. This is optional and not shown in the inventory above; see the sketch below
  • ansible_user: the user that Ansible will connect as
  • ansible_host: the IP address of the VM
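
As a sketch, a host entry using a Vagrant-generated key could look like this (the key path is illustrative; Vagrant usually stores it under .vagrant/machines/<name>/<provider>/ in the project folder):

virtualmachines:
  hosts:
    vm01:
      ansible_user: vagrant
      ansible_host: 192.168.10.4
      ansible_ssh_private_key_file: .vagrant/machines/vm01/hyperv/private_key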

Let's ping your inventory. But first you need some Ubuntu Server machines: spin up a few in a virtualized environment and make sure you can connect to them over SSH. If you use Hyper-V and want to overcome some problems with it, check my blog post about Hyper-V, Vagrant and Ansible.

SSH checks the host key on the first connection, which effectively stops the run because we would need to type 'yes' for each host. To avoid that, add the following to your /etc/ansible/ansible.cfg file:

[ssh_connection]
ssh_args = -o StrictHostKeyChecking=accept-new
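
Alternatively, if you fully trust your network, you can disable host key checking entirely (less secure, so prefer accept-new):

[defaults]
host_key_checking = False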

Now we can:

ansible virtualmachines -m ping -i inventory.yaml

We receive something like this as a response:

vm03 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
vm02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
vm01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Your communication is set up. Let's see how we can run ad hoc commands.

Running Ad-Hoc Commands

Ad hoc commands are a way to run one-off commands against the target hosts. We can use them, for example, to update the packages on all hosts or to check the contents of a file on all hosts. They are not as powerful as playbooks, but they are useful.

Let's get started by checking the contents of a file on all machines. Run the command:

ansible virtualmachines -m shell -a "cat /etc/resolv.conf" -i inventory.yaml

You should see the output:

vm02 | CHANGED | rc=0 >>
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.

# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.

# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.

# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.

# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.

# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search .
vm01 | CHANGED | rc=0 >>
(same file contents as vm02)
vm03 | CHANGED | rc=0 >>
(same file contents as vm02)

This shows the output of the command for each machine.

Now let's update all packages on the hosts:

ansible virtualmachines -m shell -a "apt update && apt upgrade -y" --become -i inventory.yaml

We use the --become option to run the command as root. We also need to add -y to apt upgrade because we do not have access to an interactive shell, so we answer yes to everything up front.

This command takes some time to execute, and the result is the output of the apt command. If you add --ask-become-pass (or -K), Ansible will prompt you for the password to use for privilege escalation.

Let me explain the options of this command:

  • virtualmachines: the first parameter is a selector for the machines in the inventory. In this example we are selecting all machines in the group "virtualmachines"
  • -m: selects the module that Ansible will execute. Ansible is a highly modular tool, and even the core is modular. In this case we are selecting the "shell" module, which executes commands on the target machine
  • -a: module arguments. Here we provide the command for the shell module
  • --become: run the module as root using privilege escalation (sudo)
  • -i: selects the inventory

We used the generic shell module to run a command, but we can use any module. In this case a package module like yum or apt is more powerful for installing a specific package, because it is idempotent (it checks the package state and does nothing if the package is already as requested).

To install a package, use the command:

ansible virtualmachines -m ansible.builtin.apt -a "name=apache2 state=present" --become -i inventory.yaml

This command will ensure the apache2 package is installed. If you want to make sure it is also up to date, use:

ansible virtualmachines -m ansible.builtin.apt -a "name=apache2 state=latest" --become -i inventory.yaml

Another good use case for ad hoc commands is managing files using the built-in file and copy modules. The file module allows changing ownership and permissions on files, creating directories, and more:

# copy a file from the control node to the targets
ansible virtualmachines -m ansible.builtin.copy -a "src=/etc/hosts dest=/tmp/hosts"
# change the permissions of a file
ansible virtualmachines -m ansible.builtin.file -a "dest=/srv/foo/a.txt mode=600"
ansible virtualmachines -m ansible.builtin.file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"
# create a directory
ansible virtualmachines -m ansible.builtin.file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"
# delete a file or directory
ansible virtualmachines -m ansible.builtin.file -a "dest=/path/to/c state=absent"

Ansible also has the built-in user module to manage users and groups. Check its documentation.
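
For instance, to create a user and add it to the sudo group (the deploy user name here is just a placeholder):

ansible virtualmachines -m ansible.builtin.user -a "name=deploy state=present groups=sudo append=true" --become -i inventory.yaml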

Another useful thing to do with ad hoc commands is managing services using the service module. Examples:

Ensure a service is started on all webservers:

$ ansible webservers -m ansible.builtin.service -a "name=httpd state=started"

Alternatively, restart a service on all webservers:

$ ansible webservers -m ansible.builtin.service -a "name=httpd state=restarted"

Ensure a service is stopped:

$ ansible webservers -m ansible.builtin.service -a "name=httpd state=stopped"

Selecting Targets

There are a few ways you can select targets in the inventory. You can use a pattern to do that:

Description              Pattern(s)                     Targets
All hosts                all (or *)
One host                 host1
Multiple hosts           host1:host2 (or host1,host2)
One group                webservers
Multiple groups          webservers:dbservers           all hosts in webservers plus all hosts in dbservers
Excluding groups         webservers:!atlanta            all hosts in webservers except those in atlanta
Intersection of groups   webservers:&staging            any hosts in webservers that are also in staging

You can also combine these expressions. For example: webservers:dbservers:&staging:!phoenix

There is a wildcard option for FQDNs or IP addresses, as long as the hosts are named in your inventory by FQDN or IP address:

192.0.*
*.example.com
*.com

You can mix wildcard patterns and groups at the same time:

one*.com:dbservers

Limitations of patterns

Patterns depend on inventory. If a host or group is not listed in your inventory, you cannot use a pattern to target it. If your pattern includes an IP address or hostname that does not appear in your inventory, you will see an error like this:

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: Could not match supplied host pattern, ignoring: *.not_in_inventory.com

Your pattern must match your inventory syntax. If you define a host as an alias:

atlanta:
  hosts:
    host1:
      http_port: 80
      maxRequestsPerChild: 808
      host: 127.0.0.2

you must use the alias in your pattern. In the example above, you must use host1 in your pattern. If you use the IP address, you will once again get the error:

[WARNING]: Could not match supplied host pattern, ignoring: 127.0.0.2

Using regexes in patterns

You can specify a pattern as a regular expression by starting the pattern with ~:

~(web|db).*\.example\.com

Patterns and ad-hoc commands

You can change the behavior of the patterns defined in ad-hoc commands using command-line options. You can also limit the hosts you target on a particular run with the --limit flag.

  • Limit to one host
$ ansible all -m [module] -a "[module options]" --limit "host1"
  • Limit to multiple hosts
$ ansible all -m [module] -a "[module options]" --limit "host1,host2"
  • Negated limit. Note that single quotes MUST be used to prevent bash interpolation.
$ ansible all -m [module] -a "[module options]" --limit 'all:!host1'
  • Limit to host group
$ ansible all -m [module] -a "[module options]" --limit 'group1'

For more information about selection patterns, check the docs: https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html

Ansible Playbooks and Tasks

This is where the fun starts. A playbook is a repeatable, reusable configuration management and multi-machine deployment system. It is a good idea to version control your playbooks so you can back them up and share them with others.

Playbooks are written in YAML with an easy syntax. A playbook runs from top to bottom, and within each play, tasks also run in order from top to bottom. Playbooks can also include one another and orchestrate multi-machine deployments: one play for webservers, another for databases, another for network infrastructure, and so on. At a minimum, each play defines two things:

  • the managed nodes to target, using the patterns we saw earlier
  • at least one task to execute

Many Ansible modules (but not all of them) have idempotency built in: Ansible checks whether the desired state has already been achieved and exits without performing any actions if it has. If you repeat the execution of a playbook whose tasks use idempotent modules, the final state does not change.

Let's see an example of a playbook:

---
- name: Update web servers
  hosts: webservers
  become: true
  vars:
    httpd_config_folder: /etc/apache2
    vhost_domain: example.local
    proxy_timeout: 60
  tasks:
  - name: Ensure apache is at the latest version
    ansible.builtin.apt:
      name: apache2
      state: latest
  - name: Creates the html directory for site
    ansible.builtin.file:
      dest: '/var/www/vhosts/{{ vhost_domain }}/public_html'
      state: directory
  - name: Create the site index.html
    ansible.builtin.template:
      src: ./templates/index.html.j2
      dest: '/var/www/vhosts/{{ vhost_domain }}/public_html/index.html'
  - name: Config virtual host for site
    ansible.builtin.template:
      src: ./templates/site-vhost.conf.j2
      dest: '{{ httpd_config_folder }}/sites-available/{{ vhost_domain }}.conf'
    notify:
      - Restart apache
  - name: Disables apache default vhost which conflicts with others
    ansible.builtin.command: "a2dissite 000-default.conf"
    args:
      removes: '{{ httpd_config_folder }}/sites-enabled/000-default.conf'
    notify:
      - Restart apache
  - name: Enable VHost for Domain {{ vhost_domain }}  in apache
    ansible.builtin.command: "a2ensite {{ vhost_domain }}.conf"
    args:
      creates: '{{ httpd_config_folder }}/sites-enabled/{{ vhost_domain }}.conf'
    notify:
      - Restart apache
  - name: Ensure Apache2 is Started
    ansible.builtin.service:
      name: apache2
      state: started
  handlers:
    - name: Restart apache
      ansible.builtin.service:
        name: apache2
        state: restarted

- name: Update db servers
  hosts: databases
  become: true

  tasks:
  - name: Ensure postgresql is at the latest version
    ansible.builtin.apt:
      name: postgresql
      state: latest
  - name: Ensure that postgresql is started
    ansible.builtin.service:
      name: postgresql
      state: started

This playbook does a lot of things:

  1. On the webservers group, installs apache2 and makes sure it is up to date
  2. Creates the public html folder for a vhost site
  3. Creates an index.html for the vhost
  4. Configures the vhost using a template that uses some variables, like proxy_timeout and vhost_domain
  5. Disables the Apache default vhost, which conflicts with all others, if the file exists. When changed, notifies a handler (trigger) that will restart Apache
  6. Enables the vhost site. Once the symlink exists in the enabled sites it will not run again. When changed, notifies a handler that will restart Apache
  7. Ensures Apache is started
  8. Declares a handler for Apache restarts. Handlers are executed at the end of the play, and only if notified by some task
  9. Installs postgresql on the databases group and ensures it is up to date
  10. Ensures the PostgreSQL server is started

The inventory for this group of machines could be something like:

virtualmachines:
  hosts:
    vm01:
      ansible_user: vagrant
      ansible_host: 192.168.10.4
    vm02:
      ansible_user: vagrant
      ansible_host: 192.168.10.5
    vm03:
      ansible_user: vagrant
      ansible_host: 192.168.10.6
webservers:
  hosts:
    vm01:
    vm02:
databases:
  hosts:
    vm03:

And the templates are as follows:

index.html.j2 template

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Ansible Getting Started</title>
  </head>
  <body>
    <main>
        <h1>Welcome to {{ vhost_domain }}</h1>  
    </main>
  </body>
</html>

site-vhost.conf.j2 template:

<VirtualHost *:80>
    ServerName {{ vhost_domain }}
    Timeout {{ proxy_timeout }}
    KeepAliveTimeout {{ proxy_timeout }}
    MaxKeepAliveRequests 0

    ServerAlias www.{{ vhost_domain }}

    DocumentRoot /var/www/vhosts/{{ vhost_domain }}/public_html

    <Directory /var/www/vhosts/{{ vhost_domain }}/public_html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
    </Directory>



    CustomLog /var/log/apache2/{{ vhost_domain }}-access.log combined

    ErrorLog /var/log/apache2/{{ vhost_domain }}-error.log
    # Possible values include: debug, info, notice, warn, error, crit,

    # alert, emerg.

    LogLevel warn

</VirtualHost>

The .j2 extension denotes Jinja2 templates, which is the syntax used inside them.
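
Assuming you saved the playbook as site.yaml and added the webservers and databases groups to inventory.yaml as shown above, you can run it with:

ansible-playbook -i inventory.yaml site.yaml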

If you run this playbook on the machines, you can check the HTML by adding a local hostname line to your hosts file (/etc/hosts) like the one below:

192.168.10.5 example.local

Any IP from the webservers group should work.

You can check the site with curl:

curl example.local

Ansible Modules and Collections

A module is a reusable, standalone script or binary that Ansible copies to and executes on each node (when needed) to accomplish the action defined in each task. Modules are specific; for example: administering users on a particular database, or managing VLAN interfaces on a network device. Each task invokes a single module, and a playbook typically invokes several different modules.

Since collections were introduced, Ansible recommends using the fully qualified collection name of a module (for example ansible.builtin.copy) to avoid conflicts between modules that share a short name.
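
For example, both tasks below run the same module, but the second form is unambiguous:

- copy:                     # short name, may clash with a copy module from another collection
    src: foo.txt
    dest: /tmp/foo.txt

- ansible.builtin.copy:     # fully qualified collection name
    src: foo.txt
    dest: /tmp/foo.txt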

The latest version of Ansible documentation talks about collections, which are a standard way to distribute playbooks, plugins, roles, and modules.

You can install collections with the ansible-galaxy command or with a requirements file, which you can distribute alongside your playbook code.
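
For example, a requirements.yml like the following (the collections listed are just examples) installs everything in one shot:

collections:
  - name: community.general
  - name: community.docker

ansible-galaxy collection install -r requirements.yml

You can also install a single collection directly with ansible-galaxy collection install community.general.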

The default installation of Ansible comes with some collections pre-installed. You can check them with the command ansible-galaxy collection list

There is a lot you can do with collections, like referencing them in playbooks, importing playbooks distributed in collections, and more. Check the documentation: https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#using-collections-in-a-playbook

We already saw basic usage of the builtin collection in our previous playbook example.

For a list of all modules, check this page: https://docs.ansible.com/ansible/latest/collections/index_module.html

Some interesting collections are:

  • amazon.aws - Manages AWS resources
  • community.aws - Community-maintained modules for AWS resources
  • ansible.builtin - The built-in collection with many useful tools
  • ansible.posix - Tools to manage ACLs, add SSH authorized keys, synchronize files with rsync, and more
  • azure.azcollection - Manages Azure resources
  • community.azure - Community-maintained modules for Azure resources
  • community.digitalocean - Manages DigitalOcean resources
  • community.dns - Manages DNS zones on some providers
  • community.docker - Manages Docker containers
  • community.general - Many community modules for tools like Keycloak, InfluxDB, FreeIPA, and others
  • community.google - Community-maintained modules for Google Cloud resources
  • google.cloud - Manages Google Cloud resources (more complete)
  • community.mysql - Manages MySQL databases
  • community.postgresql - Manages PostgreSQL databases
  • community.rabbitmq - Manages RabbitMQ exchanges, bindings, queues, and other parameters
  • community.vmware - Manages VMware virtualized infrastructure
  • kubernetes.core - Core functionality for Kubernetes clusters

And there are many more; you can also search the Galaxy repository: https://galaxy.ansible.com/

Handlers

In the previous playbook we saw a handler in action. It restarts Apache only when needed, or in other words, only when one of the tasks reports a change and triggers the notification. We need to declare the handler and to notify it. Here is another example:

tasks:
- name: Template configuration file
  ansible.builtin.template:
    src: template.j2
    dest: /etc/foo.conf
  notify:
    - Restart apache
    - Restart memcached

handlers:
  - name: Restart memcached
    ansible.builtin.service:
      name: memcached
      state: restarted

  - name: Restart apache
    ansible.builtin.service:
      name: apache
      state: restarted

This example notifies two handlers: Restart memcached and Restart apache.

You can also give handlers different names and make them listen to the same notification topic. In the following example, both handlers will be executed:

tasks:
  - name: Restart everything
    command: echo "this task will restart the web services"
    notify: "restart web services"

handlers:
  - name: Restart memcached
    service:
      name: memcached
      state: restarted
    listen: "restart web services"

  - name: Restart apache
    service:
      name: apache
      state: restarted
    listen: "restart web services"

Controlling when handlers run

By default, handlers run after all tasks in a particular play have completed. With this approach a handler runs only once, regardless of how many tasks notify it.

If you need to run handlers before the end of the play, add a task that flushes them at the point you want, using the meta module. Example:

tasks:
  - name: Some tasks go here
    ansible.builtin.shell: ...

  - name: Flush handlers
    meta: flush_handlers

  - name: Some other tasks
    ansible.builtin.shell: ...

For more information about handlers check the documentation: https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html

Roles and the recommended layout for playbooks

You can separate your playbooks into roles and then apply those roles to target nodes. This way the roles live in their own files and can be reused. Roles also store defaults, handlers, variables, and tasks in separate directories, instead of a single long document. But first, let's talk about the recommended layout for organizing Ansible playbooks; it will make clearer how roles work.

The directory layout recommended in the Ansible documentation is:

production                # inventory file for production servers
staging                   # inventory file for staging environment

group_vars/
   group1.yml             # here we assign variables to particular groups
   group2.yml
host_vars/
   hostname1.yml          # here we assign variables to particular systems
   hostname2.yml

library/                  # if any custom modules, put them here (optional)
module_utils/             # if any custom module_utils to support modules, put them here (optional)
filter_plugins/           # if any custom filter plugins, put them here (optional)

site.yml                  # master playbook
webservers.yml            # playbook for webserver tier
dbservers.yml             # playbook for dbserver tier

roles/
    common/               # this hierarchy represents a "role"
        tasks/            #
            main.yml      #  <-- tasks file can include smaller files if warranted
        handlers/         #
            main.yml      #  <-- handlers file
        templates/        #  <-- files for use with the template resource
            ntp.conf.j2   #  <------- templates end in .j2
        files/            #
            bar.txt       #  <-- files for use with the copy resource
            foo.sh        #  <-- script files for use with the script resource
        vars/             #
            main.yml      #  <-- variables associated with this role
        defaults/         #
            main.yml      #  <-- default lower priority variables for this role
        meta/             #
            main.yml      #  <-- role dependencies
        library/          # roles can also include custom modules
        module_utils/     # roles can also include custom module_utils
        lookup_plugins/   # or other types of plugins, like lookup in this case

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""

Alternatively, you can put each inventory file with its group_vars and host_vars in a separate directory. This is useful if your group_vars and host_vars don't have that much in common between environments. The layout could look like this:

inventories/
   production/
      hosts               # inventory file for production servers
      group_vars/
         group1.yml       # here we assign variables to particular groups
         group2.yml
      host_vars/
         hostname1.yml    # here we assign variables to particular systems
         hostname2.yml

   staging/
      hosts               # inventory file for staging environment
      group_vars/
         group1.yml       # here we assign variables to particular groups
         group2.yml
      host_vars/
         stagehost1.yml   # here we assign variables to particular systems
         stagehost2.yml

library/
module_utils/
filter_plugins/

site.yml
webservers.yml
dbservers.yml

roles/
    common/
    webtier/
    monitoring/
    fooapp/

In the roles directory you separate tasks, default variables, templates, and handlers, and then apply them using targets in your playbook. Let's change our Apache example to follow this best practice.

The folder structure will be:

inventories
  local-hyperv
    group_vars
      webservers.yaml  
    hosts.yaml
  production
    group_vars
      webservers.yaml
    hosts.yaml
roles
  postgresql
    tasks
      main.yaml
  webserver
    handlers
      main.yaml
    tasks
      main.yaml
    templates
      index.html.j2
      site-vhost.conf.j2
    vars
      main.yaml
databases.yaml
site.yaml
webservers.yaml

Inside the inventories folder we create two environments: local-hyperv and production. We will override some variables to customize the playbooks in the group_vars folder, which overrides variables for groups of servers (there is also a host_vars folder that can be used to customize a single host). In this example we create a webservers.yaml, which overrides the variables for the webservers group.

local-hyperv webservers.yaml:

vhost_domain: example.local
proxy_timeout: 60

production webservers.yaml:

vhost_domain: example-prod.local
proxy_timeout: 80

The hosts.yaml files differ only in the IP addresses of the servers.

local-hyperv hosts.yaml:

virtualmachines:
  hosts:
    vm01:
      ansible_user: vagrant
      ansible_host: 192.168.10.4
    vm02:
      ansible_user: vagrant
      ansible_host: 192.168.10.5
    vm03:
      ansible_user: vagrant
      ansible_host: 192.168.10.6
webservers:
  hosts:
    vm01:
    vm02:
databases:
  hosts:
    vm03:

production hosts.yaml:

virtualmachines:
  hosts:
    vm01:
      ansible_user: vagrant
      ansible_host: 192.168.10.7
    vm02:
      ansible_user: vagrant
      ansible_host: 192.168.10.8
    vm03:
      ansible_user: vagrant
      ansible_host: 192.168.10.9
webservers:
  hosts:
    vm01:
    vm02:
databases:
  hosts:
    vm03:

We then create the roles folder. The postgresql role just sets up some tasks:

roles/postgresql/tasks/main.yaml

- name: Ensure postgresql is at the latest version
  ansible.builtin.apt:
    name: postgresql
    state: latest
- name: Ensure that postgresql is started
  ansible.builtin.service:
    name: postgresql
    state: started

The webserver role is a little more complete; it is separated into handlers, tasks, templates, and vars:

roles/webserver/handlers/main.yaml

- name: Restart apache
  ansible.builtin.service:
    name: apache2
    state: restarted

roles/webserver/templates/index.html.j2

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Ansible Getting Started</title>
  </head>
  <body>
    <main>
        <h1>Welcome to {{ vhost_domain }}</h1>  
    </main>
  </body>
</html>

roles/webserver/templates/site-vhost.conf.j2

<VirtualHost *:80>
    ServerName {{ vhost_domain }}
    Timeout {{ proxy_timeout }}
    KeepAliveTimeout {{ proxy_timeout }}
    MaxKeepAliveRequests 0

    ServerAlias www.{{ vhost_domain }}

    DocumentRoot /var/www/vhosts/{{ vhost_domain }}/public_html

    <Directory /var/www/vhosts/{{ vhost_domain }}/public_html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
    </Directory>



    CustomLog /var/log/apache2/{{ vhost_domain }}-access.log combined

    ErrorLog /var/log/apache2/{{ vhost_domain }}-error.log
    # Possible values include: debug, info, notice, warn, error, crit,

    # alert, emerg.

    LogLevel warn

</VirtualHost>

roles/webserver/vars/main.yaml

httpd_config_folder: /etc/apache2
vhost_domain: example.local
proxy_timeout: 60

roles/webserver/tasks/main.yaml

- name: Ensure apache is at the latest version
  ansible.builtin.apt:
    name: apache2
    state: latest
- name: Creates the html directory for site
  ansible.builtin.file:
    dest: '/var/www/vhosts/{{ vhost_domain }}/public_html'
    state: directory
- name: Create the site index.html
  ansible.builtin.template:
    src: index.html.j2  # within a role, the role's templates/ directory is searched automatically
    dest: '/var/www/vhosts/{{ vhost_domain }}/public_html/index.html'
- name: Config virtual host for site
  ansible.builtin.template:
    src: site-vhost.conf.j2
    dest: '{{ httpd_config_folder }}/sites-available/{{ vhost_domain }}.conf'
  notify:
    - Restart apache
- name: Disables apache default vhost which conflicts with others
  ansible.builtin.command: "a2dissite 000-default.conf"
  args:
    removes: '{{ httpd_config_folder }}/sites-enabled/000-default.conf'
  notify:
    - Restart apache
- name: Enable VHost for Domain {{ vhost_domain }}  in apache
  ansible.builtin.command: "a2ensite {{ vhost_domain }}.conf"
  args:
    creates: '{{ httpd_config_folder }}/sites-enabled/{{ vhost_domain }}.conf'
  notify:
    - Restart apache
- name: Ensure Apache2 is Started
  ansible.builtin.service:
    name: apache2
    state: started

As you can see, we just refactored our previous site.yaml into separate chunks. Ansible understands this role layout and puts things together for us.

To finish, let's check databases.yaml, webservers.yaml, and site.yaml:

databases.yaml

---
- hosts: databases
  become: true
  roles:
    - postgresql

webservers.yaml

---
- hosts: webservers
  become: true
  roles:
    - webserver

site.yaml

- import_playbook: webservers.yaml
- import_playbook: databases.yaml

The databases.yaml file targets the databases group and applies the postgresql role; the webservers.yaml file does the same for the webservers group. Finally, site.yaml glues the entire playbook together by importing the other playbooks.

We can then run this playbook on the local-hyperv environment with the command:

ansible-playbook -i inventories/local-hyperv site.yaml

If we need to change environments, we just point -i at a different inventory folder.
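
For example, to run the same playbook against the production environment:

ansible-playbook -i inventories/production site.yaml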

Final Thoughts

In this tutorial we covered the basics of Ansible so you can get started automating and documenting your infrastructure in shareable, replicable playbooks. One of the core advantages of Ansible is its low starting requirements and easy-to-understand semantics.

Ansible Tower and the Ansible Automation Platform are two Red Hat products that cover some limitations of Ansible Core, for example its distributed authentication. It is the user's responsibility to have root access to all machines through SSH, and maintaining and rotating those credentials can be a pain in a big team. Another limitation of Ansible Core appears if you want to control who has access to different machines and environments, for the same reasons as above, and also to track which playbook has been applied and who applied it.

But Ansible Core will serve you well if you do not have these specific requirements.

There are, of course, some alternatives and complements to Ansible. I can mention a few with a brief summary of what each does:

  • SaltStack - Ansible's main competitor; it has a different architecture, using an agent-based approach and a reactive, event-driven infrastructure. Event-driven means you can react to events on systems and proactively apply configurations. It is also more performant than Ansible: it can run commands on thousands of systems in seconds. The agent approach means you need a master server; the advantage is that, out of the box, Salt centralizes the authentication of systems on the master node, so you don't have to mess with distributed keys. Salt can also be used without the master-client architecture, via salt-ssh or salt-proxy. It has a lot of vocabulary for its components (salt mine, salt minion, pillar, grains, wheel, etc.), which can be a little confusing at the start. It is worth mentioning that it also has powerful tools in its ecosystem, like SaltStack Config, which enables role-based access control, multi-master support, reporting, and more. More info at https://saltproject.io/
  • Terraform - Terraform has a different objective than Ansible. Terraform focuses on provisioning infrastructure; Ansible focuses on what comes after provisioning. That means Terraform is good for automating the allocation and configuration of cloud infrastructure resources. Terraform uses a custom language to define its resources. It is complementary to Ansible. More info at https://www.terraform.io/
  • Pulumi - A competitor to Terraform; its main focus is also infrastructure provisioning. The main difference is that it uses mainstream programming languages such as JavaScript, TypeScript, Python, Go, C#, and Java, as well as YAML. It is complementary to Ansible. More info at https://www.pulumi.com/

All examples for this tutorial can be downloaded from https://github.com/giovannicandido/ansible-blog-post

References

https://docs.ansible.com/ansible-core/2.13/index.html

https://docs.ansible.com/ansible-core/2.13/getting_started/get_started_inventory.html

https://docs.ansible.com/ansible-core/2.13/user_guide/intro_adhoc.html#intro-adhoc

https://www.reddit.com/r/ansible/comments/92ds1w/accept_all_host_keys_one_time/

https://docs.ansible.com/ansible/2.8/user_guide/playbooks_best_practices.html