Announcing the Community Ansible 3.0.0 Package

Version 3.0.0 of the Ansible community package marks the end of the restructuring of the Ansible ecosystem. This work culminates an effort that began in 2019 to restructure the Ansible project and reshape how Ansible content is delivered. Starting with Ansible 3.0.0, the versioning and naming reflect the new structure of the project in the following ways:

  1. The versioning methodology for the Ansible community package now adopts semantic versioning, and begins to diverge from the versions of the Ansible Core package (which contains the Ansible language and runtime)
  2. The core package, named ansible-base in version 2.10, will be renamed to ansible-core in version 2.11 for consistency

First, a little history. In Ansible 2.9 and prior, every plugin and module was in the Ansible project (https://github.com/ansible/ansible) itself. When you installed the "ansible" package, you got the language, runtime, and all content (modules and other plugins). Over time, the overwhelming popularity of Ansible created scalability concerns. Users had to wait many months for updated content. Developers had to rely on Ansible maintainers to review and merge their content. These obvious bottlenecks needed to be addressed. 

During the Ansible 2.10 development cycle, the Ansible community successfully migrated most modules and plugins into Collections. Collections, and the modules and plugins within them, could now be developed, updated and released independently of Ansible itself. When the migration was done, what remained in the core project started shipping as ansible-base, and the Ansible Community Team created a new Ansible community package. The Ansible 2.10 community package included ansible-base 2.10 plus all the Collections that contain modules and plugins that were migrated out of the original repository. As of Ansible 2.10, community users had two options:  continue installing everything with the Ansible community package, or install ansible-base and then add selected Collections individually.

Today, there are 3 distinct artefacts in the Ansible open source world:

  • Ansible Core - A minimal Ansible language and runtime (soon to be renamed from ansible-base)
  • Ansible Collections on Galaxy (community supported)
  • Ansible community package - Ansible installation including ansible-base/core plus community curated Collections

Now that these artefacts are managed separately, their versions are diverging as well. Moving forward, Ansible Core will maintain its existing numbering scheme (similar to the Linux Kernel). The next version of Ansible Core after ansible-base 2.10 will be ansible-core 2.11. The Ansible community package (Ansible Core + community Collections) is adopting semantic versioning. The next version of the Ansible community package after 2.10 is 3.0.0.

How the package is maintained and created has changed, but when you install the Ansible community package, you still get the functionality that existed in Ansible 2.9, with newer versions of modules and plugins. Ansible 3.0.0 includes more than 85 Collections containing thousands of modules and other plugins.

With so many changes happening at once for the Ansible community, we thought it was a good idea to put together a Q&A that can be found on our blog.




Ansible 3.0.0 Q&A

The Ansible community team has announced the release of Ansible 3.0.0 and here are the questions about the release that we've heard from community members so far. If you have a question that is not answered below, let us know on the mailing lists or IRC.

How can I stay up to date with changes in the Ansible community?

Subscribe to the ansible-announce mailing list for release announcements and to the Bullhorn newsletter for community news. The Bullhorn is distributed every two weeks with key dates and updates. You may also consider registering for the Ansible contributor summit on March 9, 2021.

About the Ansible community package and ansible-base/ansible-core

Are there any changes to the Ansible language in 3.0.0?

There are no significant changes since the Ansible 3.0.0 package depends on the same version of ansible-base as Ansible 2.10.x.

Why are the versions of ansible-base/ansible-core packages diverging from the Ansible package?

When the Ansible Community Team set out to restructure the Ansible project, Ansible was split into the following components: 

  • The core engine, modules and plugins
  • Community and partner supported Ansible Collections of modules and plugins

The former became known as ansible-base, soon to be ansible-core. The latter became additions on top of the core, available either ad-hoc or as part of the Ansible community package, which includes a set of curated and maintained Collections.

Semantic versioning of the Ansible package will let us signal backwards-compatibility as well as breaking changes in the included Collections independently of the core engine.

Because these are distinct components with different purposes, it is appropriate for them to be versioned independently of each other.

Will ansible-base/ansible-core also adopt semantic versioning?

No, the team managing ansible-core does not currently plan to adopt semantic versioning.

What is the correlation between Ansible 3.0.0 and ansible-base 2.10.x?

Ansible 3.0.0 is a package that includes over 85 Ansible Collections. It doesn't include ansible-base: it depends on it and specifies a required version range such as ansible-base>=2.10.6,<2.11 so that the appropriate core package gets installed automatically. For Ansible 4.0.0, this dependency will shift to ansible-core>=2.11,<2.12 instead.

ansible-base 2.10.x (as well as ansible-core in the near future) will continue to be available as a standalone package for users that prefer installing only the Collections they need.

How is the range of included Collection versions established?

The release build tooling queries the latest version of included Collections and determines the upper-limit based on that version.

For example, if a collection's version is 1.5, the range would be >=1.5,<2.0. If the collection's version is 2.3, the range would be >=2.3,<3.0.

The general idea is to keep Collections within a single major version throughout the lifecycle of a single Ansible package major version.
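As a purely illustrative sketch of that idea (the Collection names and version numbers below are hypothetical examples, not the actual pins produced by the release tooling), the resulting constraints for an Ansible 3.x build could conceptually look like this:

---
# Hypothetical dependency ranges for an Ansible 3.x build: each range starts
# at the Collection version that was current when Ansible 3.0.0 was built
# and stays below the next major version.
community.general: ">=2.0.0,<3.0.0"
ansible.netcommon: ">=1.4.0,<2.0.0"
cisco.nxos: ">=1.3.0,<2.0.0"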

What version will ansible --version return?

ansible --version will return the version of ansible-base, not the version of the Ansible package, because ansible-base is the one providing the ansible command.

Installing and upgrading

How can I install Ansible 3.0.0?

The Ansible 3.0.0 Community package is released to PyPI and can be installed with pip install ansible==3.0.0.

Can I upgrade to Ansible 3.0.0 from previous versions of Ansible? If so which ones?

Yes, but the command to upgrade is different depending on the version you have now.

  • To upgrade to Ansible-3.0 from Ansible-2.10: pip install --upgrade ansible.
  • To upgrade to Ansible-3.0 from Ansible-2.9 or earlier: pip uninstall ansible; pip install ansible. This is due to a limitation in pip.

Ansible 3.0.0 is based on ansible-base 2.10, so playbook syntax remains the same between Ansible-2.10 and Ansible-3.0. However, there may be incompatibilities in some modules and plugins as Ansible-3.0.0 allows backwards-incompatible changes in Collections.

Will I be able to upgrade to Ansible 4.0.0 from Ansible 3.0.0?

Yes, but you will have to uninstall and reinstall, due to the renaming of ansible-base to ansible-core: pip uninstall ansible; pip install ansible.

Ansible 4.0.0 will be based on ansible-core 2.11, so playbook syntax in Ansible 4.0.0 may include backwards incompatible changes (ansible-core does not use semantic versioning, so updates to the minor version can contain backwards incompatible changes).  When Ansible 4.0.0 is ready to start its pre-release cycle, porting guides will be available to help guide you through those changes.

Release cadence and scope

What is the release cadence moving forward?

Minor version releases of the Ansible package (such as 3.1.0, 3.2.0) are planned for every three weeks.  These releases will include new backwards-compatible features, modules and plugins as well as bug fixes.

Major version releases of the Ansible package (such as 4.0.0, 5.0.0) will happen after new releases of ansible-core. The Ansible 4.0.0 release is planned for May 2021, soon after the release of ansible-core 2.11 in April. After 4.0.0, a six month release cycle for major versions will become the normal cadence, with 5.0.0 releasing in November, trailing the planned 2.12 release of ansible-core.

How much change will each minor and major version of Ansible contain?

Each minor release of the Ansible community package will accept only backwards-compatible changes in included Collections. Collections must also use semantic versioning, so the Collection version numbers will reflect this rule. For example, if Ansible 3.0.0 releases with community.general 2.0.0, then all minor releases of Ansible 3.x (such as Ansible 3.1.0 or Ansible 3.5.0) would include a 2.x release of community.general (such as 2.8.0 or 2.9.5).

Each major release of the Ansible community package will accept the latest released version of each included Collection and may include the latest released version of ansible-core. Major releases of the Ansible community package can contain breaking changes in the modules and other plugins within the included Collections and/or in core features.

What changes will each patch release contain, given the use of semantic versioning here?

Patch releases will be used only when bugs are discovered that warrant a targeted fix with a quick turnaround. For instance, if a packaging bug is discovered in our release of 3.1.0 that prevents Debian packages from being built, a 3.1.1 release may occur the next day that fixes that issue. No new features are allowed in patch releases.

Packaging

Will Ansible 3.0.0 be made available as an upstream RPM?

No. RPM-based Linux distros, such as Fedora, have been creating superior RPM packages of Ansible for a while now. So we decided that, starting with Ansible-2.10 and ansible-base-2.10, the Ansible project would no longer provide pre-built RPMs.

Will Ansible 3.0.0 be available on Ubuntu Launchpad?

Yes. The Ansible Community Team is catching up to the changes in how the Ansible content is packaged but plans to have releases in the PPA soon. The team is currently testing a new GitHub action to build the debs for the PPA.

Terminology

  • The ansible package

An all-in-one software package (Python, deb, rpm, etc) that provides backwards compatibility with Ansible 2.9 by including modules and plugins that have since been migrated to Ansible Collections.

The Ansible package depends on ansible-base (soon ansible-core). So when you do pip install ansible, pip installs ansible-base automatically.

Ansible 3.0.0 contains more Collections thanks to the wider Ansible community reviewing Collections against the community checklist. This list of what's included can be found at ansible-build-data.

  • Collection

A packaging format for bundling and distributing Ansible content: plugins, roles, modules, playbooks, documentation and more. Collections can be released independently of other Collections or ansible-base, so features and bug fixes can be made available to users sooner. They can be installed from source repositories, from galaxy.ansible.com via ansible-galaxy collection install <namespace.collection>, or using a requirements.yml file.
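For instance, a minimal requirements.yml could look like the following sketch (the Collection names and the version range are only examples):

---
collections:
  # install the latest available version
  - name: ansible.utils
  # constrain to a semantic-versioning-compatible range
  - name: community.general
    version: ">=2.0.0,<3.0.0"

Running ansible-galaxy collection install -r requirements.yml then installs the listed Collections from Galaxy.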

  • ansible-base

New for 2.10. The codebase that is now contained in github.com/ansible/ansible for the Ansible 2.10 release. It contains a minimal set of modules and plugins and allows other Collections to be installed. It is similar to Ansible 2.9, though without any content that has since moved into a Collection.

Renamed to ansible-core in the devel branch of Ansible and will be released under that name from version 2.11 onwards.

  • Red Hat Ansible Automation Platform

The commercially available enterprise offering from Red Hat, combining multiple Ansible focused projects, including ansible-core, awx, galaxy_ng, Collections and various Red Hat tools focused on an integrated Ansible user experience.







Using New Ansible Utilities for Operational State Management and Remediation

Comparing the current operational state of your IT infrastructure to your desired state is a common use case for IT automation. This allows automation users to identify drift or problem scenarios, take corrective actions, and even proactively identify and solve problems. This blog post will walk through the automation workflow for validation of operational state and even automatic remediation of issues.

We will demonstrate how the Red Hat supported and certified Ansible content can be used to:

  • Collect the current operational state from the remote host and convert it into normalised structured data.
  • Define the desired state criteria in a standards-based format that can be used across enterprise infrastructure teams.
  • Validate the current state data against the pre-defined criteria to identify if there is any deviation.
  • Take corrective remediation action as required.
  • Validate input data against the data model schema.

Gathering state data from a remote host

The recently released ansible.utils version 1.0.0 Collection has added support for the ansible.utils.cli_parse module, which converts text data into structured JSON format. The module can either execute the command on the remote endpoint and fetch the text response, or read the text from a file on the control node, and convert it into structured data. It works with traditional Linux servers as well as vendor appliances, such as network devices that don't have the ability to execute Python, and it relies on well-known text parser libraries for this conversion. The currently supported CLI parser sub-plugin engines are listed below:

  1. ansible.utils.textfsm Uses textfsm python library
  2. ansible.utils.ttp Uses ttp python library
  3. ansible.netcommon.native Uses netcommon inbuilt parser engine
  4. ansible.netcommon.ntc_templates Uses ntc_templates python library
  5. ansible.netcommon.pyats Uses pyats python library
  6. ansible.utils.xml Uses xmltodict python library
  7. ansible.utils.json


The examples described in this blog use a Cisco network switch running NXOS version 7.3(0)D1(1) as the remote endpoint and Ansible version 2.9.15 on the control node, and they require the ansible.utils, ansible.netcommon and cisco.nxos Collections to be installed on the control node.

The below Ansible snippet fetches the operational state of the interfaces and translates it to structured data using the ansible.netcommon.pyats parser. This parser requires the pyats library to be installed on the control node.

---
- hosts: nxos
  connection: ansible.netcommon.network_cli
  gather_facts: false
  vars:
    ansible_network_os: cisco.nxos.nxos
    ansible_user: "changeme"
    ansible_password: "changeme"

  tasks:
  - name: "Fetch interface state and parse with pyats"
    ansible.utils.cli_parse:
      command: show interface
      parser:
        name: ansible.netcommon.pyats
    register: nxos_pyats_show_interface

  - name: print structured interface state data
    ansible.builtin.debug:
      msg: "{{ nxos_pyats_show_interface['parsed'] }}"

The value of the command option in the ansible.utils.cli_parse task is the command that should be executed on the remote host. Alternatively, the task can accept a text option that takes the value directly in string format, which can be used when the response of the command has already been prefetched. The name option under the parser parent option can be any one of the above-discussed parser sub-plugins.
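As a minimal sketch of that alternative (assuming the show interface output was previously saved to a local file named show_interface_output.txt, a hypothetical file name), the same parsing can be done from prefetched text:

- name: "Parse prefetched show interface output with pyats"
  ansible.utils.cli_parse:
    # text is used instead of command, so nothing is executed on the device
    text: "{{ lookup('file', 'show_interface_output.txt') }}"
    parser:
      name: ansible.netcommon.pyats
      # tells the parser which template to apply to the text
      command: show interface
  register: nxos_pyats_parsed_from_text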

After running the playbook, the output of the ansible.utils.cli_parse task for the given host is as shown for reference:

ok: [nxos] => {
   "changed": false,
   "parsed": {
       "Ethernet2/1": {
           "admin_state": "down",
           "auto_mdix": "off",
           "auto_negotiate": false,
           "bandwidth": 1000000,
           "beacon": "off"
           <--snip-->
       },
       "Ethernet2/10": {
           "admin_state": "down",
           "auto_mdix": "off",
           "auto_negotiate": false,
           "bandwidth": 1000000,
           "beacon": "off",
           <--snip-->
       }
    }
}

Notice that the value of the admin_state key for some of the interfaces is down; for the complete output, refer here.

Defining state criteria and validation

The ansible.utils Collection has added support for the ansible.utils.validate module, which validates the input JSON data against the provided criteria using the selected validation engine. The currently supported validation engine is jsonschema; in the future, support for additional validation engines will be added on an as-needed basis.

In the above section, we fetched the interface state and converted it to structured JSON data. Suppose we want the desired admin state of all the interfaces to always be up; the criteria for jsonschema will look like:

$cat criterias/nxos_show_interface_admin_criteria.json
{
        "type" : "object",
        "patternProperties": {
                "^.*": {
                        "type": "object",
                        "properties": {
                                "admin_state": {
                                        "type": "string",
                                        "pattern": "up"
                                }
                        }
                }
        }
}

After the criteria for the desired state of the resource is defined, it can be used with the ansible.utils.validate module to check if the current state of the resource matches the desired state, as shown in the below task.

- name: validate interface for admin state
  ansible.utils.validate:
    data: "{{ nxos_pyats_show_interface['parsed'] }}"
    criteria:
      - "{{ lookup('file',  './criterias/nxos_show_interface_admin_criteria.json') | from_json }}"
    engine: ansible.utils.jsonschema
  ignore_errors: true
  register: result

- name: print the interface names that do not satisfy the desired state
  ansible.builtin.debug:
    msg: "{{ item['data_path'].split('.')[0] }}"
  loop: "{{ result['errors'] }}"
  when: "'errors' in result"

The data option of the ansible.utils.validate task accepts a JSON value; in this case, it is the parsed output from the ansible.utils.cli_parse module as discussed above. The value of the engine option is the sub-plugin name of the validate module, ansible.utils.jsonschema, and it identifies the underlying validation library to be used; in this case, we are using the jsonschema library. The value of the criteria option can be a list and should be in a format defined by the validation engine used. For the above task to run, the jsonschema library must be installed on the control node. The output of the above task run will be a list of errors indicating interfaces whose admin_state value is not up. The reference output can be seen here (note: the output will vary based on the state of the interfaces on the remote host).

TASK [validate interface for admin state] ***********************************************************************************************************
fatal: [nxos02]: FAILED! => {"changed": false, "errors": [{"data_path": "Ethernet2/1.admin_state", "expected": "up", "found": "down", "json_path": "$.Ethernet2/1.admin_state", "message": "'down' does not match 'up'", "relative_schema": {"pattern": "up", "type": "string"}, "schema_path": "patternProperties.^.*.properties.admin_state.pattern", "validator": "pattern"}, {"data_path": "Ethernet2/10.admin_state", "expected": "up", "found": "down", "json_path": "$.Ethernet2/10.admin_state", "message": "'down' does not match 'up'", "relative_schema": {"pattern": "up", "type": "string"}, "schema_path": "patternProperties.^.*.properties.admin_state.pattern", "validator": "pattern"}], "msg": "Validation errors were found.\nAt 'patternProperties.^.*.properties.admin_state.pattern' 'down' does not match 'up'. \nAt 'patternProperties.^.*.properties.admin_state.pattern' 'down' does not match 'up'. \nAt 'patternProperties.^.*.properties.admin_state.pattern' 'down' does not match 'up'. "}
...ignoring
TASK [print the interface names that do not satisfy the desired state] ****************************************************************************
Monday 14 December 2020  11:05:38 +0530 (0:00:01.661)       0:00:28.676 *******
ok: [nxos] => {
   "msg": "Ethernet2/1"
}
ok: [nxos] => {
   "msg": "Ethernet2/10"
}

PLAY RECAP ******************************************************************************************************************************************
nxos                       : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=1

As seen from the above output, the interfaces Ethernet2/1 and Ethernet2/10 are not in the desired state as per the defined criteria.

Remediation

Based on the output of the ansible.utils.validate task, a number of remediation actions can be taken using Ansible modules for configuration management and/or reporting. In our case, we will use the cisco.nxos.nxos_interfaces resource module to bring the given interfaces to the admin up state, as shown in the below snippet.

- name: Configure interface with drift in admin up state
  cisco.nxos.nxos_interfaces:
    config:
    - name: "{{ item['data_path'].split('.')[0] }}"
      enabled: true
  loop: "{{ result['errors'] }}"
  when: "'errors' in result"

This remediation task will be executed only when the validation from the earlier task fails and will run only for those interfaces whose admin state is not up.

Data validation

It is often necessary to validate the data before passing it as input to a task, to ensure the input data structure is as per the expected data model. This allows us to validate data model adherence prior to pushing configuration to the network device. This use case is explained in the data validation blog from Ivan Pepelnjak.


That blog uses command-line tools to validate the input data; however, with the support of the ansible.utils.validate module, this functionality can now be added to the Ansible Playbook itself, as shown in the below snippet.

- name: validate bgp data with jsonschema bgp model criteria
  ansible.utils.validate:
    data: "{{ hostvars }}"
    criteria:
      - "{{ lookup('file', './bgp_data_model_criteria.json') |  from_json }}"
    engine: ansible.utils.jsonschema
  register: result

The criteria structure stored locally in the bgp_data_model_criteria.json file can be referred to here (modified example from the original blog post), and the sample host_vars file is shown below:

$cat host_vars/nxos.yaml
---
bgp_as: 0
description: Unexpected

The output of the above task run can be seen as below:

TASK [validate bgp data with jsonschema bgp model criteria] *******************************************************************************************
fatal: [nxos]: FAILED! => {"changed": false, "errors": [{"data_path": "nxos.bgp_as", "expected": 1, "found": 0, "json_path": "$.nxos.bgp_as", "message": "0 is less than the minimum of 1", "relative_schema": {"maximum": 65535, "minimum": 1, "type": "number"}, "schema_path": "patternProperties..*.properties.bgp_as.minimum", "validator": "minimum"}], "msg": "Validation errors were found.\nAt 'patternProperties..*.properties.bgp_as.minimum' 0 is less than the minimum of 1. "}