Ansible Contributor Summit, Durham 2023

Ansible contributor summit logo

The Ansible Contributor Summit is a full-day working session for community contributors to interact with one another and meet the Ansible development teams behind projects like AWX, Galaxy NG, Molecule, Ansible Lint, and Event-Driven Ansible. We will discuss important issues affecting the Ansible community and help shape the future of collaboration.

We are happy to have the opportunity to hold a second Contributor Summit this year, and this time it will be part of DjangoCon US 2023 in Durham, NC. Our previous experience co-locating the Contributor Summit with a related event was in February in Ghent, Belgium, as part of CfgMgmtCamp 2023. It was so successful that we wanted to do it again with another great match.

Hello, Durham!

We will be meeting in the "Bull City", the home of Ansible itself and the inspiration for our beloved mascot, Ansibull. In case you didn't know, the Ansible office overlooks the Durham Bulls Athletic Park, and this, mixed with Ansible wordplay, is why you might see mentions of bulls and Ansibulls in Ansible land.

If you can't attend the event in person in Durham, worry not! Ansible Contributor Summit is a hybrid event, so you will be able to join online. Details will follow later on how to do it, but we will have both streaming for the presentations and communication channels to participate in the hackathons or sprints. Check how to register for the online event below.

Make the Contributor Summit yours

What would you like to hear about at the Ansible Contributor Summit? Are there topics you'd like to discuss or parts of the project you'd like to hack on? Propose your ideas in the following topic in the new Ansible Community Forum. Please do it by Friday, October 6, so we can prepare the agenda for the event. 

Provide the following information:

  • Name of the speaker/facilitator
  • Topic
  • Details or abstract
  • Type: Presentation, Discussion, Hacking or Other
  • Estimated duration

There is no need to have a presentation or slides ready at submission time. If you have an idea, propose it! We would love to hear about it.

How to attend: Register now, it's free!

Ansible Contributor Summit will be part of DjangoCon US 2023

For this new edition we are joining forces with DjangoCon US, so you can expect some extra discussions and topics around Python and Django in the Ansible projects. If you are interested in contributing to AWX (upstream of automation controller, formerly Red Hat Ansible Tower) or Galaxy NG (upstream of Ansible automation hub), this might be a great opportunity. 

To join in-person:

  • Date: Thursday, October 19, 2023
  • Location: Durham Convention Center, NC 27701
  • Room: Meeting room 1-2 (sessions), Meeting room 3-4 (lunch)
  • Registration: Please register here and select the ticket for Ansible Contributor Summit + Sprint (Thursday, October 19). The Ansible Contributor Summit ticket is free.
  • Let the community know you are participating by replying to this Forum topic

To join online:

COVID-19 Policy

Please refer to DjangoCon's COVID-19 In-Person Policy. As we are part of DjangoCon's Sprints, we will adhere to the same policy. DjangoCon will provide masks and test kits on-site if you are unable to obtain them in advance.

Code of Conduct

For Ansible community events, we adhere to the Ansible Community Code of Conduct.




New Ansible Galaxy

For a while, the Red Hat Ansible team behind Ansible automation hub and Ansible cloud automation hub on console.redhat.com has been on a special-ops mission: enhancing the galaxy_ng code base that serves those components so it can also serve galaxy.ansible.com, with the intention of replacing galaxy.ansible.com with a fresh code base.

Galaxy, a legacy far far away…

The current Galaxy service has been running at galaxy.ansible.com for many years and is hugely successful in the community. It drives and nurtures Ansible adoption by sharing prebuilt Ansible content that solves many automation challenges.

One of the statistics we are most proud of is the contribution of 33,965 individual automation answers by the community, in either Ansible Content Collections or Ansible Roles. Some of the top-ranking automation content covers AWS, VMware, Linux, and Windows. Community users can download content for free, self-supported, and interact with authors via GitHub for further help or enhancements.

We are excited to announce that the galaxy.ansible.com code base is being updated with a host of exciting new features for the Ansible community. Brought to you by the Red Hat Ansible team behind Ansible automation hub and Ansible cloud automation hub on console.redhat.com, this new version enhances the galaxy_ng code base that also serves those components. As galaxy.ansible.com ages, the frameworks it sits on require consistent maintenance for security vulnerabilities, including frequent patching. The team is committed to keeping Galaxy secure and high-functioning, so we set out to power Galaxy with automation hub, otherwise known as the galaxy_ng code base.

New Generation

The galaxy_ng code base was started with the intent to eventually replace the original Galaxy, while also serving as the code base for the Ansible automation hub component found in Red Hat Ansible Automation Platform. Maintaining one code base for more than one presentation clearly has engineering and architecture benefits for Red Hat, and the community benefits too: more QE, more usage of the code base, and broader adoption by organizations. This is a win for everyone.

It's been a long road to get here. We have had to continue the progression of automation hub while delivering community-only features into the code base, like social authentication via GitHub, which is required in Galaxy but not in automation hub.

We are now at the point where we should make the switch and start the new journey on the galaxy_ng code base, together.

The Switch

The new Galaxy service has been running since the beginning of the year at beta-galaxy.ansible.com. Many community members and some institutions have used it and interacted with us about the service. The old galaxy.ansible.com service has been running as usual, and a sync between the two keeps beta-galaxy.ansible.com up to date.

We already have a URL called old-galaxy.ansible.com pointing to a read-only version of the galaxy.ansible.com service as it is today.

On September 30, we will switch the DNS records used by galaxy.ansible.com to the records that beta-galaxy.ansible.com uses.

In summary:

  • https://old-galaxy.ansible.com/ will point to the old Galaxy
  • https://galaxy.ansible.com/ will point to the new galaxy_ng
  • https://beta-galaxy.ansible.com/ will point to the new galaxy_ng

This means if the switch is not successful, we can easily switch back or ask the community to try the older service. But we have tested a lot, so everything should go to plan.

After the Switch

@rochacbruno has written up this awesome table to show what was in Galaxy vs the new beta Galaxy site. You will notice not everything has been reimplemented into galaxy_ng due to time constraints, but we are open to discussions on forum.ansible.com if something should come back, and more importantly, we wish for the community to help design new features to be introduced.

A good example of a feature that has not yet been reformulated is the current scoring system. We are retaining the old scores so nobody loses the kudos they have already earned; however, we feel that a new scoring system needs to be implemented, and we'd love to begin soliciting community contributions for how we can improve it.

| Feature | Galaxy | Beta Galaxy | Reason |
| --- | --- | --- | --- |
| Sign in/up via GitHub | ✔️ | ✔️ | |
| API Key | ✔️ | ✔️ | |
| E-mail/App Notifications | ✔️ | | Low usage |
| Following authors and content | ✔️ | | Low usage |
| Multiple e-mail addresses per account | ✔️ | | Low usage |
| Search with filters | ✔️ | ✔️ | Improved on beta |
| Download Count | ✔️ | ✔️ | |
| Popular Tags | ✔️ | TBD | |
| Platform mark | ✔️ | | Feature replaced by tags |
| Content Score | ✔️ | TBD | Read-only data will be kept; scoring system will be redesigned |
| Content list and search | ✔️ | ✔️ | Improved on beta |
| Content docs | ✔️ | ✔️ | Improved on beta |
| Upload new collections | ✔️ | ✔️ | Improved on beta |
| Import logs | ✔️ | ✔️ | Improved on beta |
| List my namespaces | ✔️ | ✔️ | Simplified on beta |
| Admin namespace access for contributors | ✔️ | ✔️ | Simplified on beta |
| Add multiple provider namespaces for collections | ✔️ | | Was not being used |
| UI to import Role from GitHub | ✔️ | | Recommendation is to publish collections instead; existing roles can still be maintained via CLI |
| Task Management | | ✔️ | Users can watch spawned tasks, useful for watching the status of CLI imports |

The Future

With the new service running on galaxy_ng, we have a lot of new capabilities we can look to introduce going forward. Here are some to think about.

  • Zero downtime for the Galaxy service with A-B deployments
  • Execution environment image hosting and serving
  • Execution environment image introspection
  • Improved Netflix-style dashboards for content
  • Image building from content
  • Notification integrations
  • Global searches

The single code base means we can be more agile, more responsive, and more receptive to the community around the Galaxy service. We hope you'll enjoy it.

Call to Action

Saturday, September 30, 2023 is the switchover date.

If you wish to participate in future enhancements by way of discussions or pull requests, you can find galaxy_ng here. We warmly invite all contributors to provide their valuable feedback on forum.ansible.com, utilizing the galaxy tag.

If you are having issues with the new Ansible Galaxy, please check out this forum topic for known issues, links for fixes, where to track updates, how to ask for help, and more.




Welcome to the new Ansible Community Forum

Today, we're delighted to announce the launch of the new Ansible Community Forum - a single starting point for questions and help, development discussions, events, and much more. Everyone is invited, whether you are an Ansible user, contributor or developer, we are all community! Register here to join us!

Hello Discourse!

Here is a screenshot of the forum's main page:

screenshot

For those who are familiar with forums, we hope you'll feel right at home. For those who may be new, please don't worry! We have a list of tips & tricks here, and you're always welcome to check the guides and post in the feedback section to help us shape our online community.

Forums are only successful if they are used. To make that happen, the Ansible Community Team is looking to make this the real home of the Ansible Community - a place for users to get help, to find an event or local meetup, and a jumping-off point for development and contribution discussions. That means we need you to come and participate! Tell us what you're up to, post your thoughts or your questions, sign up for an event or two. 

The Ansible Community is global, and we are proud to include a section in the forum aimed at supporting our international communities who would like to have a space where they can write in their own language. If you speak any languages other than English, you are welcome to join and collaborate in your own language. We think this section will be particularly useful for those who do local meetups, so if you are part of one, please make use of it to communicate.

This new community space is based on Discourse, which is fast becoming the industry standard for web forums - you need only look to Python, Kubernetes, Fedora, and other open source communities. It's powerful enough to meet our needs, flexible enough to deal with the wider variety of projects and teams we have, has solid accessibility and translations, works well on mobile devices, and (if you prefer) can be interacted with by email. 

Dive In!

We in the Ansible Community Team are very excited about this, and we hope you are too. So sign up, head to this post to introduce yourself and tell us your thoughts. See you there! 




Ansible Community Day, Berlin 2023

Ansible community day logo 

The Ansible Community Day is a new initiative by the Ansible Community Team at Red Hat to connect with the people using, contributing to, and developing the Ansible project worldwide. This new event complements the Ansible Contributor Summit, to put the users of Ansible in all their shapes and forms front and center.

At the last Ansible Community Day in Boston, held the day before AnsibleFest 2023, the community had the opportunity to meet in person, get to know each other, learn a few things, and share their knowledge of using Ansible. It was such a great experience that we couldn't wait to have another. And here it is!

Guten Tag Berlin!

After two very successful Ansible Community Day events this year, the first in Pune, India in February and the next in Boston in May, our third event for 2023 will be held in Berlin, Germany!

Registration for Ansible Community Day Berlin 2023 is now open! If you'd like to attend, please check out the following Eventbrite page for specific details and registration. 

The event is set for Wednesday, September 20, 2023. We will meet at c-base (Rungestraße 20, Berlin, Germany).

What can you expect from (or bring to!) the upcoming Ansible Community Day?

  • Ansible community recap of previous announcements, status and what's new!
  • User stories: Sharing of achievements and challenges using Ansible
  • Live (and recorded) demos: It's showtime! 
  • Ansible Content Collections: how to create, contribute or use them to automate everything!
  • Better together: Integrating Ansible with other tools or platforms (e.g. Kubernetes, Terraform, Dynatrace, Grafana, Jenkins, etc.)
  • Learn about the Ansible ecosystem (e.g. AWX, devtools, Event-Driven Ansible) and how you can contribute to these open source projects to jumpstart your career
  • And last but not least: community networking!

If any of those above caught your attention and you are thinking "I could talk about that", check the Call for Proposals below!

Call for Proposals (CFP)

We are looking for speakers!

If you have something to share with the Ansible community of users and contributors, you can submit your proposal before September 4 at 23:59 (UTC).

Presentations don't need to be ready upon submission; we are only requesting a title and abstract at this time. Any of the topics above are valid submissions, but that's not an exhaustive list. Feel free to submit any idea or proposal involving Ansible, and we will let you know if it works for the current event.

Have questions? Doubts? You can reach out to us in our Social Room (#social:ansible.com) in the Ansible Matrix space or email the Ansible Community Team at ansible-community@redhat.com.

Code of Conduct

For Ansible community events, we adhere to the Ansible Community Code of Conduct.




Ansible data manipulation with a Filter

This year at Summit, an attendee asked how to work with setting facts and changing data in Ansible. We often see people using task after task to manipulate data: turning items into lists, filtering out options, doing heavy data manipulation, and converting data from one source into another. Making these programmatic changes with a mixture of YAML and Jinja inside roles and playbooks is a headache of its own. While many of these options work, they aren't very efficient or easy to implement. Ansible Playbooks were never meant for programming.

One solution that is usually overlooked is to do the manipulation in Python inside of a module or a filter. This article will detail how to create a filter to manipulate data. In addition, a repository for all code referenced in this article has been created.

This example was first developed as a module. However, after review, it was determined that these data transformations are best done as filters. Filters can take multiple data inputs, perform the programmatic operations, and then be used inline as input to another task or to set a fact. In addition, filters run locally on the control node rather than on each host, so they can be faster and avoid unnecessary connections.

Starting Point

To begin, we need a dataset to work on. For this we used workflow data from the automation controller API; it gives nested data on the nodes in each workflow to loop over. The variable file used in this case can be found in the repo.

The goal is to find what is being used in automation controller by looping over the nested lists. While this is not a very practical example, it does give a starting point for creating a filter to manipulate any dataset.

Filter Basics

The bones of this filter were taken from ansible.netcommon.pop_ace. Every filter starts with some required pieces, such as the FilterModule class; in addition, AnsibleFilterError is useful for troubleshooting.

# AnsibleFilterError lets the filter raise useful errors back to the playbook
from ansible.errors import AnsibleFilterError

The FilterModule class marks the code as a filter plugin, and its filters() method maps each filter name to the function that implements it. Here it registers a filter called "example_filter" for use in playbooks, backed by the workflow_manip function. Note that the function and the filter name do not need to match.

class FilterModule(object):
    def filters(self):
        # Map the playbook-facing filter name to the implementing method
        return {"example_filter": self.workflow_manip}

Then there is the documentation section, which can contain inputs, examples, and other metadata. This is also how the ansible-doc output is populated.

EXAMPLES = r"""
- name: Transform Data
  ansible.builtin.set_fact:
    data_out: "{{ workflow_job_templates | example_filter }}"
"""

For the most part this is standard information. While the documentation field is not required for filters, it is best practice to include it. While not shown here, the linked example also includes commented-out expected output, which is great for going back and troubleshooting in the future.
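Although only EXAMPLES is shown here, a DOCUMENTATION string for this filter might look like the following sketch; the field values are illustrative rather than copied from the linked repo:

DOCUMENTATION = r"""
name: example_filter
short_description: Summarize workflow node usage
description:
  - Loops over nested workflow data from the automation controller API
    and groups node names by type.
options:
  data_in:
    description: Workflow data returned by the automation controller API.
    type: dict
    required: true
"""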

Setting things up

The first thing to do is define the filter's arguments for the data coming in; in our case the variable data_in, which is of type dict. It is also best to initialize the return variable as empty here, along with any other defaults that need to be defined.

def workflow_manip(self, data_in: dict):
    workflow_data = {}
    workflow_data["workflows"] = []
    workflow_data["job_templates"] = []
    workflow_data["inventory_sources"] = []
    workflow_data["approval_nodes"] = []

The next step is to do the actual data manipulation.

In the thick of it

This is where we get to what we actually want to do, take data from a source, loop through it, and extract the data needed. As the data is contained in nested lists, there is an inner and outer loop to go through.

for workflow in data_in:
    workflow_data["workflows"].append(workflow["name"])
    for node in workflow["related"]["workflow_nodes"]:
        if node["unified_job_template"]["type"] == "inventory_source":
            workflow_data["inventory_sources"].append(
                node["unified_job_template"]["name"]
            )
        elif node["unified_job_template"]["type"] == "job_template":
            workflow_data["job_templates"].append(
                node["unified_job_template"]["name"]
            )
        elif node["unified_job_template"]["type"] == "workflow_approval":
            workflow_data["approval_nodes"].append(
                node["unified_job_template"]["name"]
            )
        else:
            raise AnsibleFilterError(
                "Failed to find valid node: {0}".format(workflow)
            )
return workflow_data

The outer loop appends each workflow's name field to the workflows list. The inner loop goes through each workflow node, finds what type it is, and appends its name to the appropriate list.

At the end is the error message, which should not be hit with valid data; however, it is a useful bit of code to insert elsewhere when building or troubleshooting modules, as it forces output to the console so you can figure out what is going on. At the end of our manipulations, we return the result variable. The alternative would be three tasks, two of which would use loops, to achieve the same results. By using an actual programming language, its available libraries, and internalized loops, the filter simplifies the playbook and provides better logic than what could be cobbled together using YAML and Jinja2 alone.
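For contrast, here is a rough, untested sketch of how the same grouping might be approximated in pure YAML and Jinja2, assuming the same data structure; it works, but it is harder to read and extend than the filter:

- name: Collect workflow names
  ansible.builtin.set_fact:
    workflow_names: "{{ data_in | map(attribute='name') | list }}"

- name: Flatten all workflow nodes
  ansible.builtin.set_fact:
    all_nodes: "{{ data_in | map(attribute='related.workflow_nodes') | flatten }}"

- name: Group job template node names
  ansible.builtin.set_fact:
    job_templates: "{{ all_nodes
      | selectattr('unified_job_template.type', 'eq', 'job_template')
      | map(attribute='unified_job_template.name') | list }}"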

Summary

Hopefully this article provides a starting point for creating filters and simplifying tasks in playbooks. As with everything in Ansible, there is never just a single solution; there are ten options to choose from, and not every solution fits the situation at hand. Hopefully this provides another, better option to work with.




Welcome to the Ansible Lightspeed with IBM Watson Code Assistant Technical Preview

At Red Hat Summit and AnsibleFest 2023, we announced Ansible Lightspeed with IBM Watson Code Assistant, a new generative AI service for Ansible automation. Today, we are thrilled to announce the Ansible Lightspeed technical preview launch.

In this blog, we'll walk through the steps to access the Ansible Lightspeed with IBM Watson Code Assistant technical preview service and get it up and running in your Visual Studio Code environment. Then we'll share more about what you can expect from the experience and how to generate your first Ansible tasks with generative AI.

This is exciting stuff, so let's dive right in.

Technical Preview: Empowering Ansible Users with AI

Ansible Lightspeed with IBM Watson Code Assistant is a purpose-built generative AI tool that aims to streamline the creation of Ansible content. This capability is natively integrated into your VS Code editor via the Ansible VS Code extension. The AI capabilities are powered by Watson Code Assistant, a foundation model trained on Ansible Galaxy, GitHub, and other open sources of data.

The technical preview is open and available, free of charge, to all Ansible users. As more users engage with Ansible Lightspeed, the model recommendations will continuously improve, thanks to the valuable input and engagement from the community.

Getting Connected: Installation and Configuration

You'll need Visual Studio Code and Ansible installed on your workstation and a GitHub account to access the Ansible Lightspeed service. Let's get started!

  • Install the Ansible VS Code extension from the Visual Studio Code Marketplace by searching for "ansible" and selecting the extension published by Red Hat.
  • Enable the Ansible Lightspeed service within the extension by accessing the "Extension Settings" via the gear icon.
  • In the settings, enable both "Ansible Lightspeed enabled" and "Enable Ansible Lightspeed with Watson Code Assistant inline suggestions" checkboxes.

Note: You can enable Ansible Lightspeed in the "[User]" or "[Workspace]" settings, based on your preference. More information on VS Code User and Workspace settings can be found in their documentation.

Installing the Ansible Visual Studio Code extension

  • Click on the Ansible "A" in the VS Code activity bar on the left-hand side of your editor to open the extension.
  • Click "Connect" and follow the prompts to log into GitHub using your credentials.

Log in using your GitHub credentials

  • Read the Ansible Lightspeed technical preview terms and conditions and click "Agree".
  • Next, authorize Ansible Lightspeed for VS Code by clicking "Authorize".
  • Follow the browser prompts to redirect you back to VS Code, and, finally, click "Open" in the VS Code confirmation dialog box.

Authorize Ansible Lightspeed

Congratulations! You've successfully configured Ansible Lightspeed in VS Code.

You can confirm that Ansible Lightspeed is enabled by checking the VS Code status bar at the bottom of the editor window.

Please ensure a Python environment is selected and your Ansible YAML files are associated with the Ansible language. More information on VS Code languages can be found in their documentation.

Ansible Lightspeed status

A Quick Tour of Ansible Lightspeed: Generating Your First Task

Now that you are connected to Ansible Lightspeed, it's time to experience its AI-enhanced content creation experience.

Let's use an example Playbook to walk through asking Ansible Lightspeed for AI-generated task suggestions and also highlight some of what you can expect in the technical preview release. The example Playbook installs Cockpit on a Red Hat Enterprise Linux system.

Note: As more users engage with Ansible Lightspeed, the breadth, depth, and quality of the recommendations generated by the model will improve. Therefore, the Ansible task suggestions in the examples below may differ from your results.

How do I generate an Ansible Lightspeed suggestion?

Let's use our first Playbook task in the deploy_monitoring.yml example Playbook to demonstrate asking Ansible Lightspeed for an AI suggestion.

  • Move your cursor to the end of the  "- name: Include redhat.rhel_system_roles.cockpit" task description.
  • Press "ENTER" to generate a suggestion.
  • Press "TAB" to accept the suggestion.

Generating an Ansible task

In this suggestion, we asked Ansible Lightspeed to include the "cockpit" Role, part of the Red Hat Enterprise Linux System Roles Certified Content Collection. The suggestion used the Fully Qualified Collection Name (FQCN): ansible.builtin.include_role.

Using FQCNs is a recommended best practice and an example of the many unique post-processing capabilities we've baked into the Ansible Lightspeed service.
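For reference, an accepted suggestion of this shape might look like the sketch below; as noted earlier, the model's exact output may differ:

- name: Include redhat.rhel_system_roles.cockpit
  ansible.builtin.include_role:
    name: redhat.rhel_system_roles.cockpit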

Let's move on to the next task.

Ansible best practices. We've got you covered.

Ansible Lightspeed best practices example

This Playbook task copies cockpit.conf to the target host. Note that the recommendation included the "mode:" module argument and set the Linux file permissions to 0644.

Ansible Lightspeed provided a robust example of setting file permissions for the ansible.builtin.copy module, another recommended best practice.

We'll continue to expand on these natively integrated best practices as the service matures.

Finalizing the Playbook

Let's ask Ansible Lightspeed to generate suggestions for the remaining two Playbook tasks. The first task restarts the Cockpit service to apply our custom cockpit.conf configuration file and the second task permits Cockpit service traffic through the firewall.
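As a rough sketch of what such tasks usually look like when written by hand (the generated suggestions may differ), using the ansible.builtin.service and ansible.posix.firewalld modules:

- name: Restart cockpit service
  ansible.builtin.service:
    name: cockpit
    state: restarted

- name: Permit traffic for the cockpit service
  ansible.posix.firewalld:
    service: cockpit
    state: enabled
    permanent: true
    immediate: true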

Generate remaining Ansible tasks

Ansible Lightspeed with Watson Code Assistant and context

Generating contextually aware, accurate Ansible content suggestions saves you time and helps you create efficiently. One of Ansible Lightspeed's superpowers is context.

Ansible Lightspeed uses the Ansible task description and YAML file content to generate suggestions suited to what you're automating. Let's use an example to illustrate this.

Imagine we want to set module defaults for the ansible.posix.firewalld module in the last Ansible task. Specifically, always making the firewall rule changes permanent. We can accomplish this by using the module_defaults Playbook keyword, illustrated below.

module_defaults:
  ansible.posix.firewalld:
    permanent: true

Ansible Playbook module_defaults section

The module_defaults section tells Ansible to always add "permanent: true" to every "ansible.posix.firewalld" task in the Playbook. Let's ask Ansible Lightspeed for an updated suggestion with the module defaults.

Ansible Lightspeed context

Note that it used the full Playbook context and provided a revised recommendation that excludes "permanent: true". You can also apply this to other Playbook keywords, such as "vars".
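Putting the two pieces together, a sketch of the resulting pattern: module_defaults supplies permanent: true, so the suggested task can omit it:

module_defaults:
  ansible.posix.firewalld:
    permanent: true

tasks:
  - name: Permit traffic for the cockpit service
    ansible.posix.firewalld:
      service: cockpit
      state: enabled
      immediate: true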

Transparency and openness. Ansible Lightspeed Content Source Matching

Last, and certainly not least, is Ansible Lightspeed Content Source Matching.

Ansible Lightspeed Content Source Matching

We transparently share the potential source, author, and content license of the training data used for the recommendation. Building trust in the community and supporting the relationships between authors and contributors is part of Red Hat's DNA. These suggestions came from the Ansible community; we don't want to hide that.

Wrap-up

Congratulations! You have successfully configured Ansible Lightspeed in VS Code and experienced its generative AI capabilities with just a few simple steps.

We encourage you to share your feedback on the technical preview experience and stay updated on the project by joining the Ansible Lightspeed Matrix room to ask questions and get the latest news. Please also visit the Ansible Lightspeed landing page.

We'll update you with new resources to help you get the most out of your Ansible Lightspeed with Watson Code Assistant experience.

Happy automating...with AI!




What's New with Cloud Automation with amazon.aws 6.0.0

When it comes to Amazon Web Services (AWS) infrastructure automation, the latest release of the certified amazon.aws Collection for Red Hat Ansible Automation Platform brings a number of enhancements to improve the overall user experience and speed up the process from development to production.

This blog post goes through changes and highlights what's new in the 6.0.0 release of this Ansible Content Collection. We have included numerous bug fixes, features, and code quality improvements that further enhance the amazon.aws Collection. Let's go through some of them!

Forward-looking Changes

New boto3/botocore Versioning

The amazon.aws Collection has dropped support for botocore<1.25.0 and boto3<1.22.0. Most modules will continue to work with older versions of the AWS Software Development Kit (SDK); however, compatibility with older versions of the AWS SDK is not guaranteed and will not be tested. When using older versions of the AWS SDK, Ansible will display a warning. Check out the module documentation for the minimum required version for each module.

New Python Support Policy

On July 30, 2022, AWS announced that the AWS Command Line Interface (AWS CLI) v1 and the AWS SDK for Python (boto3 and botocore) would no longer support Python 3.6. To continue supporting Red Hat's customers with secure and maintainable tools, we are aligning with these deprecations: this collection has deprecated support for Python versions below 3.7, and that support will be removed in release 7.0.0. Additionally, support for Python versions below 3.8 is expected to be removed in a release after 2024-12-01, based on currently available schedules.

Removed Features

The following features have been removed from this collection release.

  • ec2_vpc_endpoint_info - Support for the query parameter has been removed. The amazon.aws.ec2_vpc_endpoint_info module now only queries for endpoints. Services can be queried using the amazon.aws.ec2_vpc_endpoint_service_info module.
  • s3_object - Support for creating and deleting S3 buckets using the amazon.aws.s3_object module has been removed. S3 buckets can be created and deleted using the amazon.aws.s3_bucket module, as sketched below.
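For example, a bucket previously created through amazon.aws.s3_object would now be created with amazon.aws.s3_bucket. A minimal sketch, with a hypothetical bucket name:

- name: Create an S3 bucket with the s3_bucket module
  amazon.aws.s3_bucket:
    name: my-example-bucket
    state: present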

Deprecated Features

This collection release also introduces some deprecations. For consistency between the collection and AWS documentation, the boto3_profile alias for the profile option has been deprecated. Please use profile instead.

The amazon.aws.s3_object and amazon.aws.s3_object_info modules have also undergone several deprecations.

  • Passing dualstack and endpoint_url at the same time has been deprecated; the dualstack parameter is ignored when endpoint_url is passed. Support will be removed in a release after 2024-12-01.
  • Support for passing values of overwrite other than always, never, different, or last has been deprecated. Boolean values should be replaced by the strings always or never (see the sketch after this list).
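For instance, a task that previously passed a boolean overwrite would be updated to the string form. A minimal sketch, with hypothetical names:

- name: Upload a file, always overwriting the existing object
  amazon.aws.s3_object:
    bucket: my-example-bucket
    object: /config/app.conf
    src: ./app.conf
    mode: put
    overwrite: always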

Code quality and CI improvement

Part of the effort in this release was dedicated to improving the quality of the collection's code. We have adopted several linting and formatting tools that help enforce coding conventions and best practices, so all code follows the same style and standards. The linting tools detect and flag code that may not be optimal, such as unused variables or functions and unnecessary loops or conditions, and they also catch security vulnerabilities and other inefficiencies. Formatting tools automatically format and style the code to ensure consistency and readability.

Overall, this code quality improvement initiative aims to lead to more reliable, efficient and maintainable software that provides a better user experience and ultimately benefits both developers and end users. In addition, several plugins have undergone refactoring (e.g., removing duplicate code, simplifying complex logic and using design patterns where appropriate) to make the code more efficient and maintainable. We have also extended the coverage of unit tests so the code behaves as expected.

This initiative does not stop here. We have also decided to move our CI from Zuul to GitHub Actions. This decision helps us simplify the CI pipeline, as it is natively integrated with GitHub, and it improves scalability, collaboration, workflow management, and the efficiency of the development process.

Because improving code quality is a continuous process that requires ongoing effort and attention, this work is ongoing and will be reflected in future releases.

Renamings

Naming is generally tedious, and a misleading module or option name can complicate the user experience.

We decided to rename the amazon.aws.aws_secret lookup plugin in this collection release. This decision follows up on the renaming initiative started in release 5.0.0 of this collection: the amazon.aws.aws_secret lookup has been renamed to amazon.aws.secretsmanager_secret.

We have also decided to rename the amazon.aws.aws_ssm lookup plugin to amazon.aws.ssm_parameter.

However, aws_secret and aws_ssm remain as aliases, and they will be deprecated in the future.

For consistency amongst our plugins and modules, we renamed the following options:

  • aws_profile renamed to profile (aws_profile remains as an alias)
  • aws_access_key renamed to access_key (aws_access_key remains as an alias)
  • aws_secret_key renamed to secret_key (aws_secret_key remains as an alias)
  • aws_security_token renamed to security_token (aws_security_token remains as an alias)

These changes should have no observable effect for users, outside of the module/plugin documentation.
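As a quick illustration of the lookup renames, both sketches below use the new names; the secret and parameter names are hypothetical:

- name: Read a secret with the renamed lookup
  ansible.builtin.debug:
    msg: "{{ lookup('amazon.aws.secretsmanager_secret', 'my-secret-name') }}"

- name: Read an SSM parameter with the renamed lookup
  ansible.builtin.debug:
    msg: "{{ lookup('amazon.aws.ssm_parameter', '/my/parameter') }}"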

New Modules

This release brings a number of new base supported modules that implement AWS Backup capabilities. 

AWS Backup is a fully managed backup service that enables you to centralize and automate the backup of data across AWS services and on-premises applications,  eliminating the need for custom scripts and manual processes. 

With AWS Backup, you can create backup policies that define backup schedules and retention periods for your AWS resources, including Amazon EBS volumes, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, and Amazon EC2 instances. 

The following list highlights the functionalities covered by these new Red Hat supported modules:

  • backup_restore_job_info - Get detailed information about AWS Backup restore jobs initiated to restore a saved resource.
  • backup_vault - Manage AWS Backup vaults.
  • backup_vault_info - Get detailed information about an AWS Backup vault.
  • backup_plan - Manage AWS Backup plans.
  • backup_plan_info - Get detailed information about an AWS Backup Plan.
  • backup_selection - Manage AWS Backup selections.
  • backup_selection_info - Get detailed information about AWS Backup selections.
  • backup_tag - Manage tags on an AWS Backup plan, backup vault, or recovery point.
  • backup_tag_info - List tags on AWS Backup resources.

Automate backups of your AWS resources with the new AWS Backup supported modules

In this example, I show you how to back up an RDS instance tagged with backup: "daily". This example can be extended to all currently supported resource types (e.g., EC2, EFS, EBS, DynamoDB) that are tagged with backup: "daily". The following playbook shows the steps necessary to achieve that:

- name: Automated backups of your AWS resources with AWS Backup
  hosts: localhost
  gather_facts: false

  tasks:
    - name: 'Create a mariadb instance tagged with backup: daily'
      amazon.aws.rds_instance:
        id: "demo-backup-rdsinstance"
        state: present
        engine: mariadb
        username: 'test'
        password: 'test12345678'
        db_instance_class: 'db.t3.micro'
        allocated_storage: 20
        deletion_protection: true
        tags:
          backup: "daily"
      register: result

    - name: Create an IAM Role that is needed for AWS Backup
      community.aws.iam_role:
        name: "backup-role"
        assume_role_policy_document: '{{ lookup("file", "backup-policy.json") }}'
        create_instance_profile: false
        description: "Ansible AWS Backup Role"
        managed_policy:
          - "arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup"
      register: iam_role

    - name: Create an AWS Backup vault for the plan to target
      # The AWS Backup vault is the data store for the backed up data.
      amazon.aws.backup_vault:
        backup_vault_name: "demo-backup-vault"

    - name: Get detailed information about the AWS Backup vault
      amazon.aws.backup_vault_info:
        backup_vault_names:
          - "demo-backup-vault"
      register: _info

    - name: Tag the AWS Backup vault
      amazon.aws.backup_tag:
        resource: "{{ _info.backup_vaults[0].backup_vault_arn }}"
        tags:
          environment: test

    - name: Create an AWS Backup plan
      # A backup plan tells the AWS Backup service to back up resources each day
      # at 5 o'clock in the morning. In the backup rules we specify the AWS Backup
      # vault to target for storing recovery points.
      amazon.aws.backup_plan:
        backup_plan_name: "demo-backup-plan"
        rules:
          - rule_name: daily
            target_backup_vault_name: "demo-backup-vault"
            schedule_expression: "cron(0 5 ? * * *)"
            start_window_minutes: 60
            completion_window_minutes: 1440
      register: backup_plan_create_result

    - name: Get detailed information about the AWS Backup plan
      amazon.aws.backup_plan_info:
        backup_plan_names:
          - "demo-backup-plan"
      register: backup_plan_info_result

    - name: Create an AWS Backup selection
      # AWS Backup selection supports tag-based resource selection. Resources that
      # should be backed up by the AWS Backup plan need to be tagged with
      # backup: daily; they are then automatically backed up by AWS Backup.
      amazon.aws.backup_selection:
        selection_name: "demo-backup-selection"
        backup_plan_name: "demo-backup-plan"
        iam_role_arn: "{{ iam_role.iam_role.arn }}"
        list_of_tags:
          - condition_type: "STRINGEQUALS"
            condition_key: "backup"
            condition_value: "daily"
      register: backup_selection_create_result

    - name: Get detailed information about the AWS Backup selection
      amazon.aws.backup_selection_info:
        backup_plan_name: "demo-backup-plan"
Once this playbook has finished executing, AWS Backup will start to create daily backups of the resources tagged with backup=daily. You can monitor the status of the backup service demo in the AWS console. If we go to Jobs, we can see backup jobs that have already been completed. A backup job is the result of an AWS Backup plan rule and resource selection; it will attempt to back up the selected resources within the time window defined in the backup plan rule.

screenshot

If we take a look at the AWS Backup vault we created, you can see it contains the recovery points of the RDS instance. A recovery point is either a snapshot or a point-in-time recovery backup. The data inside a recovery point cannot be edited; tags and the retention period can be changed if the backup vault allows it. You can use the recovery point to restore data.

screenshot

An AWS Backup restore job is used to restore data from backups taken with the AWS Backup service. This release does not include the module that creates an AWS Backup restore job, but we are planning to include that feature in the future. This release does, however, include the amazon.aws.backup_restore_job_info module to get information about restore jobs.

- name: Get detailed information about the AWS Backup restore job
  amazon.aws.backup_restore_job_info:
    restore_job_id: "{{ restore_job_id }}"



Event-Driven Ansible is Here

As you may recall, we introduced Event-Driven Ansible in developer preview last fall at AnsibleFest. Since that time, much work has been done across the community, the Red Hat development teams, customers, and last but not least, Red Hat partners. Today, we are pleased to announce that Event-Driven Ansible will be concluding its developer preview and will become generally available as part of Red Hat Ansible Automation Platform 2.4.

If you are new to Event-Driven Ansible, check out the developer preview blog I wrote last fall to learn the basics, and you may also be interested in this video on Ansible Rulebooks, as well as others in this playlist.

Transform your work with Event-Driven Ansible

For many IT teams, there is too much work to do and not enough time to get it all done. Event-Driven Ansible can help your team work smarter, not harder. How often are you doing routine tasks that get in the way of key priorities? How often are you needing to "drop everything" to respond to a ticket enrichment request or handle a user administration issue? Have you had to wake up at night to remediate an issue? How often are you adjusting applications and underlying technologies to support fluctuating workloads?

You will be happy to know there is a better way, and it is event-driven automation. Many pieces of recurring operational logic and processes can be automated by capturing them in Ansible Rulebooks, including issue remediation, fact gathering for service tickets, user administration tasks, and many more. But what are Ansible Rulebooks? Based on YAML, they are the basis of Event-Driven Ansible and contain conditional "if-this-then-that" logic.
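To make that concrete, here is a minimal rulebook sketch. The webhook source, payload fields, and playbook name are illustrative assumptions, not taken from any shipped example:

- name: Remediate when a service reports down
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Run remediation playbook
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: remediate.yml

When an event arrives on the webhook whose payload matches the condition, the rulebook runs the named playbook; everything else is ignored.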

Event-Driven Ansible can also be used for scalability logic, using rulebooks to codify scalability actions for rapid and seamless response, such as adding capacity or adjusting buffer pool size when an application or workload calls for it, or scaling out hybrid cloud solutions when certain conditions are met.

Event-driven patterns of automation make it faster to act on recurring events and also provide a simple way of distributing operational or scalability knowledge as an easy to read and verifiable structure. Event-Driven Ansible is accessible enough to be used by IT domain experts to solve a range of needs across use cases including infrastructure, networking, security, cloud and others.

When your organization adopts event-driven automation techniques, your entire team can execute in a consistent and accurate way. You gain new levels of efficiency and can better focus on the innovations that give your business an edge.

New features and enhancements

What can you expect from Event-Driven Ansible as part of this release? Several new components and features have been added. These include:

  • Event-Driven Ansible controller, which enables orchestration of multiple rulebooks and provides a single interface to manage and audit all responses across all event sources. These event sources are often third party monitoring and observability tools, but can be any source that provides intelligence about your IT environment.

  • Integration with automation controller in Ansible Automation Platform, which allows you to call existing workflows that you've already built using the run_job_template action, thus extending existing, trusted automation into event-driven automation scenarios. This is an optional way to specify actions from within rulebooks. You can also call an existing Ansible Playbook within your rulebooks, if you prefer (see the sketch after this list).

  • Event throttling, which allows you to handle "event storms" using either a reactive approach with the once_within condition or a passive approach with the once_after condition.
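As an illustration of the controller integration mentioned above, a rule can hand off to an existing automation controller job template via the run_job_template action. The template and organization names below are hypothetical:

rules:
  - name: Enrich the ticket for a critical alert
    condition: event.alert.severity == "critical"
    action:
      run_job_template:
        name: "Ticket enrichment"
        organization: "Default"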

Event-Driven Ansible ecosystem integrations

An ecosystem of Ansible Content Collections is important for Event-Driven Ansible because it works on the intelligence of changing IT conditions that come from event sources such as third party monitoring and observability tools. Ansible Content Collections are a variety of assets that help you jumpstart new automation projects. In the Event-Driven Ansible case, these assets typically are source plug-ins and rulebooks, but may also include other types of useful content. Red Hat Ansible Certified Content Collections are supported by Red Hat and/or partners and typically focus on the "how-to" of some type of automation. Ansible validated content focuses more on "what-to-do" scenarios, including best practices.

There has been extensive work done across the Event-Driven Ansible ecosystem in terms of new content, by both the community and third party Red Hat partners. The following is an overview of the work that has been done and what is to come:

Certified and validated content

The initial list of partners who are or will be certifying or validating content includes Cisco ThousandEyes, CrowdStrike, CyberArk, Dynatrace, F5, IBM, Palo Alto Networks, and Zabbix, with more to come. Red Hat has also developed key integrations, including Apache Kafka, webhooks, Red Hat Insights, Red Hat OpenShift, Cisco NX-OS and Model-Driven Telemetry, AWS, and more. Refer to the image below. More integrations are coming soon, including ServiceNow, Microsoft Azure, Google Cloud Platform, and others.

Certified Content for Event-Driven Ansible generally consists of certified event source plugins, written in Python, which connect an event source to Ansible Rulebooks. Validated Content for Event-Driven Ansible generally consists of Ansible Rulebooks which have been validated and contain best practices for common use cases.

Community- and custom-developed content

Community and custom content is available either upstream or through private customer sources. Community-developed integrations have included gcp pubsub and syslogd, among others.

Whether you have homegrown monitoring tools or need a specific solution immediately, you can build your own plug-ins for Event-Driven Ansible. Once you build your plug-in, consider whether this can be contributed to the Ansible community.

Getting Involved with Event-Driven Ansible

Ready to start exploring Event-Driven Ansible? There are a number of ways to do this. Visit Red Hat's Event-Driven Ansible page where you will find a series of free, self-paced interactive labs, information and analyst research.

You can also join a getting started with Event-Driven Ansible webinar on June 20, 2023.

Additional resources




YOU are the community

New year, new role, new strategy...2023 is officially the year when I return to my roots. Back in 2014, I officially became part of the Ansible community. Admittedly, back then my focus was solely on figuring out how to best demonstrate to my customers the power of having an OpenStack private cloud. Anyone who has ever stood up or experimented with OpenStack knows that this is a tall order. Imagine having to stand up that platform over and over again on a daily basis. My focus was to find a way---a tool---that could help me do that, so I could focus on helping solve the customers' true challenges. Fast forward to now, and the decision to do it with Ansible still stands as the best choice, hands down.

Many of you have stories just like mine. You are seeking out a way to simplify your daily tasks, so you can focus on the business. Just like me, you have decided that Ansible is the tool to do it. Before I started in this new role, I did some reflecting on my experience as part of the community. I have so many encouraging, positive, and fun stories I could share. Our community is truly amazing. The level of collaboration and desire to help are out of this world. Even with all those positive vibes, at times I would question whether or not I was giving back on the appropriate level. 

Looking back now, I understand better the dynamics of a community and understand that those of us who are consumers are just as valuable as the contributors. This realization sparked some further ideas about inclusiveness---the feeling of truly belonging. I asked myself the questions "Who makes up our community?" and "Do they know they are a pivotal part of the community?"

Hence the title of our blog today, YOU are the community. Each and every one of us who love and use Ansible are an integral part of the community, and I'd like to share a vision for our shared future. 

That vision consists of:

  • A strong and focused mission

Rally the community around desired work streams by creating a centralized space to attract more community involvement and accelerate their impact.

  • Cultivation of inclusiveness

Harvest the wide knowledge of the community to further create an environment of independence and assist in crafting ways of working that resolve disagreements and dissolve roadblocks quickly.

  • Inspiration, creativity, and collaboration

Encourage sustained focus and shared problem-solving, remove boundaries, and support all ideas (no idea is too impossible).

  • An understanding that failure is not fatal

Try new approaches, create new capabilities, and identify inflection points quickly---then pivot when needed.

  • More frequent recognition of success

Call out, reward, and celebrate the wins of the community members often.

We formed the strategy for the community from these tenets...big shout out to Greg Sutcliffe and Carol Chen for leading that charge. This strategy will help all of us stay focused to deliver the capabilities needed to uphold the vision. Over the next months, expect to see new tooling introduced to the community to foster better collaboration, increase overall upstream participation, and put the well-deserved spotlight on those of you who are doing great work on a daily basis. 

While these capabilities are being put in place, it feels like a good time to remind each of you that your voice matters. We absolutely need and want you to have a seat at the table. Some of you may be asking yourself "How can I help?" or may even be doubting if you are actually part of the community. So I want to debunk any doubts early, right here...right now. YOU are the community...our community consists of many different personas, such as contributors, users (which very much includes customers), partners and vendors. Each persona brings different value.

Contributors

You know who you are. These are our keyboard warriors who are actively contributing to any of the many Ansible-focused projects. Whether it is a simple PR to suggest better execution output or something very complex like fixing a Core runtime error, you are all contributors. Learn more about becoming a contributor.

Users

I was, and still very much am, in this category. There is no shame in that at all! For those of us who are using Ansible on a daily basis or rely heavily on it to solve some sort of business need, we are Ansible users. There is so much value in what users can offer to a contributor and vice versa. Red Hat Ansible Automation Platform users are also very much part of the active user community. Please know this, and I challenge you to connect with the contributors in the community. I think you will find a lot of benefit in that as you both are working through complex automation use cases. Learn how you can help.

Partners and vendors

These are our long-standing and foundational members of the community. They have brought multiple levels of much-needed integration into the Ansible project. They also assist with customers to help drive automation adoption---and truly innovate. To our partners and vendors: your participation in the community brings a huge impact to the current community, and you open the pathway for those joining us in the future. 

No matter which category you count yourself in, know that YOU are part of the Ansible community. Thank you for helping make Ansible what it is. I can't wait to see what the rest of 2023 has in store for all of us.

Also, be sure to subscribe to the Bullhorn for live updates and our progress on developing a new community web presence. No matter what, know that we are all in this together. Let's "Automate as One." 




Learn about Edge Automation at Red Hat Summit and AnsibleFest 2023

As you may have heard, AnsibleFest will be taking place at Red Hat Summit in Boston May 23-25. This change will allow you to harness everything that Red Hat technology has to offer in a single place and will give you even more tools to address your automation needs. Join Ansible and automation-focused audiences to hear from Red Hat and Ansible leaders, customers, and partners while getting the latest on the Red Hat Ansible Automation Platform product roadmap, community projects, and what's coming in IT automation.

Across every industry, automation at the edge is enabling emerging use cases by helping organizations drive the next wave of innovation as they explore and execute digital transformation initiatives. Organizations are looking to extend a consistent automation experience across cloud, datacenter, and edge with the ability to scale in heterogeneous environments. Red Hat Ansible provides a common platform where organizations can build, run, and manage the entirety of their highly distributed systems, even to remote locations where network connectivity may be intermittent.

Because we understand how important edge automation is to teams looking to automate their entire IT landscape with a single platform, we have lined up some great sessions at AnsibleFest and Red Hat Summit:

Do you have questions about edge automation with Ansible? Bring them to AnsibleFest and Red Hat Summit, and take advantage of the experts at the Ansible booth, the Edge booth, and the Ask the Expert area.

We will also be running different labs throughout the event, so this is the perfect opportunity to get hands-on experience while being able to ask questions in real time to Ansible experts. These labs include:

Please refer to the session catalog for the most up-to-date room assignments; sessions are subject to change.

Looking for more edge automation content? We have you covered. Check out these resources to learn more about how network automation with Ansible can help you: