Getting Started with Event-Driven Ansible

As one technology advances, it expands the possibilities for other technologies and offers the solutions of tomorrow for the challenges we face today. AnsibleFest 2022 brings us new advances in Ansible automation that are as bright as they are innovative. I am talking about the Event-Driven Ansible developer preview.

Automation allows us to give our systems and technology speed and agility while minimizing human error. However, when it comes to trouble tickets and issues, we are often left to traditional and manual methods of troubleshooting and information gathering. We inherently slow things down and interrupt our businesses. We have to gather information, try our common troubleshooting steps, confirm with different teams, and eventually, we need to sleep.

The following image illustrates a support lifecycle with many manual steps and hand-offs:

support lifecycle diagram

One application of Event-Driven Ansible is to remediate technology issues in near real-time, or at least to trigger troubleshooting and information collection in an attempt to find the root cause of an outage while your support teams handle other issues.

The following image illustrates how event-driven automation is used in the support lifecycle: fewer steps, faster Mean-Time-To-Resolution.

Event-Driven Ansible in the support lifecycle

Event-Driven Ansible has the potential to change the way we respond to issues and illuminates many new automation possibilities. So, how do you take the next step with Event-Driven Ansible?

Let’s get started!

Event-Driven Ansible is currently in developer preview; however, nothing stops us from installing ansible-rulebook, the CLI component of Event-Driven Ansible, and building our first rulebook. Event-Driven Ansible contains a decision framework built on Drools. We need a rulebook to tell the system which events to flag and how to respond to them. Rulebooks are written in YAML and resemble traditional Ansible Playbooks, which makes them easier to understand and build. One key difference between playbooks and rulebooks is the if-this-then-that logic a rulebook needs to make an event-driven automation approach work.

A rulebook is composed of three main components:

  • Sources define the event source we will use. These come from source plugins built to accommodate common use cases, and more sources will become available over time. Source plugins already available include: webhooks, Kafka, Azure Service Bus, file changes, and Alertmanager.

  • Rules define conditionals we will try to match from the event source. Should the condition be met, then we can trigger an action.

  • Actions trigger what you need to happen should a condition be met. Some of the current actions are: run_playbook, run_module, set_fact, post_event, and debug.

Now, let's install ansible-rulebook and start with our very first event.

To install ansible-rulebook, we can install the ansible.eda Galaxy Collection, which includes a playbook to install everything we need.

ansible-galaxy collection install ansible.eda

Once the Collection is installed, you can run the install-rulebook-cli.yml playbook. This will install everything you need to get started with ansible-rulebook on the command line. This is currently supported for Mac and Fedora.

Note: Alternatively, you can skip the method above and install ansible-rulebook with pip, followed by installing the ansible.eda collection. Java 11+ is required for this method, and we suggest using openjdk.

pip install ansible-rulebook

ansible-galaxy collection install ansible.eda

If you want to contribute to ansible-rulebook, you can also fork the project's GitHub repository, which contains instructions for setting up your development environment and building a test container.

Let's build an example rulebook that will trigger an action from a webhook. We will be looking for a specific payload from the webhook, and if that condition is met from the webhook event, then ansible-rulebook will trigger the desired action. Below is our example rulebook:

---
- name: Listen for events on a webhook
  hosts: all

  ## Define our source for events

  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000

  ## Define the conditions we are looking for

  rules:
    - name: Say Hello
      condition: event.payload.message == "Ansible is super cool"

      ## Define the action we should take should the condition be met

      action:
        run_playbook:
          name: say-what.yml

If we look at this example, we can see the structure of the rulebook: our sources, rules, and actions are defined. We are using the webhook source plugin from the ansible.eda collection, and we are looking for a webhook payload whose message is "Ansible is super cool". Once this condition is met, our defined action triggers; in this case, it runs a playbook.
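The say-what.yml playbook itself can be minimal. Here is a sketch consistent with the run output shown later in this post (the play name, target host, and debug message come from that output; gather_facts is an added assumption):

```yaml
---
- name: say thanks
  hosts: localhost
  gather_facts: false   # assumption: facts are not needed for a single debug task
  tasks:
    - debug:
        msg: "Thank you, my friend!"
```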

One important difference between ansible-rulebook and ansible-playbook is that ansible-playbook runs a playbook and exits once the playbook completes. ansible-rulebook, by contrast, keeps running, waiting for events and matching them against its rules. It only exits upon a shutdown action or if there is an issue with the event source itself, for example if a website you are watching with the url_check plugin stops responding.
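As an illustration of the shutdown action just mentioned, a rule like the following would stop ansible-rulebook deliberately when a matching event arrives. This is a sketch; the rule name and trigger message are hypothetical:

```yaml
rules:
  - name: Stop listening on request
    condition: event.payload.message == "shutdown"   # hypothetical trigger payload
    action:
      shutdown:
```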

With our rulebook built, we will simply tell ansible-rulebook to use it as a ruleset and wait for events:

root@ansible-rulebook:/root# ansible-rulebook --rules webhook-example.yml -i inventory.yml --verbose

INFO:ansible_events:Starting sources
INFO:ansible_events:Starting sources
INFO:ansible_events:Starting rules
INFO:root:run_ruleset
INFO:root:{'all': [{'m': {'payload.message': 'Ansible is super cool'}}], 'run': <function make_fn.<locals>.fn at 0x7ff962418040>}
INFO:root:Waiting for event
INFO:root:load source
INFO:root:load source filters
INFO:root:Calling main in ansible.eda.webhook

Now, ansible-rulebook is ready and it's waiting for an event to match. If a webhook is triggered but the payload does not match our condition in our rule, we can see it in the ansible-rulebook verbose output:

…
INFO:root:Calling main in ansible.eda.webhook
INFO:aiohttp.access:127.0.0.1 [14/Oct/2022:09:49:32 +0000] "POST /endpoint HTTP/1.1" 200 158 "-" "curl/7.61.1"
INFO:root:Waiting for event

But once our payload matches what we are looking for, that’s when the magic happens, so we will simulate a webhook with the correct payload:

curl -H 'Content-Type: application/json' -d "{\"message\": \"Ansible is super cool\"}" 127.0.0.1:5000/endpoint

INFO:root:Calling main in ansible.eda.webhook
INFO:aiohttp.access:127.0.0.1 [14/Oct/2022:09:50:28 +0000] "POST /endpoint HTTP/1.1" 200 158 "-" "curl/7.61.1"
INFO:root:calling Say Hello
INFO:root:call_action run_playbook
INFO:root:substitute_variables [{'name': 'say-what.yml'}] [{'event': {'payload': {'message': 'Ansible is super cool'}, 'meta': {'endpoint': 'endpoint', 'headers': {'Host': '127.0.0.1:5000', 'User-Agent': 'curl/7.61.1', 'Accept': '*/*', 'Content-Type': 'application/json', 'Content-Length': '36'}}}, 'fact': {'payload': {'message': 'Ansible is super cool'}, 'meta': {'endpoint': 'endpoint', 'headers': {'Host': '127.0.0.1:5000', 'User-Agent': 'curl/7.61.1', 'Accept': '*/*', 'Content-Type': 'application/json', 'Content-Length': '36'}}}}]
INFO:root:action args: {'name': 'say-what.yml'}
INFO:root:running Ansible playbook: say-what.yml
INFO:root:Calling Ansible runner

PLAY [say thanks] **************************************************************

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Thank you, my friend!"
}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

INFO:root:Waiting for event

We can see from the output above that the condition was met from the webhook, and ansible-rulebook then triggered our action, which was run_playbook. The playbook we defined runs, and once it completes we revert back to "Waiting for event".

Event-Driven Ansible opens up the possibilities of faster resolution and greater automated observation of our environments. It has the possibility of simplifying the lives of many technical and sleep-deprived engineers. The current ansible-rulebook is easy to learn and work with, and the graphical user interface EDA-Server will simplify this further.

What can you do next?

Whether you are beginning your automation journey or a seasoned veteran, there are a variety of resources available to enhance your automation knowledge.




Introducing the Event-Driven Ansible developer preview

Today at AnsibleFest 2022, Red Hat announced an exciting new developer preview for Event-Driven Ansible. Most customers are on a journey toward full end-to-end automation, and there are many paths you can take along this journey. Event-Driven Ansible is a new way to enhance and expand automation. It improves IT speed and agility, while enabling consistency and resilience.

By fully automating necessary but routine tasks, you and your team will have more time to focus on interesting engineering challenges and new innovations. For example, what if you no longer needed to pause critical work to manually add technical detail to  a service ticket?  Or address a user password reset request? Or reset a router as a first troubleshooting step? With Event-Driven Ansible, the friction in your day can be dramatically reduced, leaving more time to work on important projects, with some added work-life balance.  

Why a developer preview?

The Event-Driven Ansible technology was developed by Red Hat and is available on GitHub as a developer preview. Community input is essential. Since we are building a solution to best meet your needs, we're providing an opportunity for you to advocate for those needs. We ask that both technology providers and end users give it a try and tell us what you think. There are several ways you can give feedback: via comments in GitHub, during our office hours on November 16, 2022, or via the event-driven-automation@redhat.com email.

Event-driven automation is part of an ecosystem

Any event-driven solution must be able to work within multi-vendor environments. So, we ask technology partners to not only try the Event-Driven Ansible developer preview, but also begin building Ansible Content Collections so that our solutions complement each other and make it faster and easier for joint customers to use them.

Designed for simplicity

Event-Driven Ansible is designed for simplicity and flexibility, much like we offer today in Red Hat Ansible Automation Platform.  What do we mean by this for Event-Driven Ansible? 

Until now, most event-driven and "self-healing" automation projects have been complex and time-consuming to deliver, because much of the solution is custom developed to meet a singular need. For example: automatically shut down network firewalls when certain activity patterns occur, then notify the responsible teams. This is a great and essential solution, but only for this one need.

Event-Driven Ansible is designed to be more flexible with faster and more cost-effective ways to stand up new automation projects across any use case.  By writing an Ansible Rulebook (similar to Ansible Playbooks, but more oriented to "if-then"  scenarios) and allowing Event-Driven Ansible to subscribe to an event listening source, your teams can more quickly and easily automate a variety of tasks across the organization.

Think of it like a crescent wrench: a single tool that is easy to adjust to different size bolts.  Same idea here - a single automation tool that addresses a broad variety of IT automation needs. 

What is Event-Driven Ansible?

Event-Driven Ansible is a highly scalable, flexible automation capability that works with event sources such as other software vendors'  monitoring tools. In an automatic remediation use case, these vendor tools watch your IT solutions and identify "events," such as an outage. Event-Driven Ansible documents your team's technical information on how you want to act on the identified event (an outage in our example) as rules in Ansible Rulebooks. When this event (outage) occurs, Event-Driven Ansible  matches the rule to the "event" (the outage), and automatically implements the documented changes or response in the rulebook to handle the event. In our outage example, this may be an action such as resetting or rebooting the non-responding asset.

EDA diagram

There are three major building blocks in the Event-Driven Ansible model: sources, rules, and actions. Each plays a key role in completing the workflow described above:

  • Sources are third party vendor tools that provide the events. They define and identify where events occur, then pass them to Event-Driven Ansible.  Current source support includes Prometheus, Sensu, Red Hat solutions, webhooks and Kafka, as well as custom "bring your own" sources.
  • Rules document your desired handling of the event via Ansible Rulebooks. They use familiar YAML-like structures and follow an "if this then that" model.  Ansible Rulebooks may call Ansible Playbooks or have direct module execution functions.
  • Actions are the result of executing the Ansible Rulebook's instructions when the event occurs.
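These three building blocks come together in a single Ansible Rulebook. The sketch below uses the webhook source plugin from the ansible.eda collection; the condition, payload field, and playbook name are hypothetical:

```yaml
---
- name: Respond to monitoring events
  hosts: all
  sources:                                        # where events come from
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:                                          # "if this..."
    - name: Remediate reported outage
      condition: event.payload.status == "down"   # hypothetical payload field
      action:                                     # "...then that"
        run_playbook:
          name: remediate.yml                     # hypothetical playbook
```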

More about integrations

Event-Driven Ansible allows you to subscribe to sources, listen for events, and then act on those events. A number of source plugins have already been created and can be used.

We are enabling events from partner technologies by providing event source plugins for webhooks and for Kafka. Many partner tools can utilize Kafka and webhooks for integration into the Event-Driven Ansible ecosystem. Once Event-Driven Ansible receives events from these sources, it can allocate rules against them from the instructions you have specified in Ansible Rulebooks. Technology providers can also develop event source plugins, which more directly integrate their tools with Event-Driven Ansible and distribute them via Content Collections.

Open source plugins are also supported. These plugins enable Event-Driven Ansible to process a number of different events. They include: 

  • Kafka for event streams
  • webhooks
  • watchdog, a file system watcher
  • url_check, to check the status of a URL
  • range, an event generation plugin
  • file, which loads facts from YAML files
  • On the roadmap: integrations to support event processing from cloud service providers
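For example, subscribing to a Kafka topic is a matter of pointing the ansible.eda.kafka source plugin at a broker. A minimal sketch (the broker address and topic name are hypothetical):

```yaml
sources:
  - ansible.eda.kafka:
      host: broker.example.com   # hypothetical Kafka broker
      port: 9092
      topic: monitoring-alerts   # hypothetical topic name
```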

In addition to all these integrations that enable events to prompt action, it is important to note that Red Hat Ansible Automation Platform does not require an agent to be present on the target solution receiving an automated action. This is convenient and ideal for technologies that cannot host an agent, such as an edge device or network router, and it makes Event-Driven Ansible a simpler solution to deploy.

Starting small, thinking big: recommended use cases

Red Hat often recommends a "start small, think big" approach to growing your automation maturity, and Event-Driven Ansible is no exception. We think IT service management is a great place to start, and we suggest you look for simple tasks that are repeated very often to see the most benefit.

You can use Event-Driven Ansible Rulebooks to enhance service tickets, do basic remediation of tickets and issues, and manage the variety of end-user requests you receive every day, like password resets.

Additionally, you can automate use cases across all of the common areas where you automate today -- infrastructure, network, cloud, security, and edge -- for service management and other tasks.  Once you get the basics down, growing the number, scope and sophistication of your Ansible Rulebooks is easy.

Getting started and sharing feedback

Start by reviewing this web page where you will find more details on Event-Driven Ansible and can access additional resources such as a self-paced lab, how-to-video and more details about this solution. You will also find a registration link to our first Office Hours event, where you can ask questions and learn tips and techniques.  

Once you have some familiarity, use the developer preview code found here. In summary, your basic steps will be to download and install Event-Driven Ansible from the GitHub repository, configure sources of events so Event-Driven Ansible is subscribed, write your Ansible Rulebook(s) and start listening to events.

As a community project, we ask for your feedback through GitHub comments, during our Office Hours, or via email at event-driven-automation@redhat.com.

Looking ahead for Event-Driven Ansible

While this technology is a community project, we have bigger ideas to shape this capability to meet your needs.  In addition, we hope to integrate Event-Driven Ansible as a component in Red Hat Ansible Automation Platform in the future. With Red Hat Ansible Automation Platform, you could gain access to all the platform has to offer, including RBAC and other controls, and the ability to use a single automation platform even more flexibly for both manually-initiated automation via Ansible Playbooks and also for your fully automated actions via Ansible Rulebooks.   

I hope this has provided a good overview of Event-Driven Ansible.










Using Ansible and Packer, From Provisioning to Orchestration

Red Hat Ansible Automation Platform can help you orchestrate, operationalize and govern your hybrid cloud deployments.  In my last public cloud blog, I talked about "Two Simple Ways Automation Can Save You Money on Your AWS Bill" and similarly to Ashton's blog "Bringing Order to the Cloud: Day 2 Operations in AWS with Ansible", we both wanted to look outside the common public cloud use-case of provisioning and deprovisioning resources and instead look at automating common operational tasks.  For this blog post I want to cover how the Technical Marketing team for Ansible orchestrates a pipeline for demos and workshops with Ansible and how we integrate that with custom AMIs (Amazon Machine Images) created with Packer.  Packer is an open source tool that allows IT operators to standardize and automate the process of building system images.

For some of our self-paced interactive hands-on labs on Ansible.com, we need to spin up images in seconds. In an example automation pipeline we will:

  1. Provision a virtual instance.
  2. Use Ansible Automation Platform to install an application; in my case, I am literally installing our product Ansible Automation Platform (is that too meta?).
  3. After the application install, set up the lab guides, pre-load automation controller with some job templates, create inventory and credentials and even set up SSL certificates.  

While this pipeline is automated, it still takes a few minutes to run, and web users are unlikely to be patient. The Netflix era means people want instant gratification! Installing automation controller alone might take five to ten minutes, so I need a faster method to deploy.

cloud automation pipeline diagram

What I can do is combine our normal Ansible automation pipeline with Packer and pre-build the cloud instances so they already have the application installed, configured, and ready to go as soon as they boot. Packer will provision a specific machine image on my public cloud (Azure, AWS, GCP), run the commands and changes I need, and then publish a new image with all the changes I made to the base image. In my case, I use Ansible the same way. In my Packer HCL (HashiCorp Configuration Language) file I have an Ansible provisioner:

provisioner "ansible" {
  command                 = "ansible-playbook"
  playbook_file           = "pre_build_controller.yml"
  user                    = "ec2-user"
  inventory_file_template = "controller ansible_host={{ .Host }} ansible_user={{ .User }} ansible_port={{ .Port }}"
  extra_arguments         = local.extra_args
}

The Red Hat Ansible Tech Marketing example can be found on GitHub.

This simple provisioner plugin executes the Ansible Playbook pre_build_controller.yml. I can also use Ansible Automation Platform to orchestrate the whole process by kicking off Packer and then continuing on. Anything I can do ahead of time, I can pre-build into the image. This means there is less automation to do at boot time (sometimes referred to as "automation just in time"). The new process looks like this diagram:

create pre-built image diagram

These two processes, building images and serving a demo environment, are actually independent of each other.  Depending on how often a pre-built image needs to be executed, we can schedule that in automation controller, or even generate them on-demand via webhooks. On-demand generation means as soon as someone changes an Ansible Playbook relevant to anything pre_build, we can have Ansible Automation Platform create the new image immediately, and even test it!
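Kicking off Packer from automation controller can be expressed as an ordinary playbook task. A sketch under stated assumptions: the template filename, working directory, and the presence of the packer binary on the PATH are all hypothetical:

```yaml
- name: Build the pre-configured image with Packer
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Run packer build
      ansible.builtin.command:
        cmd: packer build pre_build_controller.pkr.hcl   # hypothetical template file
        chdir: /opt/packer-templates                     # hypothetical directory
      changed_when: true   # a successful build always produces a new image
```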

Sharing and copying cloud instances

Once we create a pre_built AMI, we need to make sure we can use it in multiple regions and in other accounts. With public marketplace images you can use automation tricks like the ec2_ami_info module for dynamic lookups, but we have now essentially created private AMIs that we need to copy to other regions, or share with other AWS accounts, so they have access to these pre_built images. To solve this problem we can use automation, and I have created an Ansible Content Collection, ansible_cloud.share_ami.

This Collection currently has two roles available that will assist cloud administrators, copy and share.

Copy

This role will copy an AMI from one region to any other specified regions. This means you can use Packer to create it just once, and have Ansible take care of copying it to any other regions and returning a list of new AMIs per region.

- name: copy ami
  include_role:
    name: ansible_cloud.share_ami.copy
  vars:
    ami_list: "{{ my_ami_list }}"
    copy_to_regions: "{{ my_copy_to_regions }}"

Where your variable file looks like this:

my_ami_list:
  ap-northeast-1: ami-01334example
  ap-southeast-1: ami-0b3f3example
  eu-central-1: ami-03a5732example
  us-east-1: ami-01da94de9cexample
my_copy_to_regions:
  - us-west-1
  - us-east-2

In this case, each of the four AMIs will be copied to us-west-1 and us-east-2, and the new AMI identifiers will be returned to your terminal window or the automation controller console.
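Under the hood, a copy like this can be expressed with the ec2_ami_copy module, looped over destination regions. A minimal single-copy sketch (the module namespace is an assumption about how the role might be implemented; the image ID is taken from the variable file above and the new AMI name is hypothetical):

```yaml
- name: Copy one AMI from us-east-1 to us-west-1
  community.aws.ec2_ami_copy:                # assumption: the role wraps a module like this
    source_region: us-east-1
    region: us-west-1
    source_image_id: ami-01da94de9cexample   # ID from the variable file above
    name: "pre_built controller image"       # hypothetical name for the new AMI
  register: copied_ami

- name: Show the new AMI id in the destination region
  ansible.builtin.debug:
    msg: "{{ copied_ami.image_id }}"
```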

Share

This role will share an AMI from one account and region with another account (in the same region). This allows you to share your pre_built AMIs with as many accounts as you want, really quickly.

- name: share ami
  include_role:
    name: ansible_cloud.share_ami.share
  vars:
    user_id_list: "{{ account_list }}"
    ami_list: "{{ my_ami_list }}"

Where your variable file looks like this:

my_ami_list:
  ap-northeast-1: ami-01334example
  ap-southeast-1: ami-0b3f3example
  eu-central-1: ami-03a5732example
  us-east-1: ami-01da94de9cexample
  us-east-2: ami-009f8b2c6dexample
account_list:
  - "11463example"
  - "90073example"
  - "71963example"
  - "07923example"

This would share these five AMIs with the four accounts listed. There are also two optional variables for the share role: new_ami_name, which names the AMI (i.e. adds the tag name: "whatever you put"), and new_tag, which adds a hard-coded ansiblecloud tag (i.e. ansiblecloud: "whatever you put"). This could be further customized to add as many tags as you want to your AMIs to help keep track of them.

new_ami_name: "RHEL 8.6 with automation controller"
new_tag: "my test"

Now you can see one of the many ways that Ansible Automation Platform and Packer can easily and seamlessly work together to accomplish cloud automation tasks.  If you want more blogs on Ansible and Packer or Ansible and Terraform, please let us know!