
Automating JBoss Fuse Deployment in the Cloud - PART 1

The first of a three-part series covering the steps to automating a JBoss Fuse deployment in the cloud with Amazon EC2 & Ansible - how to provision Amazon EC2 instances using Ansible.

Welcome to the first of my three-part series about how to automate a JBoss Fuse deployment on the Cloud.

This series is designed to provide a comprehensive walk-through of the steps involved in automating a JBoss Fuse deployment on the cloud - from the creation of EC2 instances, and the installation and configuration of the JBoss Fuse Fabric cluster across those instances, through to adding Fabric containers to an existing Fabric Ensemble. 


PART 1: How to provision Amazon EC2 instances using Ansible

To start this blog series, I will be demonstrating how you can create the EC2 instances using the Ansible EC2 module.  

WORKSTATION & PREREQUISITES 

For this project, I used a CentOS 7 virtual machine running on AWS, which I configured as the Ansible host to be used for creating and running the Ansible Playbooks.

To run the playbook on that Ansible Host, the following must be in place:

  • An AWS account with IAM permissions to create and destroy EC2 instances
  • An AWS Region to use
  • An AMI ID for a CentOS 7 image (used to create the instances)
  • Ansible installed
  • The boto Python client for Amazon Web Services installed on the Ansible controller (https://pypi.python.org/pypi/boto)
  • A key pair configured for accessing the account (the boto client configured with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
  • An EC2 Security Group configured to allow access to the JBoss Fuse ports

For the purposes of this blog, we shall assume you have an AWS account with all the correct permissions, that you know which AWS region you will be using, and that you have an AMI ID you want to use for the CentOS image.

Install and connect

We will begin by installing the required dependencies for Ansible and AWS. To do this, run the following commands on the Ansible host (on CentOS 7, the ansible package is provided by the EPEL repository):

sudo pip install --upgrade pip
sudo pip install boto
sudo yum install -y epel-release
sudo yum install -y ansible

This installs the boto Python client and Ansible.

To connect to AWS, you will need the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for a user that has the correct privileges to create EC2 instances.  These can be obtained by logging onto AWS, navigating to “Identity and Access Management”, selecting the required user, and clicking “Create Access Key”.  This generates values like the following:

Access Key ID: APIAJG5FJWPDDAPXDFEB
Secret Access Key: CAmgJ37s8IGLgVvQfJn7SEeCND/rjWhKJKsHCTLN

When prompted, download the credentials file and save it to a safe location.

Then, add these values to the file ~/.boto, located in the home directory of the user that will run the Ansible Playbook.  boto expects an INI-style file with a [Credentials] section, so it should look something like this:

[Credentials]
aws_access_key_id=APIAJG5FJWPDDAPXDFEB
aws_secret_access_key=CAmgJ37s8IGLgVvQfJn7SEeCND/rjWhKJKsHCTLN
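As an aside, since ~/.boto is a plain INI file with a [Credentials] section, a quick way to see how boto's config reader interprets it is to parse the example values with Python 3's standard configparser (illustration only; boto does this for you):

```python
# Parse a ~/.boto-style INI file (example values from above) to show
# how an INI reader sees the credentials section.
import configparser

boto_file = """\
[Credentials]
aws_access_key_id=APIAJG5FJWPDDAPXDFEB
aws_secret_access_key=CAmgJ37s8IGLgVvQfJn7SEeCND/rjWhKJKsHCTLN
"""

cfg = configparser.ConfigParser()
cfg.read_string(boto_file)
print(cfg["Credentials"]["aws_access_key_id"])  # APIAJG5FJWPDDAPXDFEB
```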

CREATING THE PLAYBOOK

Now that we have fulfilled our prerequisites, we can move on to the next steps for creating the playbook.

We will start by running through the folder structure and adding the inventory, variables and play source files for the playbook.

Inventory and Variables

It is important that the playbook can be used across different environments. So, the first task is to define the inventory and data structure required to model the variables used by the playbook, allowing for the multiple environments.

The environments folder is used to store the inventory and configuration information used by the playbook.  The configuration for each environment is then defined in a subfolder and contains the inventory and variables for that specific environment.  When the playbook is executed, the -i command line switch is used to reference the inventory folder for the target environment.
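Putting together the file paths used throughout this post, the folder structure looks like this:

```
.
├── environments
│   └── ec2
│       ├── ec2-fabric-inventory
│       └── group_vars
│           └── all
└── plays
    └── aws
        ├── ec2_provision.yml
        └── ec2_host.yml
```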

The hosts that are to be created are defined in the inventory file ec2-fabric-inventory, located in the environments/ec2 folder. Create the file and add the following entries:

[fabric-master]
fabricserver01 container_name=root1

[fabric-servers]
fabricserver02 container_name=root2
fabricserver03 container_name=root3

[broker-containers]
fusebroker01 container_name=broker1
fusebroker02 container_name=broker2

[service-containers]
fuseintegration01 container_name=integration1
fuseintegration02 container_name=integration2

This creates four groups: ‘fabric-master’, ‘fabric-servers’, ‘broker-containers’ and ‘service-containers’, and maps each EC2 instance to a group. A host variable defines the container name that will be used when installing JBoss Fuse, which will be covered in Part 2 of this series.
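To make the inventory semantics concrete, here is a rough Python sketch of how an INI-style inventory like this maps to groups and per-host variables (Ansible's own parser is far more capable; this is purely illustrative):

```python
def parse_ini_inventory(text):
    """Map an INI-style inventory to {group: {host: {var: value}}}."""
    groups = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]          # new group header
            groups[current] = {}
            continue
        host, *pairs = line.split()       # hostname, then key=value host vars
        groups[current][host] = dict(p.split("=", 1) for p in pairs)
    return groups

inventory = """\
[fabric-master]
fabricserver01 container_name=root1

[fabric-servers]
fabricserver02 container_name=root2
fabricserver03 container_name=root3
"""

groups = parse_ini_inventory(inventory)
print(groups["fabric-servers"]["fabricserver02"]["container_name"])  # root2
```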

The group_vars folder contains files that define group-specific variables; each file is named after a group defined in the hosts file.  This playbook does not define variables at group level and instead uses the special ‘all’ file to define variables common to every group.

Create the environments/ec2/group_vars/all file and add the following contents to define the common variables used by all plays:

nexus_repo_url: "http://<Nexus Server : port>/nexus/content/repositories/home/"

fabric_mvn_repo: "http://<Nexus Server : port>/nexus/content/groups/FuseFabric/"

fuse_group_id: "jboss"
fuse_artifact_id: "jboss-fuse-karaf"
fuse_version: "6.3.0.redhat-187"

fuse_base: "/opt/fuse"
fuse_home: "{{fuse_base}}/jboss-fuse-{{fuse_version}}"
fuse_user: "fuse"
fuse_group: "fuse"

fabric_user: "admin"
fabric_pwd: "admin"
fuse_client: "{{fuse_home}}/bin/client -r 30 -d 10 -u {{fabric_user}} -p {{fabric_pwd}}"

zookeeper_pwd: "zookeeper" 

broker_containers: "broker1,broker2"
broker_group: "manchester"
broker_name: "defaultBroker"

wait_for_containers: 60

ec2_keypair: "<user>"
ec2_security_group: "sg-d3d70daa"
ec2_instance_type: "t2.small"
ec2_image: "ami-7abd0209"
ec2_subnet_ids: ['subnet-d290608b']
ec2_region: "eu-west-1"
ec2_ssh_user: "centos"
ec2_ssh_private_key_file: "~/.ssh/<user>.pem"
ec2_volumes:
  - device_name: /dev/sda1
    device_type: gp2
    volume_size: 25 # size of the root disk

For this, we have used a Nexus server that stores the Fuse distributions, which we will retrieve later in the series.
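Note how some of these variables are built from others, for example fuse_home and fuse_client. As a quick illustration, here is a plain-Python stand-in for the Jinja2 templating Ansible performs when it resolves them (not Ansible's actual templater):

```python
# Plain-Python stand-in for the Jinja2 variable resolution that Ansible
# performs on group_vars/all (illustration only).
group_vars = {
    "fuse_base": "/opt/fuse",
    "fuse_version": "6.3.0.redhat-187",
    "fabric_user": "admin",
    "fabric_pwd": "admin",
}
# fuse_home: "{{fuse_base}}/jboss-fuse-{{fuse_version}}"
group_vars["fuse_home"] = "{fuse_base}/jboss-fuse-{fuse_version}".format(**group_vars)
# fuse_client: "{{fuse_home}}/bin/client -r 30 -d 10 -u {{fabric_user}} -p {{fabric_pwd}}"
group_vars["fuse_client"] = (
    "{fuse_home}/bin/client -r 30 -d 10 "
    "-u {fabric_user} -p {fabric_pwd}".format(**group_vars)
)
print(group_vars["fuse_home"])  # /opt/fuse/jboss-fuse-6.3.0.redhat-187
```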

Plays

The plays folder contains the main Playbook files that are used to define the tasks and roles that need to be run to create the EC2 instances.  Create the playbook file plays/aws/ec2_host.yml and add the following content:

- name: launch ec2 instances
  ec2:
    keypair: "{{ ec2_keypair }}"
    group_id: "{{ ec2_security_group }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_image }}"
    vpc_subnet_id: "{{ ec2_subnet_ids|random }}"
    region: "{{ ec2_region }}"
    instance_tags:
      Name: "{{inventory_hostname}}"
      ansible_group: "{{ hostvars[inventory_hostname].group_names[0] }}"
      environment: "{{ec2_env}}"
      root_container: "{{container_name}}"
    volumes: "{{ ec2_volumes }}"
    assign_public_ip: yes
    wait: true
    count: 1
  register: ec2
 
- pause:
    seconds: 30  

This play calls the EC2 module, which creates the EC2 instance, reading the variables defined in the file group_vars/all to set the instance’s properties. The tags ‘Name’, ‘ansible_group’, ‘environment’ and ‘root_container’ are added to the EC2 instance. These tags are used as filters when dynamically discovering the public IP addresses of the EC2 instances to install JBoss Fuse.
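If you want to confirm what was registered, a small hypothetical debug task can be appended after the pause. This assumes the registered result exposes an instances list with id and public_ip fields, which matches the ec2 module’s documented return values:

```yaml
# Hypothetical verification step: print each launched instance and its public IP.
- debug:
    msg: "Launched {{ item.id }} with public IP {{ item.public_ip }}"
  with_items: "{{ ec2.instances }}"
```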

Next, create the playbook file plays/aws/ec2_provision.yml and add the following content:

---
- hosts: fabric-master:fabric-servers:broker-containers:service-containers
  connection: local 
  become: False
  gather_facts: no

  tasks:
    - include: ec2_host.yml

This play runs the ec2_host.yml play for the hosts defined in the fabric-master, fabric-servers, broker-containers and service-containers groups. The play is run locally on the Ansible host (connection: local) which uses the configured boto client to connect to AWS when creating the EC2 instances.

PROVISIONING THE INSTANCES

Finally, to provision the EC2 instances, run the following command:

ansible-playbook -i environments/ec2/ec2-fabric-inventory -e "ec2_env=Test" plays/aws/ec2_provision.yml

CONCLUSION

In this blog I ran through how to set up an Ansible host and define a basic playbook to create EC2 instances using the EC2 module and boto client. From here, if you wanted to speed up the provisioning of infrastructure, you would just need to create a new environment; this only requires another set of hosts and group_vars files to describe the environment.

The process I have covered in this post demonstrates the principles of modelling Infrastructure as Code (IaC), which allows you to build consistent environments in a repeatable manner, and provides the base we need for Part 2, where we will be installing and configuring a JBoss Fuse Fabric Ensemble in these EC2 instances. 

Comment below if you have any questions!