Ansible | Cloud Academy Blog
https://cloudacademy.com/blog/category/ansible-2/

Top 20 Open Source Tools for DevOps Success
https://cloudacademy.com/blog/top-20-open-source-tools-for-devops-success/ | Tue, 09 Jul 2019

Open source tools each perform a very specific task, and their source code is openly published for use or modification free of charge. I’ve written about DevOps multiple times on this blog, and I’ll reiterate the point that DevOps is not about specific tools. It’s a philosophy for building and improving software value streams, and it rests on three principles: flow, feedback, and learning.

The philosophy is simple: Optimize for fast flow from development to production, integrate feedback from production into development, and continuously experiment to improve that process. These principles manifest themselves in software teams as continuous delivery (and hopefully deployment), highly integrated telemetry, and a culture driven by learning and experimentation. That said, certain tools make achieving flow, feedback, and learning easier. You don’t have to shell out big bucks to third-party vendors though. You can build a DevOps value stream with established open source tools.

Let’s start with the principle of flow and what the open source community has to offer for supporting continuous delivery. In this article, we’ll cover the top 20 open source tools to achieve DevOps success. But to dive deeper into deployment pipelines and the role different tools play, check out Cloud Academy’s DevOps – Continuous Integration and Continuous Delivery (CI/CD) Tools and Services Learning Path.

Open Source Continuous Delivery

1. GitLab is a great project for source control management, configuring continuous integration, and managing deployments. GitLab offers a unified interface for continuous integration and deployment branded as “Auto DevOps.” Team members can trigger deploys, automatically create dedicated environments for a pull request, and see test results, all within the same system.

2. Docker and Kubernetes. Docker, along with associated tools like docker-compose, makes it easy to maintain development environments and work with any language or framework. Kubernetes is the go-to container orchestration platform today, so look here first when deploying containerized applications to production (and dev, test, staging, etc.).
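For readers who haven’t used docker-compose, here is a minimal, hedged sketch of a compose file that brings up an application container alongside a supporting service; the image names and ports are illustrative assumptions, not something from the original post.

# docker-compose.yml – minimal sketch; image names and ports are illustrative
version: "3.8"
services:
  web:
    image: nginx:alpine      # stand-in for your application image
    ports:
      - "8080:80"            # host:container port mapping
  cache:
    image: redis:7-alpine    # supporting service; both start with a single docker-compose up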

3. Spinnaker is designed for continuous delivery. It removes grunt work from packaging and deploying applications, and it has built-in support for continuous delivery practices like canary deployments, blue-green deploys, and even percentage-based rollouts. Spinnaker abstracts away the underlying infrastructure so you can build a continuous delivery pipeline on AWS, GCP, or even on your own Kubernetes cluster.

Infrastructure-as-Code

The underlying infrastructure must be created and configured whether it lives with a cloud provider or under a container orchestrator. Infrastructure-as-code is the DevOps way to do it.

4. Terraform (from Hashicorp) is the best tool for open source infrastructure-as-code. It supports AWS, GCP, Azure, DigitalOcean, and more using a declarative language. Terraform handles the underlying infrastructure such as EC2 instances, networking, and load balancers. It’s not intended to configure software running on that infrastructure. That’s where configuration management and immutable infrastructure tools have a role to play.

5. Packer (also from Hashicorp) is a tool for building immutable infrastructure. Packer can build Docker images, Amazon Machine Images, and other virtual machine formats. Its flexibility makes it an easy choice for the “package” step in cloud-based deployment processes. You can even integrate Packer and Spinnaker for golden image deployments.

6-9. Ansible, Chef, Puppet, and SaltStack are configuration management tools. Each varies slightly in design and intended use, but they’re all meant to configure mutable state across your infrastructure. The odds are you’ll end up mixing Terraform, Ansible, and Packer for a complete infrastructure-as-code solution. Cloud Academy’s Cloud Configuration Management Tools Learning Path gives you an overview of configuration management, and then introduces you to three of the most common tools used today: Ansible, Puppet, and Chef. Cloud Academy’s Ansible Learning Path, developed in partnership with Ansible, teaches configuration management and application deployment. It demonstrates how Ansible’s flexibility can be used to solve common DevOps problems.
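If you’re new to configuration management, here is a minimal, hedged Ansible sketch of the core idea: declare the state you want and let the tool converge the hosts toward it. The package and service names assume a yum-based host and are purely illustrative.

# ntp.yml – illustrative configuration management sketch (assumes a yum-based host)
- hosts: all
  become: yes
  tasks:
    - name: ensure the NTP package is installed
      yum: name=ntp state=present
    - name: ensure the NTP service is running and enabled at boot
      service: name=ntpd state=started enabled=yes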


Open Source Telemetry

The SDLC really starts when code enters production. The DevOps principle of feedback calls for using production telemetry to inform development work. In other words: use real-time operational data such as time series data, logs, and alerts to understand reality and act accordingly. The FOSS community supports multiple projects to bring telemetry into your day-to-day work.

10. Prometheus is a Cloud Native Computing Foundation (CNCF) project for managing time series data and alerts. It’s integrated into Kubernetes, another CNCF project, as well. In fact, many of the CNCF projects prefer Prometheus for metric data. Support is not limited to CNCF projects either. Prometheus is a strong choice for many different infrastructures because it uses an open API format, includes alert support, and integrates with many common components.
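As a hedged illustration, a minimal prometheus.yml scrape configuration that tells Prometheus where to pull metrics from looks roughly like the sketch below; the job name and target address are assumptions made for the example.

# prometheus.yml – illustrative scrape configuration
global:
  scrape_interval: 15s          # how often Prometheus pulls metrics from its targets
scrape_configs:
  - job_name: node              # assumed job name
    static_configs:
      - targets:
          - "localhost:9100"    # assumed node_exporter endpoint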

11. Statsd is a Prometheus alternative for time series data. Prometheus uses a pull approach, which is good for understanding whether a monitored system is unavailable, but it requires registering new systems with Prometheus. Statsd, on the other hand, uses a push model: any system can push data into a statsd server, though data is sent over UDP, so delivery is not guaranteed. Statsd, unlike Prometheus, only supports time series data, so you’ll need another tool to manage alerts.

12. Grafana is for data visualization. Projects like Prometheus and statsd only handle data collection; they rely on other tools for visualization. This is where Grafana comes in. Grafana is a flexible visualization system with integrations for popular data sources like Prometheus, Statsd, and AWS CloudWatch. Grafana dashboards are just text files, which makes them a natural fit for infrastructure-as-code practices.

13. The Elastic Stack is a complete solution for time series data and logs. The Elastic Stack uses Elasticsearch for time series data and log storage, paired with Kibana for visualization. Logstash collects and transforms logs from various components, like web server logs or Redis server logs, into a standard format.

14. Fluentd is another CNCF telemetry project. It acts as a unified logging layer for ingestion, transformation, and routing. Data streams may be forwarded to multiple destinations, like statsd for real-time interactions or S3 for archiving. Fluentd supports many data sources and data outputs. Projects like Fluentd are especially useful for connecting disparate systems to a standard set of upstream tools.

15. Jaeger is a distributed request tracing project compatible with OpenTracing. Traces track individual interactions within a system across all instrumented components, with latency and other metadata. This is a must for microservice and other distributed architectures, since engineers can pinpoint where, what, and when something went wrong.

Expanding Out

The third way of DevOps calls for continuous improvement through experimentation and learning. Once the continuous delivery pipeline is established along with telemetry, teams can experiment to improve velocity, quality, and customer satisfaction. Here are some projects that help teams improve different aspects of their process.

16. Chaos Monkey is a project by Netflix to introduce chaos into running systems. The idea is to introduce faults into a system to increase reliability and durability. This is part of the principles of chaos engineering and is further described in Release It! and Google’s Site Reliability Engineering book. The idea of willingly breaking your production environment may sound foreign, but doing so reveals unknowns and trains teams to design away possible failure scenarios. You don’t have to go all in at once either. You can set rules and restrictions so you don’t destroy your production environment until you’re ready.

17. Vault by Hashicorp is a tool for securing, storing, and controlling access to tokens, passwords, certificates, encryption keys and other sensitive data using a UI, CLI, or HTTP API. It’s great for info-sec minded teams looking for a better solution than text files or environment variables.

Building and deploying software

You’ll encounter some of these tools building and deploying software. This list isn’t exhaustive by any means.

18. Nomad is a lightweight Kubernetes alternative.

19. GoCD is another CI and deployment pipeline option.

20. The Serverless Framework opens the door to an entirely new architecture. Just consider the list of CNCF projects. You’re likely to uncover tools for scenarios you never considered. DevOps-focused teams will assuredly use a mix of FOSS and proprietary software when building their systems. Engineers must understand how the different projects fit into their overall architecture and leverage them for best effect.

Also keep in mind that these projects are not infrastructure specific. You can use them for your on-premises infrastructure, AWS, GCP, or Azure systems. Cloud Academy’s Terraform Learning Path teaches students to achieve DevOps success with Terraform and infrastructure-as-code, covering AWS and Azure. Engineers can learn these tools and keep their skills portable across different setups.

Don’t get lost in tooling though. You can achieve DevOps success irrespective of the underlying tools if the right culture is in place. Check out the DevOps Playbook – Moving to a DevOps Culture. The secret is to build on the philosophy that values flow, feedback, and learning, and to realize those principles through tools. Learn the ideas, build a culture, and the rest will sort itself out.

What is Ansible?
https://cloudacademy.com/blog/what-is-ansible/ | Mon, 04 Mar 2019

What is Ansible? Ansible is an open-source IT automation engine that can remove drudgery from your work life and dramatically improve the scalability, consistency, and reliability of your IT environment. We’ll start to explore how to automate repetitive system administration tasks using Ansible, and if you want to learn more, you can go much deeper into how to use Ansible with Cloud Academy’s Introduction to Ansible learning path.

What is Ansible and what can it automate?

You can use Ansible to automate three types of tasks:

  • Provisioning: Set up the various servers you need in your infrastructure.
  • Configuration management: Change the configuration of an application, OS, or device; start and stop services; install or update applications; implement a security policy; or perform a wide variety of other configuration tasks.
  • Application deployment: Make DevOps easier by automating the deployment of internally developed applications to your production systems.

Ansible can automate IT environments whether they are hosted on traditional bare metal servers, virtualization platforms, or in the cloud. It can also automate the configuration of a wide range of systems and devices such as databases, storage devices, networks, firewalls, and many others.

The best part is that you don’t even need to know the commands used to accomplish a particular task. You just need to specify what state you want the system to be in and Ansible will take care of it. For example, to ensure that your web servers are running the latest version of Apache, you could use a playbook similar to the following and Ansible would handle the details.

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

The line in the above playbook that actually installs or updates Apache is “yum: name=httpd state=latest”. You just specify the name of the software package (httpd) and the desired state (latest), and Ansible does the rest. The other tasks in the playbook update the Apache config file, restart Apache, and enable Apache to run at boot time. Take a look at one of our previous blog posts on how to build Ansible playbooks.

Why Ansible?

There are many other IT automation tools available, including more mature ones like Puppet and Chef, so why would you choose Ansible? The main reason is simplicity. Michael DeHaan, the creator of Ansible, already had a lot of experience with other configuration management tools when he decided to develop a new one. He said that he wanted “a tool that you could not use for six months, come back to, and still remember.”

DeHaan accomplished this by using YAML, a simple configuration language. Puppet and Chef, on the other hand, use Ruby, which is more difficult to learn. This makes Ansible especially appealing to system administrators.

DeHaan also simplified Ansible deployment by making it agentless. That is, instead of having to install an agent on every system you want to manage (as you have to do with Puppet and Chef), Ansible just requires that systems have Python and SSH (on Linux servers) or PowerShell and WinRM (on Windows servers).
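Because no agent is involved, bringing a new machine under management is as simple as listing it in an inventory file and running a command over SSH. A minimal sketch, with illustrative host names:

# inventory – illustrative host names
[webservers]
web01.example.com
web02.example.com

# ad-hoc connectivity check; nothing beyond Python and SSH is needed on the targets
$ ansible webservers -i inventory -m ping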

What is Ansible? A New Learning Path

Although Ansible is easier to learn than many of the other IT automation engines, you still need to learn a lot before you can start using it. To help you with this, Cloud Academy has released its Introduction to Ansible learning path.

This learning path includes three video courses:

  • What is Configuration Management?: A high-level overview of configuration management concepts and software options.
  • Getting Started With Ansible: Covers everything from Ansible components to writing and debugging playbooks in YAML.
  • Introduction to Managing Ansible Infrastructure: An overview of Ansible Tower (Red Hat’s proprietary management add-on to Ansible) and Ansible Galaxy (a place to find and share Ansible content).

Hands-on practice is critical when learning a new technology, so we have also included two labs in the learning path.

Finally, you can test your knowledge of Ansible by taking the quizzes.

Watch this short video, taken from the Getting Started with Ansible Course, where we take a look at the most common Ansible use cases.

Conclusion

Whether you need to make your life easier by automating your administration tasks or you’re interested in becoming a DevOps professional, Ansible is a good place to start. Learn how to streamline your IT operations with Introduction to Ansible.

4 New Webinars for July 2016: Ansible, AWS Lambda, A/B Testing Algorithms in the Cloud, and Office Hours
https://cloudacademy.com/blog/cloud-training-webinars-ansible-aws-lambda-cloud-computing/ | Thu, 07 Jul 2016

Hello! This is Stefano. It’s been a while since my last post on our blog. I’ve been busy working with our great team at Cloud Academy, but I would like to use this article today to talk about something we’ve been really enjoying these last few months: doing cloud training webinars with you guys!

In June alone we had almost 2000 of you following our live webinars at Cloud Academy!
The great part? There are two, actually: they are completely free (yes, free!) and you can rewatch all of them here in our library! Joining us live is even better, as we usually have questions and answers with our expert team at the end of each session.

Who runs these webinars? Our team, of course! Everyone from our content team to our engineering and product teams participates to make them as valuable as possible. It’s no secret that we are in love with cloud technologies, and we use webinars to teach our customers about new experiments, tools, and technologies we implement here at Cloud Academy or that we enjoy using.

This month, we have some great topics to cover. Be sure to register as soon as possible! Last time, we maxed out all the available seats within a few days of our email, but this time we have more available – although they are going quickly 🙂 Thank you for helping us build this incredible series with more and more feedback every time, and feel free to send us suggestions or ask questions in the comments below.
Ready? Let’s talk about our 4 upcoming cloud webinars!

1. AWS Lambda Coding Session with our Sr. Engineer Alex Casalboni – July 14th – Register now for free!

This is Alex Casalboni taking us through AWS Lambda with a live coding session. Lambda became incredibly popular in the cloud industry and contributed to creating the serverless movement, something we covered in our last webinar with Austin, the creator of the Serverless framework.

Coding Session with AWS Lambda will be live on July 14th at 11AM Pacific Time – Register now to reserve your free seat!

2. Introduction to Ansible with our DevOps Engineer Ben Lambert – July 18th – Register now for free!

You’ve heard about Ansible, right? If not, go and check out our free Introduction to DevOps course at Cloud Academy, and you will learn why Ansible is one of the most important tools today in the DevOps industry and definitely one that has been very popular in the last 12 months. Our DevOps Engineer and Trainer, Ben Lambert, will introduce you to Ansible with a very guided approach, explaining how and why we should use it in our infrastructure and what kind of benefits we can get out of it.

Introduction to Ansible will be live on July 18th at 11AM (Pacific Time). Register now for free!

3. A/B Testing Data-Driven Algorithms in the Cloud with our Sr. Data Scientist, Roberto Turrin – July 25th – Register now for free!

Let’s take a look at some of our daily challenges at Cloud Academy building A/B tests for data-driven algorithms. Our Sr. Data Scientist, Roberto, will guide us through the idea and the objective behind this to understand how we used the cloud to A/B test our algorithms.

A/B Testing Data-Driven Algorithms in the Cloud will be live on July 25th at 11AM (Pacific Time). Register now for free!

4. Cloud Academy Office Hours – July 28th – Register now for free!

Have a question about your cloud training or careers in the industry? Our expert instructors are here to answer any questions you might have. It’s first come, first served, so be sure to sign up ASAP and get there on time to submit your question. We look forward to meeting you!

Cloud Academy Office Hours will be live on July 28th at 11AM (Pacific Time). Register now for free!

 

Deploy Web Applications on IaaS with Ansible
https://cloudacademy.com/blog/deploy-web-applications-on-iaas-with-ansible/ | Thu, 03 Mar 2016

This article explains how to easily deploy a web application on IaaS platforms using Ansible. We’ll see the big picture and then study the case of deploying a Symfony application.

What is Ansible?

Ansible is an automation framework written in Python. An Ansible script is basically a list of tasks written in YAML files, grouped in directories called “roles.” Each role has a purpose such as “install and configure MySQL.” The classic way of using Ansible is to run your Ansible script on your machine, and Ansible will remotely execute each task through an SSH connection on the server you are targeting.
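As a hedged sketch of what such a role might contain, the tasks file for an “install and configure MySQL” role could look like the following; the package and service names assume an Ubuntu target and are illustrative only.

# roles/mysql/tasks/main.yml – illustrative sketch
- name: install the MySQL server package
  apt: name=mysql-server state=present update_cache=yes
- name: ensure MySQL is running and enabled at boot
  service: name=mysql state=started enabled=yes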

If you don’t know how Ansible works, you should have a look at this previous post.

Development and production environment

For every DevOps project, you need a development and a production environment that meet the following conditions:

  • The development environment is as similar to the production environment as possible.
  • The development and the production environment can be recreated automatically.
  • The application code can also be deployed in an automated manner (continuous delivery/deployment).

The development environment

In order to create a development environment identical to production, we’ll use VirtualBox, Vagrant, and Ansible. VirtualBox is virtualization software that allows us to launch VMs on our machine. VirtualBox on its own is a bit cumbersome, so I suggest starting with Vagrant, which offers a simple interface to manage our VMs.

The first step is creating a VM on our development machine. To do this, describe the VM configuration in a Vagrantfile: choose which OS you want to use, the IP of the VM, and the path of the folder containing the application code to synchronize between the VM and the development machine. Here is an example Vagrantfile:

$HOSTNAME = "myapp.dev"
$BOX = "ubuntu/trusty64"
$IP = "10.0.0.10"
$MEMORY = ENV.has_key?('VM_MEMORY') ? ENV['VM_MEMORY'] : "2048"
$CPUS = ENV.has_key?('VM_CPUS') ? ENV['VM_CPUS'] : "2"
$EXEC_CAP = ENV.has_key?('VM_EXEC_CAP') ? ENV['VM_EXEC_CAP'] : "100"
Vagrant.configure("2") do |config|
  config.vm.hostname = $HOSTNAME
  config.vm.box = $BOX
  config.vm.network :private_network, ip: $IP
  config.ssh.forward_agent = true
  config.vm.synced_folder "./myapp", "/var/www/myapp/current", type: "nfs"
  config.vm.provider "virtualbox" do |v|
    v.name = "myapp_vagrant"
    v.customize ["modifyvm", :id, "--cpuexecutioncap", $EXEC_CAP]
    v.customize ["modifyvm", :id, "--memory", $MEMORY]
    v.customize ["modifyvm", :id, "--cpus", $CPUS]
  end
end

The command to create the VM using Vagrant is: vagrant up
Your VM should now be created on your machine; the next step is to provision it with your Ansible playbook.

Provision the VM with Ansible

To create your own Ansible playbook, you should have a look at Ansible Galaxy: there are many Ansible roles available, and you often don’t need to rewrite a role from scratch. You can also use generators such as this one. To get an idea of the best practices for writing a playbook for a small web application, I wrote an article about it. In the end, your Ansible directory should look something like this:


├── group_vars
│   ├─ prod
│   ├─ staging
│   └─ vagrant
├── hosts
│   ├─ prod
│   ├─ staging
│   └─ vagrant
├── roles
│   ├─ composer
│   ├─ ubuntu-apt
│   ├─ ubuntu-mysql
│   ├─ ubuntu-php
│   └─ ubuntu-symfony-nginx
├── vars
│   └─ main.yml
└─ playbook.yml

You have to configure your playbook for your vagrant.
First, in the hosts/vagrant file, you have to specify the IP of your vagrant:

[vagrant]
10.0.0.10 ansible_ssh_user=vagrant

If you want to choose vars that will only be used by your Vagrant VM, you have to put them in the group_vars/vagrant file. Basically, all your database passwords should be stored in this file, as you want different passwords for each server:

#Example of group_vars file
# List of databases to be created
postgresql_databases:
  - name: myapp
    uuid_ossp: yes
# List of users to be created
postgresql_users:
  - name: myapp
    password: myapp
postgresql_user_privileges:
  - name: myapp
    db: myapp
    priv: "ALL"
    role_attr_flags: "SUPERUSER"
postgresql_listen_addresses:
  - "*"
dev_env: true

If you have some variables that should be applied on all your servers, they should be placed in the vars/main.yml file:

#Example of vars/main.yml file
timezone: Europe/Paris
port: 80
php_date_timezone: "UTC"
php_packages:
  - php5
  - php5-fpm
  - php5-mcrypt
  - php5-cli
  - php5-common
  - php5-curl
  - php5-dev
  - php5-gd
  - php5-ldap
  - php-apc
  - php5-apcu
  - php5-pgsql
  - php5-intl
  - php5-mysql
  - php5-mongo

Finally, your playbook.yml should call all the roles you need to make your VM able to run your application:

#Example of a playbook for a Symfony application
- name: Provisioning myapp
  hosts: all
  become: true
  vars_files:
    - vars/main.yml
  roles:
    - ubuntu-apt
    - create-www-data-user
    - ssh-keys # need create-www-data-user
    - ubuntu-php
    - composer
    - ubuntu-symfony-nginx
    - ubuntu-postgresql
    - blackfire
    - newrelic-php
    - nodejs

Ansible needs SSH access to the Vagrant VM, so you need to add your SSH key to it:

  1. Copy your public key cat ~/.ssh/id_rsa.pub
  2. Log in the vagrant with `vagrant ssh`
  3. Add your key in the authorized_keys file: vi .ssh/authorized_keys
  4. Exit the VM and try to log in with ssh vagrant@10.0.0.10

If that works, you can now provision the VM using your Ansible playbook: ansible-playbook playbook.yml -i hosts/vagrant. If your playbook has no errors, you will see your project in your browser at the IP of the virtual machine. If you see it, congratulations, your development environment is ready.

The production environment

The production environment is easier to set up, as the server is already created by the IaaS provider. You need to perform these steps:

  1. Add your SSH key to your server and modify the hosts/prod file so Ansible can find it (see the inventory sketch after this list)
  2. Update the group_vars/prod file with your prod parameters
  3. Use Ansible to provision your instance in the cloud: ansible-playbook playbook.yml -i hosts/prod
  4. Then you should use a deployment tool like Capistrano to deploy the application’s files to the host.
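As a hedged sketch, the hosts/prod inventory referenced in step 1 could be as small as this; the IP address and SSH user are placeholders:

[prod]
203.0.113.10 ansible_ssh_user=ubuntu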

Your application is now fully deployed in production.

Conclusion

Ansible is a great tool for provisioning. Its main advantage is simplicity: Ansible playbooks are easy to understand while remaining powerful. There are a number of articles on Ansible and Ansible playbooks, and I have tried to add something valuable to the discussion by focusing on the deployment of web applications on IaaS with Ansible in a step-by-step manner.
If you have any comments or feedback, you can start or join a discussion below. You can also find me on Twitter.

Ansible and AWS: Cloud IT Automation Management
https://cloudacademy.com/blog/ansible-aws/ | Wed, 21 Oct 2015

With things moving a bit more slowly through the holiday season, we’re going to re-run some of our most popular posts from 2015. Enjoy!

The kinds of virtual infrastructures that define the cloud computing ecosystem demand a high level of automation. As the number of virtual servers used for individual deployments grows, the complexity of that widely distributed automation grows, too. Ansible, a relative newcomer to the IT automation and orchestration market, offers some unique and compelling features.

The goal, as with all such open source tools like CFEngine, Chef, and Puppet, is straightforward: to simplify IT infrastructure management and make organizations comfortable with their ever growing IT portfolio. Chef and Puppet have already established themselves by being used to manage giants like Facebook, Google, and eBay. Ansible, especially considering their very recent acquisition by Red Hat, is increasingly seen as a major player, too.

Besides resource provisioning and configuration management, Ansible can also orchestrate complex sequences of events like rolling upgrades and zero-downtime provisioning in a simple or multi-tier application environment. The power of Ansible is not limited to managing servers: it can also manage network switches, firewalls, and load balancers. Ansible has been designed to work seamlessly within cloud environments like AWS, VMware, and Microsoft Azure.

Ansible’s unique feature set:

  1. Based on an agent-less architecture (unlike Chef or Puppet).
  2. Accessed mostly through SSH (it also has local and paramiko modes).
  3. No custom security infrastructure is required.
  4. Configurations (playbooks, modules, etc.) written in the easy-to-use YAML format.
  5. Shipped with more than 250 built-in modules.
  6. Full configuration management, orchestration, and deployment capability.
  7. Ansible interacts with its clients either through playbooks or a command-line tool.

Ansible Architecture

Ansible runs as a server on just about anything: even a humble PC or laptop. It has an inventory of hosts, modules, and playbooks that define various automation tasks. Individual modules are pushed to the managed machines via SSH and, once a result is returned, the modules are deleted – which is why server-based agent installations are unnecessary.

Installation

There are multiple ways to install Ansible. You can use yum or apt-get from a Linux repository, git checkout, or PIP. Here is a sample installation command, run through yum:

#yum install Ansible
===========================================================
Package                    Arch            Version                Repository                  Size
===========================================================
Installing:
ansible                    noarch          1.9.2-1.el6            epel                       1.7 M
Installing for dependencies:
PyYAML                     x86_64          3.10-3.1.el6           public_ol6_latest          157 k
libyaml                    x86_64          0.1.3-4.el6_6          public_ol6_latest           51 k
python-babel               noarch          0.9.4-5.1.el6          public_ol6_latest          1.4 M
python-crypto2.6           x86_64          2.6.1-2.el6            epel                       513 k
python-httplib2            noarch          0.7.7-1.el6            epel                        70 k
python-jinja2              x86_64          2.2.1-2.el6_5          public_ol6_latest          465 k
python-keyczar             noarch          0.71c-1.el6            epel                       219 k
python-pyasn1              noarch          0.0.12a-1.el6          public_ol6_latest           70 k
python-simplejson          x86_64          2.0.9-3.1.el6          public_ol6_latest          126 k
Transaction Summary
===========================================================
Install      10 Package(s)

As you will notice, this has installed Ansible-1.9.2 on the server. The default host inventory file should be located at /etc/ansible/hosts. You can configure your managed servers in this file. For example, a group called test-hosts might be configured as follows:

[test-hosts]
3.3.86.253
3.3.86.254

Ansible assumes you have SSH access from the Ansible server to the machines behind the above two IP addresses. To run a sample Ansible command, you can run the following:

[root@ansible opt]# ansible -m ping test-hosts --ask-pass -u "ansibleadmin"
SSH password:
3.3.86.254 | success >> {
"changed": false,
"ping": "pong"
}
3.3.86.253 | success >> {
"changed": false,
"ping": "pong"
}
[root@ansible opt]#

As you can see, Ansible is able to communicate with the listed servers without installing any agents as such. Assuming you have generated and placed ssh-keys in their required locations, everything works over SSH.

Ansible components

Inventory

The “inventory” is a configuration file where you define the host information. In the above /etc/ansible/hosts example, we declared two servers under test-hosts.

Playbooks

In most cases – especially in enterprise environments – you should use Ansible playbooks. A playbook is where you define how to apply policies, declare configurations, orchestrate steps, and launch tasks either synchronously or asynchronously on your servers. Each playbook is composed of one or more “plays”. Playbooks are normally maintained and managed in a version control system like Git. They are expressed in YAML (“YAML Ain’t Markup Language”).

Plays

Playbooks contain plays. Plays are essentially groups of tasks that are performed on defined hosts to enforce your defined functions. Each play must specify a host or group of hosts. For example, using:

 - hosts: all

…we specify all hosts. Note that YAML files are very sensitive to whitespace, so be careful!

Tasks

Tasks are actions carried out by playbooks. One example of a task in an Apache playbook is:

- name: Install Apache httpd

A task definition can contain modules such as yum, git, service, and copy.

Roles

A role is the Ansible way of bundling automation content and making it reusable. Roles are organizational components that can be assigned to a set of hosts to organize tasks. Therefore, instead of creating a monolithic playbook, we can create multiple roles, with each role assigned to complete a unit of work. For example: a webserver role can be defined to install Apache and Varnish on a specified group of servers.
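As an illustrative sketch (the role name is assumed), a play that assigns such a role to a host group is as short as:

# Apply a "webserver" role to every host in the webservers group
- hosts: webservers
  roles:
    - webserver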

Handlers

Handlers are similar to tasks except that a handler is executed only when it is called by an event – for example, a handler that starts the httpd service after a task installs httpd. A handler is called by the [notify] directive. Important: the name used in the notify directive and the handler’s name must be the same.

Templates

Template files are based on Python’s Jinja2 template engine and have a .j2 extension. You can, if you need, place the contents of your index.html file into a template file. But the real power of these files comes when you use variables. You can use Ansible’s [facts] and even call custom variables in these template files.

Variables

As the name suggests, you can include custom-made variables in your playbooks. Variables can be defined in five different ways:

1. Variables defined in the play under vars_files attribute:

vars_files:
- "/path/to/var/file"

2. Variables defined in <role>/vars/apache-install.yml
3. Variables passed through the command line:

# ansible-playbook apache-install.yml -e "http_port=80"

4. Variables defined in the play under vars

vars:
  http_port: 80

5. Variables defined in group_vars/ directory
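For example, a hedged sketch of a group_vars file for a webservers group (the variable names are illustrative):

# group_vars/webservers – illustrative
http_port: 80
max_clients: 200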

Sample Playbook:

## PLAYBOOK TO INSTALL AND CONFIGURE APACHE HTTP ON Servers
- hosts: all
  tasks:
   - name: Install Apache httpd
     yum: pkg=httpd state=installed
     notify:
       - Start Httpd
  handlers:
    - name: Start Httpd
      service: name=httpd state=started

You run the playbook from your Ansible server. The following command has been run from within the path where our apache-install.yml playbook is stored:

# ansible-playbook apache-install.yml  --ask-pass -u "ansibleadmin"

Ansible and AWS: Provisioning and Installation

Let’s try to provision an AWS EC2 instance using both the Ansible EC2 module and a playbook. A full example is beyond the scope of this document; however, there’s plenty of great documentation available. The ec2 module documentation can be seen here.

Step-1: Install python-boto on your Ansible host:

#yum install python-boto

Step-2:  Install argparse (in case you need it):

#yum install python-argparse.noarch

Create AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY environment variables (either export in a shell or place in your ~/.bashrc file)

Step-3: Add a local inventory to the /etc/ansible/hosts file:

[local]
localhost

Step-4: Run the following command (if you are running in adhoc mode):

# ansible localhost -m ec2 -a "image=ami-d44b4286 ec2_region=ap-southeast-1 instance_type=m3.medium count=1  keypair=ansible-key group=ansible-ws  wait=yes" -c  local

As I mentioned, you can also do the same thing through a playbook:

- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
     - name: Provision a set of instances
       ec2:
          key_name: ansible-key
          group: ansible-ws
          instance_type: m3.medium
          ec2_region: ap-southeast-1
          image: "ami-d44b4286"
          wait: true
          count: 1
          instance_tags:
             Name: Demo
       register: ec2

Conclusion

If you stayed with me so far, you should now have a fair idea of how Ansible and AWS can work together, and especially how well Ansible integrates with Amazon’s EC2. Ansible provides a great IT automation and orchestration tool for the cloud environment, and with so much consistency in its command syntax, it’s easy to create your own playbooks or use the out-of-the-box modules.

Building Ansible Playbooks Step-by-Step
https://cloudacademy.com/blog/building-ansible-playbooks-step-by-step/ | Tue, 27 Jan 2015

Update 2019: We’ve recently developed a Learning Path, Introduction to Ansible, which will help you get started using Ansible to automate common IT tasks. You will learn about configuration management, and you’ll be able to practice your knowledge of Ansible through a series of hands-on labs. Guy Hummel, our expert cloud trainer, has recently written an introductory post on What is Ansible?.


Learn to build Ansible playbooks with our guide, one step at a time

In our previous posts, we introduced Ansible fundamentals and dived deeper into Ansible playbooks. Now let’s learn to create an Ansible playbook step by step. Working with a playbook, we’ll go from deploying a simple HTML website to a complete LAMP stack.

Deploying Simple HTML Page

To deploy a simple HTML page, we need to ensure that Apache is installed and configured on our host machine. Therefore, in this section we will:

  • install Apache
  • start the Apache service
  • deploy a static webpage with images – This static webpage will leverage Ansible templates: it will display the text “Thank you for reading this post. My IP Address is <ip-address-of-instance>” and the Cloud Academy logo. To fetch the IP address of the host, it will rely on Ansible facts
  • restart Apache once the deployment is over

Before we move forward, let’s have a look at the high-level structure of this simple Ansible playbook.

site.yml – starting point of our ansible playbook
hosts – carrying hosts information
roles/ - defining what each type of server has to perform
       webservers/
              tasks/ - tasks performed on webservers
                     main.yml
              handlers/ - running tasks under particular events
                     main.yml
              templates/ - configuration files which can reference variables
                     index.html.j2
              files/ - files to be copied to webservers
                     cloud.png

Let’s go through the configuration file line by line and see how configuration works.
hosts – points to Ansible hosts. Here’s a possible syntax:

[webservers]
10.0.0.156

site.yml – the starting point for executing our Ansible playbook. Includes information about hosts and roles associated with them.

---
- name: install and configure webservers
  hosts: webservers
  remote_user: ec2-user
  sudo: yes
  roles:
    - webservers

If we want to log into our host machines using a different username and with sudo privileges, we need to use the “remote_user” and “sudo: yes” parameters in our site.yml file. There can be additional parameters too, but they’re not needed right now. Here, we have also defined the roles granted to hosts in the [webservers] group.

main.yml (Tasks) – This configuration file defines the tasks to be executed on hosts that have been granted the webservers role. It looks like:

---
# These tasks install and enable Apache on webservers
- name: ensure apache is installed
  yum: pkg=httpd state=latest
- name: ensure apache is running
  service: name=httpd state=running enabled=yes
- name: copy files to document root
  copy: src=cloud.png dest=/var/www/html/cloud.png
- name: copy application code to document root
  template: src=index.html.j2 dest=/var/www/html/index.html
  notify: restart apache

Since YAML files are so intuitive, we can easily see that this will install and run Apache on host instances and copy certain files and templates to the host’s document root.
main.yml (handlers) – This configuration file defines actions to be performed only upon notification from tasks or on state changes. In main.yml (tasks), we defined a notify: restart apache directive, which will restart Apache once the files and templates are copied to the hosts.

---
- name: restart apache
  service: name=httpd state=restarted

index.html.j2 (template) – a file you can deploy on hosts. However, template files also include some reference variables which are pulled from variables defined as part of an Ansible playbook or facts gathered from the hosts. Our index.html.j2 file looks like a regular html webpage with a referenced variable.

<html>
<head>
    <title>CloudAcademy Ansible Demo</title>
</head>
<body>
    <h1>
        Thank you for reading this post.
        My IP Address is {{ ansible_eth0.ipv4.address }}
    </h1>
    <br/><br/><br/>
    <p>
        <img src="cloud.png" alt="CloudAcademy Logo"/>
    </p>
</body>
</html>

We have declared a reference variable “{{ ansible_eth0.ipv4.address }}” which will print the IP address of the host on which this Ansible playbook is executed.
cloud.png (files) – The regular image file to be copied to hosts.

Once we have all the files created and present, we can execute an ansible-playbook command and configure our hosts.

build# ansible-playbook site.yml -i hosts
PLAY [install and configure webservers] ***************************************
GATHERING FACTS ***************************************************************
ok: [10.0.0.156]
TASK: [webservers | ensure apache is installed] *******************************
changed: [10.0.0.156]
TASK: [webservers | ensure apache is running] *********************************
changed: [10.0.0.156]
TASK: [webservers | copy files to document root] ******************************
changed: [10.0.0.156]
TASK: [webservers | copy application code to document root] *******************
changed: [10.0.0.156]
NOTIFIED: [webservers | restart apache] ***************************************
changed: [10.0.0.156]
PLAY RECAP ********************************************************************
10.0.0.156                 : ok=6   changed=5   unreachable=0   failed=0

That’s it. We have installed Apache and deployed our webpage using host-based files. On browsing to our host’s IP address, we will see our static webpage with the referenced variable’s value filled in.

Deploying a PHP webpage configured to work with a MySQL database

So until now, we’ve installed and started Apache, deployed a static webpage, and restarted Apache using handlers. Now we will upgrade the functionality of our existing Ansible playbook by adding additional features. Specifically, we’ll:

  • install PHP and related packages
  • install MySQL server
  • create databases in MySQL server
  • grant privileges to databases
  • deploy a PHP web page which will list the names of all the databases in our MySQL server and print certain facts about our host.

This will modify the structure of our existing Ansible playbook:

site.yml – starting point of our ansible playbook
hosts – carrying hosts information
group_vars
       all – carrying variables for groups
roles/ - defining what each type of server has to perform
       webservers/
              tasks/ - tasks performed on webservers
                     main.yml
              handlers/ - running tasks under particular events
                     main.yml
              templates/ - configuration files which can reference variables
                     index.php.j2
              files/ - files to be copied to webservers
                     cloud.png
       dbservers
              tasks/
                     main.yml

all (group_vars) : contains group-specific variables. Currently, we have only one group i.e., all.

dbuser: ansible
dbpassword: 12345

hosts : We have to update our hosts file if the webserver and database server are configured on the same host.

[all]
10.0.0.156

site.yml : Once we have updated our hosts file with a new group “all”, we have to update our site.yml file, which will grant the webservers and dbservers roles to the “all” host group.

---
- name: install and configure webservers
  hosts: all
  remote_user: ec2-user
  sudo: yes
  roles:
    - webservers
    - dbservers

main.yml (tasks for webservers): This YAML file will now install additional PHP related packages.

---
# These tasks install and enable Apache and PHP on webservers
- name: ensure apache,php related packages are installed
  yum: name={{ item }} state=present
  with_items:
    - httpd
    - php
    - php-mysql
- name: ensure apache is running
  service: name=httpd state=running enabled=yes
- name: copy files to document root
  copy: src=cloud.png dest=/var/www/html/cloud.png
- name: copy application code to document root
  template: src=index.php.j2 dest=/var/www/html/index.php
  notify: restart apache

index.php.j2 (templates): Instead of an HTML file, we’ve moved to index.php, which includes application code to print the names of all databases and other operating-system-related information:

<html>
<head>
       <title>CloudAcademy Ansible Demo</title>
</head>
<body>
    <h3>
        Thank you for reading this post. My IP Address is {{ ansible_eth0.ipv4.address }}.
        This is {{ ansible_system }} OS with {{ ansible_userspace_architecture }} architecture
    </h3>
    <p>
        <strong>List of Databases:</strong> <br/>
    <?php
    //Spoiler: don't do this at home!
    $dbobj = mysql_connect('{{ ansible_lo.ipv4.address }}', '{{ dbuser }}', '{{ dbpassword }}');
    if (!$dbobj) { die('Could not connect: ' . mysql_error()); }
    $result = mysql_query("SHOW DATABASES");
    while ($res = mysql_fetch_assoc($result)){
        echo $res['Database'] . "<br/>";
    }
    ?>
    </p>
    <br/>
    <p><img src="cloud.png" alt="CloudAcademy Logo"></p>
</body>
</html>

main.yml (tasks for dbservers) : This configuration file will install the MySQL server and MySQL Python packages, create databases, and create database users.

---
# These tasks install and configure MySQL on dbservers
- name: ensure mysql is installed
  yum: name={{ item }} state=present
  with_items:
    - mysql-server
    - MySQL-python
- name: ensure mysql is running
  service: name=mysqld state=running enabled=yes
- name: create application database
  mysql_db: name={{ item }} state=present
  with_items:
    - ansible_db01
    - ansible_db02
- name: create application user
  mysql_user: name={{ dbuser }} password={{ dbpassword }} priv=*.*:ALL state=present

That’s it. Our Ansible playbook to deploy a LAMP stack is now ready. We built a playbook that will install Apache, PHP, and MySQL server; create a MySQL user and databases; and deploy our application code, which prints information about the host and the list of databases.

To execute this Ansible playbook on the host, we will use the ansible-playbook command:

#ansible-playbook site.yml -i hosts
PLAY [install and configure webservers] ***************************************
GATHERING FACTS ***************************************************************
ok: [10.0.0.156]
TASK: [webservers | ensure apache,php related packages are installed] *********
changed: [10.0.0.156] => (item=httpd,php,php-mysql)
TASK: [webservers | ensure apache is running] *********************************
changed: [10.0.0.156]
TASK: [webservers | copy files to document root] ******************************
changed: [10.0.0.156]
TASK: [webservers | copy application code to document root] *******************
changed: [10.0.0.156]
TASK: [dbservers | ensure mysql is installed] *********************************
changed: [10.0.0.156] => (item=mysql-server,MySQL-python)
TASK: [dbservers | ensure mysql is running] ***********************************
changed: [10.0.0.156]
TASK: [dbservers | create application database] *******************************
changed: [10.0.0.156] => (item=ansible_db01)
changed: [10.0.0.156] => (item=ansible_db02)
TASK: [dbservers | create application user] ***********************************
changed: [10.0.0.156]
NOTIFIED: [webservers | restart apache] ***************************************
changed: [10.0.0.156]
PLAY RECAP *******************************************************************
10.0.0.156                 : ok=10   changed=9   unreachable=0   failed=0

Browsing to our host’s IP address will display the deployed PHP page with the list of databases and the host facts we referenced.
There’s lots more to learn about Ansible in future posts!

Going Deeper into Ansible Playbooks
https://cloudacademy.com/blog/going-deeper-into-ansible-playbooks/ | Mon, 12 Jan 2015

Update 2019: We’ve recently developed a Learning Path, Introduction to Ansible, which will help you get started using Ansible to automate common IT tasks. You will learn about configuration management, and you’ll be able to practice your knowledge of Ansible through a series of hands-on labs. Guy Hummel, our expert cloud trainer, has recently written an introductory post on What is Ansible?.


In a previous blog post, we introduced some basic Ansible fundamentals, installation procedures, and a guide to ad-hoc mode.

Ansible can be used in either ad-hoc or playbook mode. As covered in our previous post, ad-hoc mode allows direct management of your hosts by executing single-line commands and leveraging Ansible modules. Ad-hoc mode is useful when you plan to perform a quick and simple activity, like shutting down your hosts or checking connectivity between your Ansible server and hosts using ping. But when you plan to manage host configurations and deployments, Ansible playbooks become more attractive.

According to Ansible documentation, “Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce or a set of steps in a general IT process.”

Playbooks are written in human-readable YAML format and can be created either by placing everything in a single file or by following a structured model. Each Ansible playbook contains one or more plays to help you perform functions on different hosts. The goal of a play is to map roles to hosts and perform the tasks under a role on those hosts. Tasks are nothing but modules invoked against hosts.

Ansible Playbook Structure

Here is a sample Ansible playbook:

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

In the above Ansible playbook, we created the entire configuration as one single file. This is fine if you’re writing a playbook for a simple deployment or configuration. However, once you decide to implement complex deployment scenarios, it is better to use a structured model, which adds reusability.
The structure of an Ansible playbook:

site.yml
hosts
group_vars/
      group1
      group2
host_vars/
      hostname1
      hostname2
roles/
      common/
            files/
            templates/
            tasks/
            handlers/
            vars/
            defaults/
            meta/
      webservers/
            …
            …
      applicationservers/
            …
            …
      databaseservers/
            …
            …

Let’s analyze this structure.

  • site.yml – site.yml is our master YAML playbook file. It contains information about the rest of the playbook.
  • Hosts

Ansible keeps information about the hosts and groups of hosts to be managed in the hosts file. This is also called an inventory file. One can also split the hosts file into files with different environment names, i.e., instead of “hosts”, you can create two different files called “production” and “staging”. Apart from listing hosts and groups of hosts, you can also include host-specific information (SSH port, DB parameters, etc.) in a hosts file.

Here is a sample hosts file:

[webservers]
prod-web01.example.com
prod-web02.example.com
[databaseservers]
prod-db01.example.com
prod-db02.example.com
  • group_vars and host_vars

Like the hosts inventory file, you can also keep configuration variables for hosts and groups of hosts in separate configuration folders: group_vars and host_vars. These can include configuration parameters, whether at the application or operating system level, which may not be valid for all groups or hosts. This is where having multiple files can be useful: inside group_vars or host_vars, you can create one file per group or host, allowing you to define configuration parameters specific to that group or host.

A sample group_vars configuration:

---
# file: group_vars/webservers
apacheMaxRequestsPerChild: 3000
apacheMaxClients: 900

If you look at the above configuration, the apacheMaxRequestsPerChild or apacheMaxClients configuration parameters are only valid for the webservers hosts group. They don’t apply to the database or application hosts group.

If there is some configuration which you want to apply to all groups, this can be easily done:

---
# file: group_vars/all
ntp: ntp-boston.example.com
backup: backup-boston.example.com
  • Roles

As you add more and more functionality to your Ansible playbooks, they become difficult to manage as a single file. Roles allow you to write a minimal playbook that defines how a server is supposed to behave, rather than spelling out every step needed to get the server into that state.

According to Ansible documentation, “Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions – they allow you to focus more on the big picture and only dive down into the details when needed.”

To correctly use roles with Ansible, you need to create a roles directory in your working Ansible directory, and then any necessary sub-directories.

The Ansible roles structure:

roles
|__ defaults
        |__ main.yml
|__ files
|__ templates
|__ tasks
        |__ main.yml
|__ meta
        |__ main.yml
|__ vars
        |__ main.yml
|__ handlers
        |__ main.yml

There are two ways to build the roles directory structure: manually, or by using Ansible Galaxy. Ansible Galaxy is a free site for finding, reusing, and sharing community-developed roles. To scaffold a role, use the ansible-galaxy command:

# ansible-galaxy init webservers
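Besides scaffolding new roles, the ansible-galaxy command can also install roles shared by the community; the role name below is only a placeholder:

# ansible-galaxy install username.rolename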

To use roles effectively, it helps to understand what each directory in this structure is for.

  • Defaults – the defaults directory needs a file called main.yml containing the default variables used by the role, for example a default directory to deploy your configuration to, or the default port for your application:
---
webservers_dir: /var/www/html
webservers_port: 80
  • Files – the files directory contains files that need to be deployed to your hosts without any modification; they are simply copied over. This could be your web application source code or some scripts. A task referencing a file from this directory is sketched below.
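For example, a task inside a role might copy a script from the role’s files directory by name alone (the file name and destination here are hypothetical):

- name: copy the deploy script
  copy: src=deploy.sh dest=/usr/local/bin/deploy.sh mode=0755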
  • Templates – templates are like files but allow modification. You can pass configuration variables to templates, and the rendered results are placed on your hosts. Ansible uses the Jinja2 templating system to reference variables in playbooks and templates. A sketch of such a template follows the sample definition below.

Sample template definition:

template: src=httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
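As a sketch, an httpd.conf.j2 template could reference the variables defined earlier in defaults and group_vars; the directives shown here are only illustrative, not a complete Apache configuration:

# file: roles/webservers/templates/httpd.conf.j2 (hypothetical sketch)
Listen {{ webservers_port }}
DocumentRoot "{{ webservers_dir }}"
MaxClients {{ apacheMaxClients }}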
  • Tasks – each play can contain multiple tasks, and each task can perform a variety of actions. Tasks essentially execute modules with specific arguments, which can be the variables defined above. Tasks can also reference files and templates from the other role directories without providing a complete path. They are executed in order against all hosts matching a particular host pattern, and are written in main.yml or in other .yml files in the same directory that are referenced by main.yml.

Sample task definition:

---
# These tasks install httpd and the php modules.
- name: Install httpd and php packages
  yum: name={{ item }} state=present
  with_items:
    - httpd
    - php
    - php-mysql
    - git
    - libsemanage-python
    - libselinux-python
  • Meta – meta contains files that describe the environment (operating system, version, etc.), author, and licensing attributes, and that establish role dependencies. If a role depends on another role, those dependencies are declared in meta/main.yml.

Sample meta definition:

---
dependencies:
- { role: apt }
  • Vars – vars is structured like defaults, i.e., it stores variables in a YAML file. However, variables defined under vars have a higher priority than variables defined under defaults. Variables defined here can be referenced from tasks, templates, and other role files; a sketch contrasting vars with defaults is shown below.
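A minimal sketch of the difference, assuming the webservers role from earlier (the override value is hypothetical):

---
# file: roles/webservers/vars/main.yml (hypothetical sketch)
# This value wins over the webservers_port set in defaults/main.yml,
# because vars has higher priority than defaults.
webservers_port: 8080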
  • Handlers – if you refer to the sample playbook above, you’ll find a “notify” section under tasks. Notify definitions point to handlers. Handlers are also tasks, but they run only in response to particular events: they are executed once, and only if notified by a task that reported a state change. For example, “if new application code is deployed on the hosts, restart Apache; if nothing changed, don’t restart Apache.”

Sample handlers definition:

---
# file: roles/common/handlers/main.yml
- name: restart ntpd
  service: name=ntpd state=restarted

Executing Ansible Playbook

To execute an Ansible playbook, you use the ansible-playbook command.

# ansible-playbook -i hosts site.yml

If you have different Ansible inventory files for production and staging, you can execute a playbook against the production environment by referencing the production inventory file.

# ansible-playbook -i production site.yml
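A couple of commonly used ansible-playbook options are also worth knowing, for example limiting a run to one group or doing a dry run; a sketch against the production inventory from above:

# ansible-playbook -i production site.yml --limit webservers --check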

The post Going Deeper into Ansible Playbooks appeared first on Cloud Academy.

]]>
Get Started with Ansible on the Cloud https://cloudacademy.com/blog/get-started-with-ansible-on-the-cloud/ https://cloudacademy.com/blog/get-started-with-ansible-on-the-cloud/#comments Mon, 29 Dec 2014 10:09:47 +0000 https://cloudacademy.com/blog/?p=3328 Update 2019: We’ve recently developed a Learning Path, Introduction to Ansible, which will help you to get you started using Ansible to automate common IT tasks, you will learn about Configuration Management and you’ll be able to practice your knowledge on Ansible through a series of hands-on labs. Guy Hummel,...

The post Get Started with Ansible on the Cloud appeared first on Cloud Academy.

]]>
Update 2019: We’ve recently developed a Learning Path, Introduction to Ansible, which will help you get started using Ansible to automate common IT tasks. You will learn about configuration management and be able to practice your knowledge of Ansible through a series of hands-on labs. Guy Hummel, our expert cloud trainer, has also written an introductory post on What is Ansible?.


In a previous blog post, we looked at the 5 best tools for AWS deployment. One of the tools we covered was Ansible. In this blog post, we will see how to install the software and learn its basics, to help you get started with Ansible.

Get Started with Ansible: Introduction

Ansible is one of the youngest and fastest-growing configuration management, deployment, and orchestration engines. Released in 2012, it has already become one of the most popular projects on GitHub.
Some of the biggest pros of using Ansible are its agent-less architecture, its use of the SSH protocol for communication, and its use of YAML syntax for configuration files. It only requires Python packages to be installed on client nodes. The agent-less architecture removes the burden of upgrading agent packages at every new release, while the SSH protocol keeps communication between server and clients secure. Furthermore, YAML is very easy to read and understand, which makes Ansible a lot simpler to use.

Ansible is available in two versions: Ansible Tower (paid, but free for up to 10 nodes) and open-source Ansible (free).

Basic System Requirements

  • Control Machine – this acts as the Ansible server, where all the playbooks and configuration files are located. The control machine can run on most common Linux distributions, OS X, or any of the BSDs, and requires Python 2.6 or greater. Windows is not supported as a control machine.
  • Managed Node – these are the nodes managed by the control machine. In a server-client architecture, managed nodes are the clients where configuration or applications are deployed. Any operating system (Linux, Windows, or Mac) with Python 2.4 or greater installed can act as a managed node.

Glossary

  • Inventory – Ansible holds the information about nodes and groups of nodes to be managed in a simple INI-format inventory file. The default host inventory file is located at /etc/ansible/hosts. A sample inventory file looks like this:
staging.example.com
[webservers]
prod-web01.example.com
prod-web02.example.com
[databaseservers]
prod-db01.example.com
prod-db02.example.com

Apart from information about nodes and groups of nodes, the inventory file can also hold host-specific variables (e.g., SSH ports, DB parameters) and group variables (e.g., system-level parameters or the default interpreter). A sketch of both is shown below.
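As a sketch, host-specific and group variables can live right in the INI inventory; the SSH port and NTP server values below are hypothetical:

[webservers]
prod-web01.example.com ansible_ssh_port=2222
prod-web02.example.com

[webservers:vars]
ntp_server=ntp-boston.example.com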

You can also pull a dynamic inventory from an external system: plugins are available to fetch inventory from your cloud provider (AWS, GCE, Rackspace, OpenStack, etc.), LDAP, or Cobbler.

  • Modules – Ansible modules are independent pieces of code that can be used to alter and manage your infrastructure. Modules can be executed independently (in ad-hoc mode) or from Ansible playbooks (described below). At a beginner level, modules help you install, start, stop, and restart services, execute commands, copy files, and so on, on your hosts. More than 250 modules are available, covering a wide range of infrastructure tasks. As the Ansible documentation notes, modules are idempotent: they only make a change when the target state differs from the current state.

Example: the service module is the easiest way to make sure Apache is running on your web servers:

ansible webservers -m service -a "name=httpd state=started"

In this case, if the service is already running, Ansible won’t restart it. This is what module idempotence means; a quick way to see it in practice is sketched below.
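For example, running the same ad-hoc yum command twice is safe: the second run should report that nothing changed, because the package is already present (a sketch, assuming the webservers group from the inventory above):

# ansible webservers -m yum --sudo -a "name=httpd state=present"
# ansible webservers -m yum --sudo -a "name=httpd state=present"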

  • Playbooks – playbooks are the heart of Ansible. As mentioned above, modules can run in an ad-hoc way, but when you want to orchestrate your configuration and execute a series of complex commands in order, playbooks come into the picture. Playbooks are written in YAML. In a playbook, you can include multiple modules and perform tasks synchronously or asynchronously.

A playbook is broken into multiple parts:

  1. hosts: the hosts or groups you want to deploy your configuration to
  2. remote_user: the user the steps are executed as
  3. tasks: the modules to execute, with their arguments and variables
  4. handlers: actions triggered only once, and only if notified by tasks that report a state change; handlers depend on the notify block under tasks

A sample playbook is shown below:

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

We will dig deeper into Ansible playbooks in the next posts of this series.

Remote Connection Methods

As discussed above, the beauty of Ansible is that it is agent-less and relies on the SSH protocol to communicate with hosts. However, there are multiple ways you can connect to hosts or execute Ansible playbooks:

  • OpenSSH – this is the recommended way to connect to hosts. A reasonably recent control machine supports ControlPersist, Kerberos, and other options from the SSH config file. OpenSSH has been the default connection method since Ansible 1.3.
  • Paramiko – Paramiko is a Python implementation of the SSH protocol. Ansible falls back to it on older platforms such as EL6, where the installed OpenSSH does not support ControlPersist.
  • Local – local is used when you want to execute playbooks on the control machine itself. If you are making an API call to AWS or Rackspace, there is no need to run the command on a remote host, so you can run ansible-playbook in local mode, as sketched below.
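For the local case, a hedged sketch of a play that runs entirely on the control machine might look like this (the task shown is only a placeholder):

---
- hosts: localhost
  connection: local
  tasks:
    - name: placeholder task executed on the control machine
      command: echo "running locally"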

Installation Options

One of the other features making Ansible very easy to use is the variety of installation procedures.

  • OS Package Manager – this is the recommended way to install Ansible on your control machine. Ansible is packaged in the repositories of most major distributions, so check your distro’s documentation for the exact steps. If you are using RHEL, CentOS, or Amazon Linux, you will need to enable EPEL first:
  1. On Amazon Linux, enable the EPEL repository: in /etc/yum.repos.d/, edit epel.repo and change enabled=0 to enabled=1.
  2. On RHEL or CentOS, install the EPEL repository package; on RHEL, also enable the RHEL Optional repository.
  3. Install Ansible:

# yum install ansible
  • pip – you can also install Ansible using the Python package manager pip. Make sure pip is installed on your system; if not, install it using your distro’s package manager or easy_install. Once done, you can install Ansible with pip:

# pip install ansible
  • Git – you can also clone the Ansible repository and install it from source. Before installing from Git, make sure pip and the additional Python dependencies are installed. Then:

# git clone git://github.com/ansible/ansible.git --recursive
# cd ./ansible
# source ./hacking/env-setup

Building Inventory

To build the inventory, you need to put information about your managed nodes in the inventory file. For demonstration purposes, we launched two fresh Amazon Linux EC2 instances and added their private DNS names to the inventory file (/etc/ansible/hosts), as sketched below.
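The resulting inventory entries look something like this (the private DNS names below are hypothetical placeholders for your own instances):

[webservers]
ip-10-0-0-11.ec2.internal
ip-10-0-0-12.ec2.internal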

Using Ansible in Ad-Hoc Mode

As discussed above, Ansible can be used in ad-hoc mode or playbook mode. In this blog post, we will use ad-hoc mode to ping the hosts, install Apache, and start the Apache service on the web servers listed in the inventory. To connect to the hosts, you should load your key into ssh-agent, as sketched below.
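To avoid entering a key passphrase for every host, load your EC2 key pair into ssh-agent first (the key file name is hypothetical):

# ssh-agent bash
# ssh-add ~/.ssh/my-ec2-keypair.pem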

To ping the webservers group:

# ansible webservers -m ping -u ec2-user

To install Apache on the webservers group:

# ansible webservers -m yum -u ec2-user --sudo -a "name=httpd state=present"

Now start the Apache service:

# ansible webservers -m service -u ec2-user --sudo -a "name=httpd state=started"

That’s it. Apache is now installed and running on the webservers group defined in the inventory file.

In our next blog post, we will have a close look at Ansible playbooks.

The post Get Started with Ansible on the Cloud appeared first on Cloud Academy.

]]>