Manually configuring a webserver inside a Docker container is tedious, even in a testing environment. To avoid manual tasks and errors, automation comes to the rescue. In this story, I am going to automate the process of setting up a containerized webserver on the cloud with the help of Ansible.
We can configure an EC2 instance in three ways:
- AWS Management Console
- AWS CLI
- AWS API (SDKs)
On all of these platforms we have to authenticate ourselves before going any further. In the AWS Management Console we have to sign in; with the AWS CLI and the API we have to pass AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, for example via environment variables.
While writing the playbook in Ansible we have to pass these details as well, so the best way is to keep them in a separate credentials file and encrypt it with ansible-vault. For this practical I am using Ansible Vault to store the credentials.
What are the benefits of storing credentials in ansible-vault?
- Your credentials are secured in a single file, and because it is encrypted we can safely share it with our team.
- We can reference its variables in the playbook wherever authentication is required.
For this practical, I have already created an image named yash202000/webserver-php with HTTPD and PHP configured.
So let’s jump into the practical.
To create the vault file, use the command below.
ansible-vault create cred.yml
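As a minimal sketch, the vault file could look like the following before encryption. The variable names access_key and secret_key are assumptions based on how they are referenced in the playbook later, and the values shown are placeholders:

```
# cred.yml — kept encrypted at rest with ansible-vault
access_key: AKIAXXXXXXXXXXXXXXXX
secret_key: xXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxX
```

To change it afterwards, use ansible-vault edit cred.yml; the file never sits on disk in plain text.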
For this, first configure the ansible.cfg file.
In the defaults section, we tell Ansible to use the IPs listed in the ip.txt file as its inventory, to log in over SSH as the remote user ec2-user, and to authenticate with the c19015-1.pem key file. It also sets the path where I keep my roles, in case the playbook needs to execute any.
In the privilege_escalation section, I have given the remote user root power via the sudo method.
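Putting the two sections together, a minimal ansible.cfg matching this description might look like the sketch below. The roles path and the host_key_checking line are assumptions I have added for a non-interactive run:

```
[defaults]
inventory         = ip.txt        # file holding the managed IPs
remote_user       = ec2-user      # default SSH login user on the instances
private_key_file  = c19015-1.pem  # key file for SSH authentication
roles_path        = /root/roles   # where the roles live (assumed path)
host_key_checking = false         # skip the interactive SSH fingerprint prompt

[privilege_escalation]
become        = true
become_method = sudo
become_user   = root
```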
With that, we are ready for the code. Let’s take a look at it.
Let us understand it step by step.
- The first play is executed on localhost and uses the cred.yml file for its variables. (This file is encrypted with ansible-vault.)
- The first task launches an EC2 instance on the AWS cloud. For this, the amazon.aws.ec2 module uses the AWS API to connect our system to the cloud. Since we have to supply credentials, we pass the access_key and secret_key variables along with the details essential for launching an instance.
- In the second task, the wait_for module holds execution until an SSH connection to the launched instance can be established.
- The ec2_instance_info module fetches information about all instances, including the newly launched one, and stores it in a variable named x.
- The debug module is used to verify whether the information was actually stored in x.
- The most critical part of the playbook is updating the existing inventory; for this we use the blockinfile module along with some conditions.
Here a Jinja2 for loop iterates over the instances stored in x, placing each one in the variable i. If an instance contains network_interfaces information, its IP is written to the file as a new entry; the next two lines close the if statement and the for loop.
Note: the entry added by the blockinfile module is not used by the currently running playbook; it is there for further plays / future runs.
- To use this instance for further configuration, add_host adds it as a temporary in-memory host under the group name dockerhosts.
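The steps above can be sketched as one play. The region, AMI ID, key name, and security group are placeholders of my own choosing, and the registered variable names simply follow the description above:

```
- hosts: localhost
  vars_files:
    - cred.yml                        # encrypted with ansible-vault
  tasks:
    - name: Launch an EC2 instance
      amazon.aws.ec2:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: ap-south-1            # placeholder region
        image: ami-0447a12f28fddb066  # placeholder AMI ID
        instance_type: t2.micro
        key_name: c19015-1            # placeholder key pair name
        group: allow_all              # placeholder security group
        count: 1
        wait: yes
      register: ec2

    - name: Hold until SSH is reachable on the new instance
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        state: started
      loop: "{{ ec2.instances }}"

    - name: Fetch information about all instances
      ec2_instance_info:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: ap-south-1
      register: x

    - name: Verify what was collected
      debug:
        var: x

    - name: Append instance IPs to the static inventory
      blockinfile:
        path: ip.txt
        block: |
          {% for i in x.instances %}
          {% if i.network_interfaces is defined %}
          {{ i.public_ip_address }}
          {% endif %}
          {% endfor %}

    - name: Register the instance in the in-memory inventory
      add_host:
        name: "{{ item.public_ip }}"
        groups: dockerhosts
      loop: "{{ ec2.instances }}"
```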
In the next play, we target dockerhosts for further configuration.
- yum_repository configures yum for docker-ce (the community edition of Docker).
- The next step is to install Docker, but if we use the package module it cannot resolve some conflicting dependencies, so a better way is to use the command module with the --nobest option.
- Next, we start the service with the service module.
- The docker_image module needs the Docker SDK for Python (which in turn needs the requests module), so to resolve these dependencies we install Python and, with the help of pip, ensure the library is available before the play fetches the image.
- docker_image then pulls the image named yash202000/webserver-php.
- The copy module copies the index.html file to the remote server.
- Once again we use the command module, this time to launch a container from the webserver-php image: with docker_container the container would stop after detaching, meaning it would not keep running in the background, so to overcome this we use the command module.
- Last but not least, we copy the content into the Docker container, so we use the command module with the docker cp command.
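The second play can be sketched the same way. The container name, published port, and document root are my assumptions for an HTTPD-based image:

```
- hosts: dockerhosts
  tasks:
    - name: Configure the docker-ce yum repository
      yum_repository:
        name: docker-ce
        description: Docker CE stable
        baseurl: https://download.docker.com/linux/centos/7/x86_64/stable/
        gpgcheck: no

    - name: Install docker with --nobest (package module cannot pass this flag)
      command: yum install docker-ce -y --nobest

    - name: Start and enable the docker service
      service:
        name: docker
        state: started
        enabled: yes

    - name: Install python3 so pip is available
      package:
        name: python3
        state: present

    - name: Install the Docker SDK for Python (pulls in requests)
      pip:
        name: docker

    - name: Pull the webserver image
      docker_image:
        name: yash202000/webserver-php
        source: pull

    - name: Copy index.html to the remote host
      copy:
        src: index.html
        dest: /root/index.html

    - name: Launch the container detached (name and port are assumptions)
      command: docker run -dit --name websrv -p 8080:80 yash202000/webserver-php

    - name: Copy the page into the container's document root (assumed path)
      command: docker cp /root/index.html websrv:/var/www/html/index.html
```

One trade-off of this approach: the command tasks are not idempotent, so rerunning the play will fail on docker run with a name conflict unless the old container is removed first.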
So let’s run the playbook.
--ask-vault-pass is used to supply the password that unlocks our encrypted file.
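Assuming the playbook is saved as task.yml (the filename is my assumption), the invocation looks like:

```
ansible-playbook --ask-vault-pass task.yml
```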
Our play ran successfully; let’s check the output on IP 188.8.131.52.
Yeah, we finally did it: we launched Docker on the EC2 instance and set up the webserver inside it.
Check out the code here.
Today is the world of automation. All content is hosted on webservers, and millions of clients use these services. To serve such a huge number of clients, a company needs many webservers running, but hosting them directly on bare metal is costly; so, to decrease cost and increase availability, we use Docker, and to automate the whole process we used Ansible.
Thanks for reading… If any questions please feel free to leave a comment below and Do connect with me on these platforms.
- Mail: firstname.lastname@example.org
- LinkedIn: https://www.linkedin.com/in/yash-panchwatkar/