Ansible 4 of 9: Playbooks 1 of 2
Background
Before working with playbooks, a short five-step Terraform series provisions the two target hosts:
1 of 5. Open Cloud9
2 of 5. Create main.tf for Terraform
The ingress rule that permits all traffic uses the Cloud9 security group ID. Note that we included a provisioner to populate the SSH known_hosts file; a manual version of that scan is shown right after the file listing. Place the 'main.tf' file inside the 'ansible-tasks' folder you have been running these labs from. There is a 30-second delay on the local-exec commands, to give the instances some lead time before the provisioner attempts the SSH key scan.
main.tf
provider "aws" {
region = "us-east-1"
}
resource "aws_instance" "host01" {
ami = "ami-026ebd4cfe2c043b2"
instance_type = "t2.micro"
key_name = "tcb-ansible-key"
vpc_security_group_ids = [aws_security_group.secgroup.id]
provisioner "local-exec" {
command = "sleep 30; ssh-keyscan ${self.private_ip} >> ~/.ssh/known_hosts"
}
tags = {
Name = "host01"
}
}
resource "aws_instance" "host02" {
ami = "ami-026ebd4cfe2c043b2"
instance_type = "t2.micro"
key_name = "tcb-ansible-key"
vpc_security_group_ids = [aws_security_group.secgroup.id]
provisioner "local-exec" {
command = "sleep 30; ssh-keyscan ${self.private_ip} >> ~/.ssh/known_hosts"
}
tags = {
Name = "host02"
}
}
resource "aws_security_group" "secgroup" {
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "-1"
security_groups = ["sg-05b2e6f0305ae4271"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "host01_private_ip" {
value = aws_instance.host01.private_ip
}
output "host02_private_ip" {
value = aws_instance.host02.private_ip
}
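If you want to see what the provisioner is doing, you can run the same scan by hand once an instance is up. This is only a sketch: the address below is the example host01 IP used later in the inventory, so substitute your own. ssh-keygen -F searches known_hosts for a matching entry and confirms the scan worked.
# Example only: replace 172.31.21.160 with one of your instances' private IPs
ssh-keyscan 172.31.21.160 >> ~/.ssh/known_hosts
# Verify the host key was recorded in ~/.ssh/known_hosts
ssh-keygen -F 172.31.21.160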
3 of 5. Run terraform
Before running terraform, validate that the SSH keys for the new hosts do not yet exist in the "known_hosts" file.
tail ~/.ssh/known_hosts
terraform init
terraform plan
terraform apply
tail ~/.ssh/known_hosts
aws ec2 describe-instances \
--query 'Reservations[*].Instances[*].{Instance:InstanceId,Name:Tags[?Key==`Name`]|[0].Value,PrivateIP:PrivateIpAddress,State:State.Name}' \
--output table
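Since main.tf declares outputs for both private IPs, a quick sketch using terraform output can also retrieve the addresses you will need for the inventory in the next step:
# Read back the private IPs declared as outputs in main.tf
terraform output host01_private_ip
terraform output host02_private_ip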
4 of 5. Make sure the inventory file 'hosts' is updated with the new host IP addresses and the correct user name.
hosts
host01 ansible_host=172.31.21.160 ansible_user=ec2-user
host02 ansible_host=172.31.28.32 ansible_user=ec2-user
[all:vars]
ansible_ssh_private_key_file=/home/ec2-user/environment/ansible-tasks/tcb-ansible-key.pem
[webservers]
host01
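As a sanity check before pinging, you can have Ansible parse the inventory itself. This sketch assumes the 'hosts' file sits in the current ansible-tasks directory, hence the explicit -i flag:
# Show the group structure and the full parsed inventory
ansible-inventory -i hosts --graph
ansible-inventory -i hosts --list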
5 of 5. Check that you can reach the hosts via Ansible.
Also, check that the .terraform/hosts file is updated with the NEW IP addresses.
ansible all -m ping
cat .terraform/hosts
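When you have finished the playbook labs, the two instances and the security group can be torn down with terraform destroy (see the destroy reference below), run from the same folder that holds main.tf and its state file. Removing the stale known_hosts entries afterwards is optional; the IPs shown are the example addresses from the inventory above.
terraform destroy
# Optional: drop the old host keys recorded by the provisioner
ssh-keygen -R 172.31.21.160
ssh-keygen -R 172.31.28.32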
References
YAML Ain’t Markup Language (YAML™) version 1.2
Ansible playbooks - Ansible Documentation
ansible.builtin.yum module – Manages packages with the yum package manager
Command: init | Terraform | HashiCorp Developer
Command: plan | Terraform | HashiCorp Developer
Command: apply | Terraform | HashiCorp Developer
Command: destroy | Terraform | HashiCorp Developer
Provisioners | Terraform | HashiCorp Developer
Verify and Keep Control of Host Keys with Terraform
Getting started with systemctl | Enable Sysadmin
How to use systemctl to manage Linux services | Enable Sysadmin