# aws-image-pipeline
Terraform Workspace for creating and managing AWS Image Pipelines.

## RHEL Pipeline: End-to-End Flow Breakdown
This README describes the end-to-end process for the RHEL pipeline, covering the source, build, and test phases, and how the pipeline integrates with AWS services, Packer, Ansible, and Goss.

## Overview
The RHEL pipeline is designed to automate the creation, configuration, and testing of an Amazon Machine Image (AMI) for Red Hat Enterprise Linux (RHEL) instances. It utilizes multiple repositories and tools such as Packer, Terraform, Ansible, and Goss to build and validate images. Below is a breakdown of each step and the repositories involved.
---

## 1. Repositories Involved
- aws-image-pipeline: Provides Terraform configurations and defines the infrastructure pipeline (e.g., EC2 instances, security groups).
- linux-image-pipeline, docker-image-pipeline, and windows-image-pipeline: Contain the Packer configurations responsible for building the images.
- image-pipeline-ansible-playbooks: Hosts the Ansible playbooks that are used to configure the instances during the Packer build.
- image-pipeline-goss-testing: Contains Goss test definitions used to validate the AMI configurations during the test phase.

---
## 2. Pipeline Flow

### Source Phase
1. Pull Configurations:
   - The pipeline is initiated by the aws-image-pipeline repository, which calls the Terraform modules responsible for building the pipelines; the actual configurations are pulled from the other repositories.
   - Packer configurations are fetched from multiple repositories, including linux-image-pipeline, docker-image-pipeline, and windows-image-pipeline.
   - The Ansible playbooks are retrieved from image-pipeline-ansible-playbooks.
   - The Goss tests are pulled from image-pipeline-goss-testing.
### Build Phase
1. Launch the EC2 Instance:
   - Packer, defined in the build.pkr.hcl file, starts the build process by launching an EC2 instance. The source AMI is the one specified when calling the terraform-aws-image-pipeline module in aws-image-pipeline.
   - The instance type, subnet, and security groups are sourced from AWS Parameter Store. Credentials are stored in AWS Secrets Manager, never in Parameter Store. All of these values are written by the terraform-aws-image-pipeline module from module parameters set in aws-image-pipeline.
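As a sketch of how these values reach Packer, build.pkr.hcl can read them with the Amazon plugin's data sources. The parameter and secret paths below are assumptions for illustration; the real names are written by the terraform-aws-image-pipeline module:

```hcl
# Hypothetical parameter/secret names -- the actual paths are managed by the
# terraform-aws-image-pipeline module.
data "amazon-parameterstore" "instance_type" {
  name = "/image-pipeline/${var.project_name}/instance_type"
}

data "amazon-secretsmanager" "ssh_password" {
  name = "/image-pipeline/${var.project_name}/ssh_password"
}
```

The build block then references these as `data.amazon-parameterstore.instance_type.value` and so on, which is why no build settings ever need to be edited in Packer directly.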
2. Instance Configuration:
   - Ansible playbooks from the image-pipeline-ansible-playbooks repo are run to configure the instance.
   - Example configurations include installing required packages, setting up services, and applying security hardening measures.
3. Capture AMI:
   - After the instance is successfully configured by Ansible, Packer captures the configured instance as an AMI.
   - The AMI ID is stored in tf_ami_id.txt and uploaded to AWS Parameter Store for later use.
### Test Phase
1. Run Goss Tests:
   - Once the AMI is built, Goss tests defined in the image-pipeline-goss-testing repo are run on the newly created AMI to validate the configuration.
   - Tests check whether all services are running, the necessary packages are installed, and the configurations meet security compliance.
2. Post-Build Steps:
   - The EC2 instance used for testing is destroyed via Terraform after the tests complete, regardless of whether they pass or fail:

     ```shell
     false || /bin/terraform destroy -var project_name=rhel-image-pipeline-demo -var goss_directory=${CODEBUILD_SRC_DIR_SourceGossOutput} -auto-approve
     ```

   - Tests Passed: no further action is taken regarding the AMI; it is kept and not removed.
   - Tests Failed or Troubleshooting Enabled: the AMI is deregistered from AWS:

     ```shell
     false || test -f tf_ami_id.txt && aws ec2 deregister-image --image-id `cat tf_ami_id.txt` --region $AWS_REGION || echo "Tests passed, no AMI to deregister"
     ```

   - The AMI is not immediately deleted but is deregistered using aws ec2 deregister-image, which removes it from being available for future EC2 launches.
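The `false || test -f … && … || …` chain can be exercised locally by stubbing the AWS CLI with a shell function. This sketch (the AMI ID is hypothetical; tf_ami_id.txt is the file the pipeline writes) shows which branch runs when the ID file exists:

```shell
#!/bin/sh
# Stub standing in for the real AWS CLI, so the chain can run without AWS.
aws() { echo "deregistered $4"; }

# Hypothetical AMI ID, written where the pipeline stores the captured ID.
echo "ami-0abc1234" > tf_ami_id.txt

# Same && / || chain as the pipeline: deregister when the ID file exists,
# otherwise report that there is nothing to deregister.
result=$(false || test -f tf_ami_id.txt && aws ec2 deregister-image --image-id "$(cat tf_ami_id.txt)" || echo "Tests passed, no AMI to deregister")
echo "$result"

rm -f tf_ami_id.txt
```

Because `&&` and `||` associate left to right, the `echo` fallback only fires when the ID file is missing or the deregister call itself fails.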
---

## Additional Information for Pipeline Management
This section addresses questions that a consumer of these repositories might have when interacting with or modifying the pipeline: how to update Ansible playbooks, manage Goss tests, and add new pipeline types.

---

## 1. How to Add/Change/Update Ansible Playbooks
If you need to modify or add Ansible playbooks that run during the AMI build process, follow these steps:

### Where Do Playbooks Go?
- Playbooks are stored in the image-pipeline-ansible-playbooks repository.
- Add your playbooks to this repository if you need to update the configurations applied to the AMI.
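For orientation, a playbook in this repository might look like the following minimal sketch. The file name, role name, and package are hypothetical examples, not actual repository contents:

```yaml
# your-new-playbook.yaml (hypothetical; referenced via the module's playbook parameter)
- name: Configure RHEL image
  hosts: all
  become: true
  roles:
    - security-hardening   # hypothetical role under roles/
  tasks:
    - name: Ensure chrony is installed
      ansible.builtin.package:
        name: chrony
        state: present
```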
### How Are They Integrated?
- The playbooks are integrated into the pipeline through the build.pkr.hcl file in the linux-image-pipeline repository.
- The playbook that actually runs is the one named by the playbook parameter passed to the terraform-aws-image-pipeline module; the module writes that name to Parameter Store, and the Ansible provisioner in build.pkr.hcl reads it during the AMI build phase:

```hcl
provisioner "ansible" {
  command          = "/root/.local/bin/ansible-playbook"
  playbook_file    = "${var.ansible_dir}/${data.amazon-parameterstore.playbook.value}"
  roles_path       = "${var.ansible_dir}/roles"
  ansible_env_vars = ["ANSIBLE_STDOUT_CALLBACK=yaml", "ANSIBLE_NOCOLOR=True"]
  user             = data.amazon-parameterstore.ssh_user.value
}
```
### Steps to Update or Add Playbooks
1. Modify or add your playbooks in the image-pipeline-ansible-playbooks repo.
2. Update rhel.tf (or equivalent) in aws-image-pipeline so that the playbook parameter passed to the terraform-aws-image-pipeline module points to the correct playbook:

   playbook = "your-new-playbook.yaml"

   This is the only step required to point at a new playbook: the module updates the value in Parameter Store for you, and you should never need to modify Parameter Store or Secrets Manager manually.
---

## 2. How to Add/Change/Update Goss Tests
If you need to add or update the Goss tests used to validate the AMI, follow these steps:

### Where Do Goss Tests Go?
- Goss tests are stored in the image-pipeline-goss-testing repository.
- The relevant test files are located in the goss-files/ directory, where you can add or modify .yaml files to include new tests.
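A test file under goss-files/ might look like this minimal sketch; the profile directory and the specific checks are hypothetical examples of standard Goss resource types:

```yaml
# goss-files/rhel-base-test/goss.yaml (hypothetical layout)
package:
  openssh-server:
    installed: true
service:
  sshd:
    enabled: true
    running: true
port:
  tcp:22:
    listening: true
```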
### How Are They Integrated?
- The Goss tests are integrated through the linux.tf file in the image-pipeline-goss-testing repository; a parallel windows.tf performs the same integration for Windows pipelines.
- The goss_profile is passed in as a variable from rhel.tf in the aws-image-pipeline repository and is tied to a specific set of Goss tests in the goss-files/ directory:

  goss_profile = "rhel-base-test"
### Steps to Update or Add Goss Tests
1. Add or update your .yaml files in the image-pipeline-goss-testing/goss-files/ directory.
2. Ensure the goss_profile in rhel.tf matches the name of the new or updated tests:

   goss_profile = "your-new-test"

3. The pipeline picks up the new tests automatically: the goss_profile value is passed through the terraform-aws-image-pipeline module, so the test phase runs the matching tests from goss-files/ the next time an AMI is created.
### Consideration: Parking Goss Tests in the Ansible Playbooks
- While Goss tests are currently separated from the Ansible playbooks, we did consider parking Goss tests as part of the playbooks for simplicity. However, separating the two ensures that testing is isolated from configuration, providing clearer validation steps and better modularity.
- This separation also allows more flexibility in testing different configurations without modifying the Ansible playbooks directly.
- It also enables separation of concerns across teams: different teams can own the Goss repo and the Packer repos, so, for example, a security team can design tests to make sure its concerns are met.
---

## 3. How to Add a New Pipeline Type (e.g., ARM Amazon Linux or ARM Windows Instance)
If you want to create a new pipeline type, such as one for ARM-based Amazon Linux or Windows instances, follow these steps:

### Steps to Add a New Pipeline Type
1. Reuse the Existing Packer Configuration:
   - A new Packer configuration file should rarely be needed. The source AMI and instance type are already configured through Terraform and the existing build.pkr.hcl, so updating the source_ami and instance_type parameters when calling the terraform-aws-image-pipeline module is usually sufficient (e.g., an ARM AMI with an a1.medium instance type).
   - Only in the rare case that the build process itself must differ should you add a new Packer configuration (such as build-arm-windows.pkr.hcl) to the relevant image pipeline repository.
2. Update the Ansible Playbooks (if necessary):
   - Update the Ansible playbooks in the image-pipeline-ansible-playbooks repo to account for any differences in ARM architecture, if applicable.
3. Update the Goss Tests (if necessary):
   - Modify or create Goss tests in the image-pipeline-goss-testing repo that validate ARM-specific configurations, such as hardware compatibility or package installations.
4. Update the Terraform Configuration:
   - In the aws-image-pipeline repository, create a new Terraform configuration file, such as arm-linux.tf or arm-windows.tf, and set the required parameters (e.g., instance type, source AMI, playbook, Goss profile):

   ```hcl
   module "arm-linux" {
     source        = "HappyPathway/image-pipeline/aws"
     project_name  = "arm-linux-image-pipeline"
     playbook      = "arm-linux-playbook.yaml"
     instance_type = "a1.medium"
     source_ami    = "ami-arm-linux" # Replace with actual ARM Linux AMI ID
     goss_profile  = "arm-linux-test"
   }
   ```

5. Push to the Repositories:
   - Once you've made these changes, push them to the relevant repositories (e.g., aws-image-pipeline, image-pipeline-ansible-playbooks, and image-pipeline-goss-testing).
   - You should never need to modify the AWS CodeBuild configuration: CodeBuild is just a shell for the automation, and the infrastructure that runs the pipeline does not change. It is the Ansible playbooks and Goss tests that carry the pipeline-specific changes.