Updated README.md #11

Open · wants to merge 3 commits into base: main

Conversation

lolli001 (Contributor)

No description provided.

README.md Outdated
2. Pipeline Flow
Source Phase:
1. Pull Configurations:
   - The pipeline pulls Terraform configurations from the aws-image-pipeline repository.
Collaborator:

This isn't correct: the pipeline pulls configurations from other repos. The aws-image-pipeline repo is just the repo that calls the modules that build the pipelines.

Contributor Author:

The pipeline configuration is initiated by the aws-image-pipeline repository, which calls the necessary Terraform modules responsible for building the pipelines. The actual configurations are pulled from other specified repositories.
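As a rough sketch of that relationship (the module call below is illustrative; the module source and parameter names are assumptions, not the repo's actual interface):

```hcl
# Illustrative only: aws-image-pipeline acts as the caller. It wires up a
# pipeline by invoking the shared terraform-aws-image-pipeline module; the
# Packer/Ansible/Goss content itself lives in other repositories.
module "example_pipeline" {
  # Hypothetical source reference to the terraform-aws-image-pipeline module
  source = "git::https://example.com/terraform-aws-image-pipeline.git"

  # Hypothetical parameter; the actual module inputs may differ
  project_name = "example"
}
```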

README.md Outdated
Source Phase:
1. Pull Configurations:
   - The pipeline pulls Terraform configurations from the aws-image-pipeline repository.
   - It fetches Packer configurations from the linux-image-pipeline repository.
Collaborator:

There's more than just the linux-image-pipeline repo; there are also the docker-image-pipeline and windows-image-pipeline repos.

Contributor Author:

The pipeline fetches Packer configurations from multiple repositories, including linux-image-pipeline, docker-image-pipeline, and windows-image-pipeline.

README.md Outdated

Build Phase:
1. Launch the EC2 Instance:
   - Packer, defined in the build.pkr.hcl file, starts the build process by launching an EC2 instance using a pre-defined AMI.
Collaborator:

The "pre-defined" AMI is an AMI that is specified when calling the terraform-aws-image-pipeline module in aws-image-pipeline.

Contributor Author:

Packer launches an EC2 instance using an AMI specified in the terraform-aws-image-pipeline module, rather than a "predefined" AMI.

README.md Outdated
Build Phase:
1. Launch the EC2 Instance:
   - Packer, defined in the build.pkr.hcl file, starts the build process by launching an EC2 instance using a pre-defined AMI.
   - The instance type, subnet, security groups, and SSH user credentials are sourced from AWS Parameter Store.
Collaborator:

Credentials are never stored in Parameter Store; they are stored in Secrets Manager. All of the settings in Parameter Store and Secrets Manager are written there by the terraform-aws-image-pipeline module. These come from module parameters that are set in aws-image-pipeline.

Contributor Author:

Instance credentials are securely stored in AWS Secrets Manager, not in Parameter Store. Both AWS Parameter Store and Secrets Manager are populated by the terraform-aws-image-pipeline module based on the provided parameters.
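That flow could be sketched as follows (resource paths and variable names are illustrative assumptions, not the module's actual implementation):

```hcl
# Illustrative only: inside terraform-aws-image-pipeline, non-sensitive
# settings land in Parameter Store while credentials land in Secrets Manager.
# Both are populated from module inputs set in aws-image-pipeline.
resource "aws_ssm_parameter" "instance_type" {
  name  = "/image-pipeline/${var.project_name}/instance_type" # hypothetical path
  type  = "String"
  value = var.instance_type
}

resource "aws_secretsmanager_secret" "ssh_user" {
  name = "image-pipeline/${var.project_name}/ssh-user" # hypothetical name
}

resource "aws_secretsmanager_secret_version" "ssh_user" {
  secret_id     = aws_secretsmanager_secret.ssh_user.id
  secret_string = var.ssh_user
}
```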

README.md Outdated
   - Tests Passed:
     - If the Goss tests pass, no further action is required regarding the AMI. The AMI is not destroyed, and no action is taken to remove it.
   - Tests Failed or Troubleshooting Enabled:
     - If tests fail, the EC2 instance used for testing is destroyed via Terraform.
Collaborator:

the test instance is destroyed when tests pass as well

Contributor Author:

The EC2 instance used for testing is destroyed via Terraform after the tests are completed, regardless of whether they pass or fail.

• Playbooks are structured within this repository, and you should add your playbooks here if you need to update the configurations applied to the AMI.
How Are They Integrated?
• The playbooks are integrated into the pipeline through the build.pkr.hcl file in the linux-image-pipeline repository.
• Within the build.pkr.hcl, the Ansible provisioner is used to execute the playbooks during the AMI build phase:
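A minimal sketch of what that provisioner block might look like (the source and variable names are assumptions; see build.pkr.hcl in linux-image-pipeline for the real configuration):

```hcl
# Illustrative only: the Ansible provisioner runs the selected playbook
# against the temporary build instance before the AMI snapshot is taken.
build {
  sources = ["source.amazon-ebs.linux"] # hypothetical source name

  provisioner "ansible" {
    # Hypothetical variable; in practice the playbook is resolved from the
    # playbook parameter passed to the terraform-aws-image-pipeline module.
    playbook_file = var.playbook
  }
}
```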
Collaborator:

which playbooks run?

README.md Outdated

Steps to Update or Add Playbooks:
1. Modify or add your playbooks to the image-pipeline-ansible-playbooks repo.
2. Ensure the correct playbook name is referenced in AWS Parameter Store under the parameter for playbook (e.g., /image-pipeline/<project_name>/playbook).
Collaborator:

Not quite: make sure the playbook parameter passed to the terraform-aws-image-pipeline module points to the correct playbook. The module updates the value in Parameter Store.

README.md Outdated
Steps to Update or Add Playbooks:
1. Modify or add your playbooks to the image-pipeline-ansible-playbooks repo.
2. Ensure the correct playbook name is referenced in AWS Parameter Store under the parameter for playbook (e.g., /image-pipeline/<project_name>/playbook).
3. Update the rhel.tf (or equivalent) to point to the correct playbook by modifying the playbook parameter:
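For example (illustrative; the actual module block in rhel.tf may differ):

```hcl
# Illustrative only: changing the playbook parameter in rhel.tf is enough;
# the module writes the value into Parameter Store itself.
module "rhel_pipeline" {
  source   = "git::https://example.com/terraform-aws-image-pipeline.git" # hypothetical
  playbook = "your-new-playbook.yml" # hypothetical playbook file name
}
```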
Collaborator:

This is the only step that's required when pointing to new playbooks; it should never be necessary to manually modify Secrets Manager or Parameter Store.

README.md Outdated
• Goss tests are stored in the image-pipeline-goss-testing repository.
• The relevant test files are located in the goss-files/ directory, where you can add or modify .yaml files to include new tests.
How Are They Integrated?
• The Goss tests are integrated through the linux.tf file in the image-pipeline-goss-testing repository.
Collaborator:

what happens in the windows.tf file?

README.md Outdated
1. Add or update your .yaml files in the image-pipeline-goss-testing/goss-files/ directory.
2. Ensure the goss_profile in rhel.tf matches the name of the new or updated test file:
goss_profile = "your-new-test"
3. The pipeline will automatically reference this new set of tests during the test phase when creating a new AMI.
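As a sketch of how that wiring might look (the path layout and variable names are assumptions, not the repo's actual code):

```hcl
# Illustrative only: the goss_profile value set in rhel.tf selects which
# test file under goss-files/ the test phase executes.
locals {
  goss_file = "goss-files/${var.goss_profile}.yaml" # hypothetical path layout
}
```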
Collaborator:

how?

README.md Outdated
3. The pipeline will automatically reference this new set of tests during the test phase when creating a new AMI.
Consideration: Parking Goss Tests in the Ansible Playbooks
• While Goss tests are currently separated from the Ansible playbooks, we did consider parking Goss tests as part of the playbooks for simplicity. However, separating the two ensures that testing is isolated from configuration, providing clearer validation steps and better modularity.
• This separation also allows more flexibility in testing different configurations without modifying the Ansible playbooks directly.
Collaborator:

It also enables different teams to control the Goss repo and the Packer repos, which means security teams can design tests to make sure their concerns are met. Separation of concerns.

3. How to Add a New Pipeline Type (e.g., ARM Amazon Linux or ARM Windows Instance)
If you want to create a new pipeline type, such as one for ARM-based Amazon Linux or Windows instances, you need to follow these steps:
Steps to Add a New Pipeline Type:
1. Create a New Packer Configuration:
Collaborator:

is this absolutely required?

README.md Outdated
If you want to create a new pipeline type, such as one for ARM-based Amazon Linux or Windows instances, you need to follow these steps:
Steps to Add a New Pipeline Type:
1. Create a New Packer Configuration:
   - In the linux-image-pipeline repository, create a new Packer configuration file, such as build-arm-linux.pkr.hcl or build-arm-windows.pkr.hcl.
Collaborator:

This shouldn't be required. Updating the instance type and source AMI parameters when calling the terraform-aws-image-pipeline module should be sufficient. It should be rare that we actually need new Packer configs.

README.md Outdated

source_ami = "ami-arm-linux" # Replace with actual ARM Linux AMI ID
instance_type = "a1.medium" # Instance type for ARM

Collaborator:

The above settings are already configured through Terraform and the current build.pkr.hcl config; why do we need a new Packer config?

README.md Outdated

5. Push to the Repository:
   - Once you’ve made these changes, push your new files to the relevant repositories (e.g., aws-image-pipeline, linux-image-pipeline, image-pipeline-ansible-playbooks, and image-pipeline-goss-testing).
   - Ensure the new pipeline type is registered by updating the AWS CodeBuild configuration, so it triggers the new pipeline on push.
Collaborator:

You shouldn't ever need to modify CodeBuild. CodeBuild is just a shell for automation; it's the Ansible and the Goss that would need to change the most. The infrastructure involved in actually running the pipeline shouldn't have to change.

Contributor Author:

Push to the Repository:
Once you’ve made these changes, push your new files to the relevant repositories (e.g., aws-image-pipeline, linux-image-pipeline, image-pipeline-ansible-playbooks, and image-pipeline-goss-testing).
Note: You do not need to modify the AWS CodeBuild configuration when adding a new pipeline type. CodeBuild serves as a shell for automation, and the focus should be on updating Ansible playbooks and Goss tests. The infrastructure for running the pipeline remains unchanged.

@arnol377 (Collaborator) commented:

Changes from David's feedback; please review.