diff --git a/DEPLOYMENT.md b/DEPLOYMENT.md index 91d7c3c6..8e3e3af3 100644 --- a/DEPLOYMENT.md +++ b/DEPLOYMENT.md @@ -1,13 +1,13 @@ # Deployment Guide: Service Catalog Repository Generator -This guide walks through deploying the Lambda function that Service Catalog will invoke to create GitHub repositories. +This guide walks through deploying the Lambda function that Service Catalog will invoke (via CloudFormation Custom Resource) to create GitHub repositories. ## Overview -The deployment has three main steps: +The deployment has three main stages: 1. **Create ECR Repository** - Where the Lambda container image will be stored 2. **Build & Push Container** - Build the Lambda code into a container and push to ECR -3. **Deploy Lambda Function** - Deploy the Lambda and EventBridge integration using Terraform +3. **Deploy Lambda Function** - Deploy the Lambda function using Terraform ## Prerequisites @@ -19,7 +19,7 @@ The deployment has three main steps: - `csvd-template-automation-builds` (for build artifacts) - `image-pipeline-assets-dev` (for Packer binaries) -## Step 1: Create ECR Repository +## Step 1: Create ECR Repository (Infrastructure) ```bash cd /home/a/arnol377/git/lambda-template-repo-generator @@ -34,7 +34,7 @@ terraform apply **Output**: ECR repository at `229685449397.dkr.ecr.us-gov-west-1.amazonaws.com/service-catalog-repo-generator/lambda` -## Step 2: Build and Push Lambda Container +## Step 2: Build and Push Lambda Container (Pipeline) ```bash cd /home/a/arnol377/git/lambda-template-repo-generator @@ -57,7 +57,7 @@ aws ecr describe-images \ --region us-gov-west-1 ``` -## Step 3: Deploy Lambda Function +## Step 3: Deploy Lambda Function (Application) ### 3a. 
Configure Deployment Variables @@ -86,7 +86,7 @@ terraform init # Review the plan terraform plan -# Deploy the Lambda function and EventBridge rule +# Deploy the Lambda function and CloudFormation permissions terraform apply ``` @@ -94,98 +94,35 @@ This creates: - ✅ Lambda function using your container image - ✅ IAM roles and policies - ✅ SSM parameters for configuration -- ✅ EventBridge rule to trigger from Service Catalog -- ✅ Lambda permissions for EventBridge -- ✅ API Gateway (if you want direct invocation as well) +- ✅ Lambda permissions for CloudFormation to invoke it (as Custom Resource) + +**Important**: Note the `lambda_function_arn` output. You will need this for your Service Catalog CloudFormation template. ## Step 4: Create Service Catalog Product -Now you need to create a Service Catalog product that triggers your Lambda. You have two options: +### Option A: CloudFormation with Lambda-backed Custom Resource is replaced +Now you need to create a Service Catalog product that uses the Lambda as a CloudFormation Custom Resource. -### Option A: CloudFormation with Lambda-backed Custom Resource +### Use the Provided Template -Create a CloudFormation template: +A complete CloudFormation template is available at `cloudformation-template.yaml`. -```yaml -AWSTemplateFormatVersion: '2010-09-09' -Description: 'Service Catalog Product: Create GitHub Repository' - -Parameters: - ProjectName: - Type: String - Description: Name of the repository to create - OwningTeam: - Type: String - Description: GitHub team that should own the repository - Default: tf-module-admins - Environment: - Type: String - Description: Environment (dev, staging, prod) - AllowedValues: - - development - - staging - - production - AwsRegion: - Type: String - Description: AWS region for the project - Default: us-gov-west-1 +1. **Update the Template**: Ensure the `ServiceToken` points to your deployed Lambda ARN. +2. **Upload to Service Catalog**: Create a new product of type "CloudFormation Template" and upload this file.
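The product above hands its parameters to the Lambda as a Custom Resource, and CloudFormation then waits for the Lambda to report back through the pre-signed `ResponseURL` in the event (see Response Handling later in this guide). A minimal sketch of that response protocol, with illustrative function names rather than the shipped handler:

```python
import json
import urllib.request


def build_cfn_response(event, status, data=None, reason=None, physical_id=None):
    """Assemble the response document CloudFormation expects from a Custom Resource."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason or "See CloudWatch Logs for details",
        "PhysicalResourceId": physical_id
        or event.get("PhysicalResourceId", "repo-generator-resource"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},  # surfaces as !GetAtt values, e.g. repository_url
    }


def send_cfn_response(event, response):
    """PUT the response to the pre-signed S3 URL CloudFormation provided."""
    body = json.dumps(response).encode()
    req = urllib.request.Request(
        event["ResponseURL"],
        data=body,
        method="PUT",
        # The pre-signed URL is signed without a content type, so leave it empty
        headers={"Content-Type": ""},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Until this PUT lands, the stack stays in `CREATE_IN_PROGRESS`, which is why a crashed Lambda leaves the stack hanging.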
+ +### Custom Resource Definition +If you are writing your own template, the resource definition looks like this: + +```yaml Resources: TriggerLambda: Type: Custom::RepositoryCreator Properties: - ServiceToken: !Sub 'arn:aws-us-gov:lambda:us-gov-west-1:229685449397:function:service-catalog-repo-gen-template-automation' + ServiceToken: !Sub 'arn:aws-us-gov:lambda:${AWS::Region}:${AWS::AccountId}:function:service-catalog-repo-gen-template-automation' ProjectName: !Ref ProjectName OwningTeam: !Ref OwningTeam Environment: !Ref Environment - AwsRegion: !Ref AwsRegion - -Outputs: - RepositoryUrl: - Description: URL of the created repository - Value: !GetAtt TriggerLambda.repository_url - PullRequestUrl: - Description: URL of the configuration pull request - Value: !GetAtt TriggerLambda.pull_request_url -``` - -### Option B: EventBridge Pattern (Already configured!) - -The EventBridge rule created in Step 3 already listens for Service Catalog events. Simply: - -1. Create a CloudFormation template that provisions **any** resource -2. Add your parameters (project_name, owning_team, etc.) -3. When Service Catalog provisions this product, it emits an event -4. Your EventBridge rule catches it and triggers the Lambda - -**Simple CloudFormation for this:** - -```yaml -AWSTemplateFormatVersion: '2010-09-09' -Description: 'Trigger repository creation via Service Catalog' - -Parameters: - ProjectName: - Type: String - Description: Name of the repository to create - OwningTeam: - Type: String - Description: GitHub team that should own the repository - Default: tf-module-admins - -Resources: - # This is just a placeholder - the real work is done by the Lambda - DummyParameter: - Type: AWS::SSM::Parameter - Properties: - Name: !Sub '/service-catalog/repositories/${ProjectName}' - Type: String - Value: !Ref ProjectName - Description: !Sub 'Service Catalog provisioned repository: ${ProjectName}' - -Outputs: - Message: - Value: !Sub 'Repository creation triggered for ${ProjectName}. 
Check GitHub/Lambda logs for details.' + # Add any other parameters you need ``` ## Step 5: Test the Integration @@ -193,10 +130,10 @@ Outputs: ### Manual Lambda Test ```bash -# Invoke Lambda directly with a test event +# Invoke Lambda directly with a Custom Resource test event aws lambda invoke \ --function-name service-catalog-repo-gen-template-automation \ - --payload file://events/service-catalog-event.json \ + --payload file://events/cloudformation-create-event.json \ --region us-gov-west-1 \ response.json @@ -206,10 +143,9 @@ cat response.json ### Service Catalog Test 1. Go to AWS Service Catalog console -2. Create a portfolio and add your product -3. Provision the product with test parameters -4. Watch CloudWatch Logs for the Lambda execution -5. Check GitHub for the new repository +2. Provision the product you created in Step 4 +3. Wait for the stack to complete (the Lambda runs synchronously) +4. Check the "Outputs" tab of the provisioned product for the Repository URL. ## Monitoring and Troubleshooting @@ -222,43 +158,13 @@ aws logs tail /aws/lambda/service-catalog-repo-gen-template-automation \ --region us-gov-west-1 ``` -### Verify EventBridge Rule +### Response Handling -```bash -# Check if the rule is enabled -aws events describe-rule \ - --name service-catalog-repo-provisioning \ - --region us-gov-west-1 -``` +The Lambda must send a response back to CloudFormation (using the pre-signed URL in the event) for the stack to proceed. If the Lambda times out or crashes before sending a response, the stack will hang until the timeout (usually 1 hour). -### Test Event Pattern - -```bash -# Send a test event to EventBridge -aws events put-events \ - --entries file://test-event.json \ - --region us-gov-west-1 -``` - -## Updating the Lambda - -When you make code changes: - -```bash -# 1. Rebuild the container -cd /home/a/arnol377/git/lambda-template-repo-generator -packer-pipeline build --config config_packer.hcl - -# 2. 
Update Lambda to use the new image -cd deploy -terraform apply -target=module.service_catalog_repo_generator.aws_lambda_function.this - -# Or force Lambda to pull the latest image -aws lambda update-function-code \ - --function-name service-catalog-repo-gen-template-automation \ - --image-uri 229685449397.dkr.ecr.us-gov-west-1.amazonaws.com/service-catalog-repo-generator/lambda:latest \ - --region us-gov-west-1 -``` +If your stack is stuck: +1. Check Lambda logs for errors +2. Manually signal failure if needed using `curl` to the ResponseURL found in the CloudWatch logs. ## Architecture Diagram @@ -267,22 +173,20 @@ User → Service Catalog UI ↓ Provisions Product ↓ - CloudFormation runs + CloudFormation Stack Creates ↓ - EventBridge emits event + Custom Resource Invokes Lambda ↓ - Lambda Function triggered + Lambda Creates GitHub Repository ↓ - Creates GitHub Repository + Lambda Responds SUCCESS to CloudFormation ↓ - Writes config.json - ↓ - Opens Pull Request + Stack Completes with Outputs ``` ## Next Steps 1. Create Service Catalog portfolio and products 2. Set up proper IAM permissions for users to provision products -3. Configure SNS notifications for repository creation -4. Add additional template repositories for different project types +3. Add constraints to Service Catalog product versions + diff --git a/MIGRATION.md b/MIGRATION.md deleted file mode 100644 index 4a890113..00000000 --- a/MIGRATION.md +++ /dev/null @@ -1,204 +0,0 @@ -# Service Catalog Migration Summary - -## Overview - -Successfully migrated the template-automation-lambda codebase to lambda-template-repo-generator with exclusive support for AWS Service Catalog events. - -## Changes Made - -### 1. 
Code Migration -- **Copied** all code from `/home/a/arnol377/git/template-automation-lambda` to `/home/a/arnol377/git/lambda-template-repo-generator` -- Preserved directory structure including: - - `template_automation/` - Main Python package - - `tests/` - Test suites - - `scripts/` - Utility scripts - - `events/` - Test events - - `docs/` - Documentation - - Infrastructure files (Dockerfile, Makefile, Terraform, etc.) - -### 2. Core Lambda Handler Changes (`template_automation/app.py`) - -#### Event Structure Parsing -**Before:** -```python -event_body = event.get('body', {}) -template_input = TemplateInput(**event_body) -``` - -**After:** -```python -if 'detail' not in event: - raise ValueError("Event missing 'detail' field - not a valid Service Catalog event") - -detail = event['detail'] -provisioning_params = detail['provisioningParameters'] -service_catalog_input = ServiceCatalogInput(**provisioning_params) -template_settings = service_catalog_input.to_template_settings() -``` - -#### Input Model -**Before:** `TemplateInput` with explicit `template_settings` field - -**After:** `ServiceCatalogInput` with dynamic field extraction -```python -class ServiceCatalogInput(BaseModel): - project_name: str - owning_team: Optional[str] = "tf-module-admins" - - model_config = {"extra": "allow"} # Accept any Service Catalog parameters - - def to_template_settings(self) -> Dict[str, Any]: - # Converts all extra fields to attrs/tags structure -``` - -#### Pull Request Messages -**Before:** Generic "Initialize repository from template" - -**After:** Service Catalog-specific messages with provisioning details -```python -title=f"Initialize {service_catalog_input.project_name} from Service Catalog" -description=f"...from Service Catalog provisioning.\n\nProvisioned Product: {detail.get('provisionedProductName')}" -``` - -#### Team Permissions -**Before:** Attempted for all providers - -**After:** Only for GitHub provider -```python -if service_catalog_input.owning_team and 
provider_type == "GitHubProvider": - provider.set_team_permission(...) -``` - -### 3. New Test Event Format - -Created `events/service-catalog-event.json` with EventBridge structure: -```json -{ - "version": "0", - "detail-type": "Service Catalog Product Provisioning", - "source": "aws.servicecatalog", - "detail": { - "eventName": "ProvisionProduct", - "provisioningParameters": { - "project_name": "...", - "owning_team": "...", - ... - } - } -} -``` - -### 4. Documentation - -- **README.md**: Completely rewritten for Service Catalog focus - - Architecture diagram showing Service Catalog → EventBridge → Lambda flow - - Service Catalog-specific event structure documentation - - Provisioning parameters specification - - EventBridge rule configuration examples - -- **Test Script**: Created `test_service_catalog.py` to validate event parsing - -### 5. Configuration File Output - -The Lambda now creates `config.json` in repositories with this structure: -```json -{ - "attrs": { - "aws_region": "...", - "environment": "...", - ... all other Service Catalog parameters - }, - "tags": { - ... if tags parameter provided - } -} -``` - -## Backwards Compatibility - -**NONE** - This is a clean break from the original implementation: - -- ❌ No support for direct Lambda invocation -- ❌ No support for API Gateway events -- ❌ No support for `template_settings` input format -- ❌ No support for `trigger_init_workflow` flag -- ✅ **ONLY** supports EventBridge events from AWS Service Catalog - -## Testing Results - -```bash -$ python3 test_service_catalog.py -Testing Service Catalog event parsing... -============================================================ -✓ Found 19 provisioning parameters -✓ ServiceCatalogInput validation successful -✓ Converted to template settings format -✓ All tests passed! -``` - -## Files Modified - -1. `template_automation/app.py` - Complete lambda_handler rewrite -2. `events/test-event.json` - Converted to Service Catalog format -3. 
`events/service-catalog-event.json` - New Service Catalog example -4. `README.md` - Complete rewrite -5. `test_service_catalog.py` - New test script - -## Files Unchanged - -- All provider implementations (`github_provider.py`, `gitlab_provider.py`) -- Repository provider interface (`repository_provider.py`) -- Models (`models.py`) -- Infrastructure files (Terraform, Dockerfile, etc.) -- Tests (existing tests may need updates) - -## Next Steps - -1. **Update Tests**: Modify existing tests to use Service Catalog event format -2. **EventBridge Rule**: Configure EventBridge to trigger Lambda on Service Catalog events -3. **Service Catalog Product**: Define product with appropriate parameters -4. **IAM Permissions**: Ensure Lambda has permissions to process EventBridge events -5. **Monitoring**: Set up CloudWatch alarms for Lambda failures - -## Deployment Considerations - -- **Container Image**: Existing Dockerfile and Packer configuration can be reused -- **Environment Variables**: No changes required (same as original) -- **IAM Role**: May need additional permissions for EventBridge event processing -- **Trigger**: Change from API Gateway/direct invoke to EventBridge rule - -## Sample EventBridge Rule - -```json -{ - "source": ["aws.servicecatalog"], - "detail-type": ["Service Catalog Product Provisioning"], - "detail": { - "eventName": ["ProvisionProduct"], - "status": ["SUCCEEDED"] - } -} -``` - -## Validation - -The code has been validated for: -- ✅ Python syntax (no compilation errors) -- ✅ Event structure parsing -- ✅ Parameter extraction and conversion -- ✅ Pydantic model validation (v2 compatibility) - -## Known Limitations - -1. Team permissions only work with GitHub provider (not GitLab) -2. Requires all Service Catalog parameters to be flat (nested objects become strings) -3. Special handling only for `tags` parameter (must be a dict) -4. No validation of Service Catalog event authenticity (trusts EventBridge) - -## Support - -For issues: -1. 
Check CloudWatch Logs with request ID -2. Verify event structure matches expected format -3. Confirm provisioning parameters include `project_name` -4. Check GitHub/GitLab provider configuration diff --git a/PACKER_UPDATES.md b/PACKER_UPDATES.md index 20efde7f..00c9b246 100644 --- a/PACKER_UPDATES.md +++ b/PACKER_UPDATES.md @@ -1,8 +1,8 @@ -# Packer Configuration Updates for Service Catalog Lambda +# Packer Configuration Updates for Repository Generator Lambda ## Summary of Changes -The Packer configuration has been updated to build a **Service Catalog-specific** Lambda container image that processes AWS Service Catalog provisioning events. +The Packer configuration has been updated to build the Lambda container image that processes **CloudFormation Custom Resource** events for Service Catalog repository creation. ## Files Modified @@ -60,7 +60,7 @@ The Packer template builds a Lambda container image with these characteristics: 1. **Base Image**: AWS Lambda Python 3.11 (`public.ecr.aws/lambda/python:3.11`) 2. **Handler**: `template_automation.app.lambda_handler` 3. **Dependencies**: Installed from `requirements.txt` including: - - pydantic (v2) for Service Catalog event validation + - pydantic (v2) for CloudFormation parameter validation - boto3 for AWS service integration - requests for GitHub/GitLab API calls @@ -79,9 +79,9 @@ packer build \ ## Deployment Integration The built container image is designed to be: -1. **Triggered by**: AWS EventBridge rules filtering Service Catalog events -2. **Event format**: Service Catalog provisioning events with `provisioningParameters` -3. **Output**: Creates GitHub/GitLab repositories with configuration from Service Catalog +1. **Triggered by**: CloudFormation via Custom Resource +2. **Event format**: CloudFormation Custom Resource events (Create, Update, Delete) +3. 
**Output**: Creates GitHub/GitLab repositories with configuration from CloudFormation parameters ## Validation @@ -98,8 +98,9 @@ To deploy the Lambda function: 1. Build the container image using Packer 2. Push to ECR (automated by Packer post-processor) 3. Create Lambda function using the container image -4. Configure EventBridge rule to trigger on Service Catalog events -5. Set environment variables for GitHub/GitLab integration +4. Grant CloudFormation permission to invoke the Lambda +5. Create a Service Catalog product using `cloudformation-template.yaml` +6. Set environment variables for GitHub/GitLab integration ## Environment Variables Required @@ -111,18 +112,3 @@ The Lambda function requires these environment variables: - `GITHUB_ORG_NAME` or `GITLAB_GROUP_NAME`: Organization/group name - `GITHUB_TOKEN_SECRET_NAME` or `GITLAB_TOKEN_SECRET_NAME`: Secrets Manager secret name - `VERIFY_SSL`: SSL verification (default: true) - -## EventBridge Rule Example - -```json -{ - "source": ["aws.servicecatalog"], - "detail-type": ["Service Catalog Product Provisioning"], - "detail": { - "eventName": ["ProvisionProduct"], - "status": ["SUCCEEDED"] - } -} -``` - -This ensures the Lambda only processes successful Service Catalog provisioning events. 
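The Create/Update/Delete event handling described above can be sketched as follows. This is a simplified stand-in for `template_automation.app.lambda_handler`, not the shipped code, and the real handler must also POST its result to the event's `ResponseURL`:

```python
def lambda_handler(event, context):
    """Minimal Custom Resource dispatch sketch (illustrative only)."""
    request_type = event.get("RequestType", "Create")
    props = event.get("ResourceProperties", {})

    if request_type == "Create":
        # Real code would call the GitHub/GitLab provider to create the repo;
        # the URL below is a placeholder domain.
        repo_url = f"https://github.example/org/{props.get('ProjectName', 'unnamed')}"
        return {"Status": "SUCCESS", "Data": {"repository_url": repo_url}}

    if request_type == "Update":
        # Re-render configuration; the repository itself is left in place
        return {"Status": "SUCCESS", "Data": {}}

    # Delete: acknowledge so the stack can finish; repository cleanup stays manual
    return {"Status": "SUCCESS", "Data": {}}
```

Returning `SUCCESS` on Delete is deliberate: failing the delete would wedge stack teardown even though repository cleanup is a manual step.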
diff --git a/design-docs/CUSTOM_TEMPLATES.MD b/design-docs/CUSTOM_TEMPLATES.MD index 52b42d4f..1a3d089a 100644 --- a/design-docs/CUSTOM_TEMPLATES.MD +++ b/design-docs/CUSTOM_TEMPLATES.MD @@ -17,35 +17,37 @@ Allows using a specific subdirectory from a template repository, enabling: ### Creating from Full Repository -```json -{ - "action": "create", - "project_name": "my-service", - "template_settings": { - "type": "service", - "environment": "prod", - "variables": { - "region": "us-west-2" - } - } -} +CloudFormation Template snippet: + +```yaml +Resources: + MyServiceRepo: + Type: Custom::RepositoryCreator + Properties: + ServiceToken: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:repo-generator' + ProjectName: "my-service" + # Other variables will be passed to template settings + Type: "service" + Environment: "prod" + Region: "us-west-2" ``` ### Creating from Subdirectory -```json -{ - "action": "create", - "project_name": "my-service", - "template_settings": { - "type": "service", - "environment": "prod", - "source_path": "templates/microservice", - "variables": { - "region": "us-west-2" - } - } -} +CloudFormation Template snippet: + +```yaml +Resources: + MyMicroserviceRepo: + Type: Custom::RepositoryCreator + Properties: + ServiceToken: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:repo-generator' + ProjectName: "my-service" + # Specify SourcePath to use a subdirectory of the template repo + SourcePath: "templates/microservice" + Type: "service" + Environment: "prod" + Region: "us-west-2" ``` ## Template Organization diff --git a/design-docs/README.md b/design-docs/README.md index f8c2601d..f1015161 100644 --- a/design-docs/README.md +++ b/design-docs/README.md @@ -8,7 +8,8 @@ The Template Automation System is designed to be a generic, template-agnostic in #### terraform-aws-template-automation This is the foundational Terraform module that deploys the automation infrastructure: -- Deploys the Lambda function and required AWS 
resources (API Gateway, IAM roles, etc.) +- Deploys the Lambda function and required AWS resources (IAM roles, etc.) +- Configures the Lambda as a CloudFormation Custom Resource integration - Manages any required SSM parameters or Secrets - Provides a reusable module that can be included in any AWS environment - Template-agnostic - works with any type of repository template @@ -17,6 +18,7 @@ This is the foundational Terraform module that deploys the automation infrastruc This is the engine of the automation system: - Implements the core repository templating logic in template_automation/app.py - Packaged as a Docker image for Lambda deployment +- Handles CloudFormation Custom Resource events (Create, Update, Delete) - Handles repository creation, branch management, and PR automation - Template-agnostic - can work with any properly structured template repository diff --git a/design-docs/REPO_VARS_AND_SECRETS.md b/design-docs/REPO_VARS_AND_SECRETS.md index 728942bb..87891370 100644 --- a/design-docs/REPO_VARS_AND_SECRETS.md +++ b/design-docs/REPO_VARS_AND_SECRETS.md @@ -183,17 +183,17 @@ aws ssm put-parameter \ ### Creating a Repository with Secrets -Create a new EKS cluster repository with the Lambda function: - -```json -{ - "action": "create", - "project_name": "production-eks", - "template_settings": { - "type": "eks-cluster", - "environment": "production" - } -} +Create a new EKS cluster repository via CloudFormation Custom Resource: + +```yaml +Resources: + EKSClusterRepo: + Type: Custom::RepositoryCreator + Properties: + ServiceToken: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:repo-generator' + ProjectName: "production-eks" + Type: "eks-cluster" + Environment: "production" ``` The Lambda function will: @@ -226,23 +226,12 @@ ESxK2ld9J4mCpA-ghi8932jk... 
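The extra properties in the Custom Resource snippet above (`Type`, `Environment`) flow through into the generated repository configuration's attrs/tags structure. A hypothetical sketch of that mapping, assuming a simple lowercase key convention (the real conversion lives in `template_automation/app.py`):

```python
def properties_to_settings(properties):
    """Split Custom Resource properties into the attrs/tags config structure."""
    # Keys CloudFormation or the handler consumes directly, not template attrs
    reserved = {"ServiceToken", "ProjectName", "OwningTeam"}
    attrs = {key.lower(): value for key, value in properties.items() if key not in reserved}
    settings = {"attrs": attrs}
    if isinstance(attrs.get("tags"), dict):
        # tags get their own top-level block rather than living under attrs
        settings["tags"] = attrs.pop("tags")
    return settings
```

Everything not reserved lands under `attrs`, mirroring the `config.json` shape the Lambda writes into new repositories.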
### Destroying a Repository -To clean up a repository and its associated secrets/variables: - -```json -{ - "action": "destroy", - "project_name": "production-eks", - "destroy_token": "ESxK2ld9J4mCpA-ghi8932jk..." -} -``` - -The Lambda function will: -1. Validate the provided destroy token -2. Delete all repository secrets -3. Delete all repository variables -4. Delete the repository itself +When a CloudFormation stack is deleted, the Lambda receives a `Delete` request type. +The Lambda acknowledges the delete but does **not** automatically delete the repository (manual cleanup required). -If an invalid destroy token is provided, the operation will fail with an error. +To manually clean up: +1. Delete repository secrets and variables via GitHub/GitLab API +2. Delete the repository itself ## Future Enhancements diff --git a/docs/callnotes.md b/docs/callnotes.md deleted file mode 100644 index dd5a747c..00000000 --- a/docs/callnotes.md +++ /dev/null @@ -1,24 +0,0 @@ -# Meeting Notes - -## Participants -- Srinivasa R Nangunuri (CENSUS/CSVD FED) -- Matthew Creal Morgan (CENSUS/CSVD CTR) -- David John Arnold Jr (CENSUS/CSVD CTR) - -## Key Issues -1. 
Environment details not appearing in README file - - Needs to be fixed - - Estimated work time: couple of hours - -## Timeline -- Implementation needed within next 24 hours -- Current progress: 90% complete -- Work estimate: 2-3 hours -- Follow-up meeting scheduled for tomorrow, same time - -## Next Steps -- David to implement environment details fix either tonight or tomorrow morning -- Team to reconvene tomorrow at the same time to review changes - -## Status -Current completion status reported to leadership: 90% complete \ No newline at end of file diff --git a/docs/callnotes.txt b/docs/callnotes.txt deleted file mode 100644 index d2240f3f..00000000 --- a/docs/callnotes.txt +++ /dev/null @@ -1,267 +0,0 @@ -SN -Srinivasa R Nangunuri (CENSUS/CSVD FED) -0 minutes 4 seconds0:04 -Srinivasa R Nangunuri (CENSUS/CSVD FED) 0 minutes 4 seconds -Yeah. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -0 minutes 4 seconds0:04 -Matthew Creal Morgan (CENSUS/CSVD CTR) 0 minutes 4 seconds -Yeah. -D -David John Arnold Jr (CENSUS/CSVD CTR) -0 minutes 5 seconds0:05 -David John Arnold Jr (CENSUS/CSVD CTR) 0 minutes 5 seconds -I had no idea. -David John Arnold Jr (CENSUS/CSVD CTR) 0 minutes 6 seconds -OK, right on sweet. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -0 minutes 8 seconds0:08 -Matthew Creal Morgan (CENSUS/CSVD CTR) 0 minutes 8 seconds -It's alright, it's fine. -Matthew Creal Morgan (CENSUS/CSVD CTR) 0 minutes 9 seconds -Don't worry about it. -Matthew Creal Morgan (CENSUS/CSVD CTR) 0 minutes 10 seconds -OK. -Matthew Creal Morgan (CENSUS/CSVD CTR) 0 minutes 11 seconds -So environment is not coming through in environment details in the README. -D -David John Arnold Jr (CENSUS/CSVD CTR) -0 minutes 16 seconds0:16 -David John Arnold Jr (CENSUS/CSVD CTR) 0 minutes 16 seconds -Yep. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -0 minutes 16 seconds0:16 -Matthew Creal Morgan (CENSUS/CSVD CTR) 0 minutes 16 seconds -That's one. 
-Matthew Creal Morgan (CENSUS/CSVD CTR) 0 minutes 19 seconds -Umm. -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 10 seconds -You give me admin.org and I'll clean up all those reports. I can give you anything if I don't have admin on those repos, which I probably don't, I can't do anything about it. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -7 minutes 11 seconds7:11 -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 11 seconds -Yeah. -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 13 seconds -No, definitely not that. -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 20 seconds -Yeah, I don't. -D -David John Arnold Jr (CENSUS/CSVD CTR) -7 minutes 21 seconds7:21 -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 21 seconds -So. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -7 minutes 23 seconds7:23 -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 23 seconds -I don't think I have admin to do anything about it. -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 26 seconds -Yeah. I don't even have admin. -D -David John Arnold Jr (CENSUS/CSVD CTR) -7 minutes 27 seconds7:27 -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 27 seconds -33. -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 29 seconds -Exactly. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -7 minutes 29 seconds7:29 -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 29 seconds -So what we'll need to do is just put a list together of the repos that we need deleted, and then we'll pass that over to Youssef and he can take care of it. -D -David John Arnold Jr (CENSUS/CSVD CTR) -7 minutes 31 seconds7:31 -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 31 seconds -Play. -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 41 seconds -Yeah, alright. -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 41 seconds -I'm good with that. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -7 minutes 43 seconds7:43 -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 43 seconds -I know that that's a pain in the ***. 
-Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 44 seconds -It'd be so much easier just doing our damn selves, but you know. -D -David John Arnold Jr (CENSUS/CSVD CTR) -7 minutes 47 seconds7:47 -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 47 seconds -Help. -SN -Srinivasa R Nangunuri (CENSUS/CSVD FED) -7 minutes 47 seconds7:47 -Srinivasa R Nangunuri (CENSUS/CSVD FED) 7 minutes 47 seconds -Yeah. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -7 minutes 49 seconds7:49 -Matthew Creal Morgan (CENSUS/CSVD CTR) 7 minutes 49 seconds -OK, so so Srini, what's your report for leadership from this call? -D -David John Arnold Jr (CENSUS/CSVD CTR) -7 minutes 49 seconds7:49 -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 49 seconds -All right. -David John Arnold Jr (CENSUS/CSVD CTR) 7 minutes 52 seconds -Oh. -SN -Srinivasa R Nangunuri (CENSUS/CSVD FED) -7 minutes 59 seconds7:59 -Srinivasa R Nangunuri (CENSUS/CSVD FED) 7 minutes 59 seconds -90% done. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -8 minutes 1 second8:01 -Matthew Creal Morgan (CENSUS/CSVD CTR) 8 minutes 1 second -There we go. -SN -Srinivasa R Nangunuri (CENSUS/CSVD FED) -8 minutes 2 seconds8:02 -Srinivasa R Nangunuri (CENSUS/CSVD FED) 8 minutes 2 seconds -Yep, yes, I I I I'll always. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -8 minutes 4 seconds8:04 -Matthew Creal Morgan (CENSUS/CSVD CTR) 8 minutes 4 seconds -OK. -SN -Srinivasa R Nangunuri (CENSUS/CSVD FED) -8 minutes 6 seconds8:06 -Srinivasa R Nangunuri (CENSUS/CSVD FED) 8 minutes 6 seconds -Do the like favorable report? -Srinivasa R Nangunuri (CENSUS/CSVD FED) 8 minutes 9 seconds -No, no worries. -MM -Matthew Creal Morgan (CENSUS/CSVD CTR) -8 minutes 9 seconds8:09 -Matthew Creal Morgan (CENSUS/CSVD CTR) 8 minutes 9 seconds -OK. -SN -Srinivasa R Nangunuri (CENSUS/CSVD FED) -8 minutes 10 seconds8:10 -Srinivasa R Nangunuri (CENSUS/CSVD FED) 8 minutes 10 seconds -Yeah, what? -Srinivasa R Nangunuri (CENSUS/CSVD FED) 8 minutes 11 seconds -What? 
diff --git a/docs/gitlab-migration.md b/docs/gitlab-migration.md deleted file mode 100644 index 00f46795..00000000 --- a/docs/gitlab-migration.md +++ /dev/null @@ -1,71 +0,0 @@ -# GitLab Migration: What Actually Needs to Get Done - -## 1. Update Lambda Function to Use GitLab -- Replace all GitHub API usage (PyGithub) with GitLab API usage (`python-gitlab`). - - Remove PyGithub from requirements.txt, add python-gitlab. - - Update all import statements and API calls in your Lambda code (e.g., `app.py`, `template_automation/`). -- Update authentication to use a GitLab token (Personal Access Token with `api` scope). - - Store the token in AWS SSM or Secrets Manager, update Lambda environment/config to use it. -- Change all repo creation, file commit, and merge/pull request logic to use GitLab's API and terminology. - - GitHub: `repo.create_pull` → GitLab: `project.mergerequests.create` - - GitHub: `repo.create_file` → GitLab: `project.files.create` - - GitHub: `org.create_repo_from_template` → GitLab: fork or create project, then push files -- Update config/env vars: - - Use `GITLAB_API_URL`, `GITLAB_GROUP_ID` (or `GITLAB_NAMESPACE`), and `GITLAB_TOKEN` instead of GitHub equivalents. -- Test Lambda end-to-end with a real GitLab group/project. - -## 2. Migrate CI/CD to AWS CodeBuild -- Convert your GitHub Actions workflow (e.g., `.github/workflows/initialize.yml`) to a `buildspec.yml` for CodeBuild. - - Each step in the workflow should become a phase in `buildspec.yml` (install, pre_build, build, post_build).
- - Example: - ```yaml - version: 0.2 - phases: - install: - commands: - - pip install ansible - - pip install -r requirements.txt || true - build: - commands: - - ansible-playbook ansible/generate_hcl_files.yml -e "config_file=config.json" - - git add -A - - git diff --staged --quiet || git commit -m "Initialize repository structure from template" - - git push origin HEAD:repo-init || true - ``` -- Set up AWS CodeBuild projects for each repo that needs CI/CD. - - Use the AWS Console or Terraform to create the projects. - - Make sure CodeBuild has permissions to pull from GitLab and push to your repos. -- Set up triggers so CodeBuild runs on changes: - - Use GitLab webhooks to trigger a Lambda that starts CodeBuild, or use AWS CodeStar Connections if available for GitLab. - - Make sure the trigger covers the same events as your old GitHub Actions (e.g., PRs to `main`/`master`, manual triggers). -- Test CodeBuild by pushing a change to a test branch and verifying the pipeline runs and updates the repo as expected. - -## 3. Update Documentation -- Change all references from GitHub to GitLab in your README files and internal docs. -- Document the new workflow: - - How to trigger the pipeline (CodeBuild) - - How to configure the Lambda for GitLab - - Any new environment variables or secrets -- Remove or update any GitHub Actions badges, links, or instructions. - -## 4. Test Everything -- Test the Lambda function end-to-end with GitLab: - - Trigger a repo creation and make sure it works as expected. - - Check that files, branches, and merge requests are created correctly. -- Test CodeBuild pipelines: - - Make sure they run on the right events and update the repo as expected. - - Check logs for errors and fix any issues. -- Validate that new repos are created, initialized, and built as expected. - -## 5. Coordinate Cutover -- Wait for the repo migration team to finish moving code to GitLab. 
-- Switch all automation, scripts, and users to use the new GitLab URLs and CodeBuild pipelines. -- Monitor for issues and fix anything that breaks. -- Announce the cutover to your team and update any onboarding or support docs. - --- - -**Summary:** -- Focus on Lambda code changes, CI/CD migration, documentation, and testing. -- Don’t worry about the actual repo migration (another team is handling it). -- Make sure everything works with GitLab and CodeBuild before switching over. \ No newline at end of file diff --git a/docs/source/conf.py b/docs/source/conf.py index 95510f0b..4555bf6e 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -12,7 +12,6 @@ 'sphinx.ext.napoleon', # Support for Google-style docstrings 'sphinx.ext.viewcode', # Add links to source code 'sphinx.ext.intersphinx', # Link to other project's documentation - 'sphinx_autodoc_typehints', # Support for type hints ] # -- Options for autodoc ---------------------------------------------------- @@ -20,20 +19,47 @@ 'members': True, 'undoc-members': True, 'show-inheritance': True, - 'special-members': '__init__', 'imported-members': False, # Don't document imported members } +# Render the class signature separately from the class name, so autodoc does +# not embed the Pydantic-generated __init__ signature in the class directive +autodoc_class_signature = 'separated' + # Don't document imported members in app module autodoc_mock_imports = ['github'] autodoc_member_order = 'bysource' +# Skip Pydantic Field objects that cause instantiation issues +def autodoc_skip_member_handler(app, what, name, obj, skip, options): + """Skip members that cause Pydantic instantiation errors.""" + # Skip model_config, Field objects, and other Pydantic internals + if name in ('model_config', 'model_fields', 'model_computed_fields'): + return True + return skip + +def setup(app): + app.connect('autodoc-skip-member', autodoc_skip_member_handler) + +# Monkey-patch napoleon's _skip_member to handle Pydantic v2 Field objects +# that raise PydanticUserError when getattr is called on them +import
sphinx.ext.napoleon as _napoleon + +_original_skip_member = _napoleon._skip_member + +def _patched_skip_member(app, what, name, obj, skip, options): + try: + return _original_skip_member(app, what, name, obj, skip, options) + except Exception: + return None # Don't skip, let autodoc handle it + +_napoleon._skip_member = _patched_skip_member + # Napoleon settings for Google-style docstrings napoleon_google_docstring = True napoleon_numpy_docstring = False -napoleon_include_init_with_doc = True -napoleon_include_private_with_doc = True -napoleon_include_special_with_doc = True +napoleon_include_init_with_doc = False # Disabled to prevent Pydantic v2 instantiation errors +napoleon_include_private_with_doc = False +napoleon_include_special_with_doc = False napoleon_use_admonition_for_examples = True napoleon_use_admonition_for_notes = True napoleon_use_admonition_for_references = True diff --git a/docs/source/index.rst b/docs/source/index.rst index 60273e14..c2267b39 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -1,22 +1,24 @@ Template Automation Lambda Documentation -===================================== +========================================= -Welcome to the Template Automation Lambda documentation. This system provides a flexible -template automation framework for creating and configuring repositories from templates. +Welcome to the Template Automation Lambda documentation. This system provides a +CloudFormation Custom Resource-backed Lambda function for creating and configuring +repositories from templates via AWS Service Catalog. Quick Start ----------- +----------- -The Template Automation Lambda is an AWS Lambda function that automates the process of creating -repositories from templates. It handles: +The Template Automation Lambda is an AWS Lambda function invoked as a **CloudFormation +Custom Resource**. 
It automates: -- Repository creation from templates -- Template rendering with variable substitution -- Pull request creation with customizable settings -- Workflow automation triggers +- Repository creation from templates (GitHub and GitLab) +- Template content cloning to new repositories +- Configuration file generation (``config.json``) +- Pull/merge request creation with customizable settings +- Team permission assignment (GitHub) Installation ------------ +------------ To install the package and its dependencies: @@ -28,27 +30,37 @@ Usage ----- -Basic usage example: - -.. code-block:: python - - from template_automation.app import lambda_handler - - event = { - "project_name": "my-new-repo", - "owning_team": "devops", - "template_settings": { - "variables": { - "environment": "prod", - "region": "us-west-2" - } +The Lambda is designed to be invoked as a CloudFormation Custom Resource. + +Example CloudFormation Custom Resource event: + +.. code-block:: json + + { + "RequestType": "Create", + "ResponseURL": "https://pre-signed-s3-url-for-response", + "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/MyStack/guid", + "RequestId": "unique-request-id", + "ResourceType": "Custom::RepositoryCreator", + "LogicalResourceId": "MyRepository", + "ResourceProperties": { + "ServiceToken": "arn:aws:lambda:us-east-1:123456789012:function:repo-generator", + "ProjectName": "my-new-repo", + "OwningTeam": "devops", + "Environment": "prod", + "Region": "us-west-2" } } + +.. code-block:: python + + # In Python (for local testing): + from template_automation.app import lambda_handler + + # Construct a mock CloudFormation event + event = { ... } # as above lambda_handler(event, {}) API Documentation ---------------- +----------------- ..
toctree:: :maxdepth: 2 @@ -60,31 +72,34 @@ API Documentation modules/lambda_handler Core Components -------------- +--------------- +- :doc:`modules/lambda_handler` - AWS Lambda function entry point (CloudFormation Custom Resource handler) - :doc:`modules/github_client` - GitHub API interaction for repository and PR management - :doc:`modules/template_manager` - Template rendering and configuration handling - :doc:`modules/models` - Pydantic data models for input validation -- :doc:`modules/lambda_handler` - AWS Lambda function entry point Configuration ------------- +------------- The system uses several configuration models: -- **GitHubConfig**: GitHub API and authentication settings -- **WorkflowConfig**: Template workflow configuration -- **PRConfig**: Pull request settings -- **TemplateInput**: Input parameters for template processing +- **CloudFormationResourceInput**: Pydantic model for validating CloudFormation Custom Resource parameters +- **MergeRequestSettings**: Pull/merge request configuration +- **FileContent**: File content model for writing to repositories +- **GitHubProvider / GitLabProvider**: Repository provider implementations Environment Variables -------------------- +--------------------- Required environment variables: -- ``GITHUB_TOKEN``: GitHub Personal Access Token -- ``GITHUB_ORG``: GitHub Organization name -- ``TEMPLATE_REPO``: Template repository name +- ``GITHUB_TOKEN_SECRET_NAME`` or ``GITLAB_TOKEN_SECRET_NAME``: AWS Secrets Manager secret name containing the API token +- ``GITHUB_API`` or ``GITLAB_API``: API base URL for the Git provider +- ``GITHUB_ORG_NAME`` or ``GITLAB_GROUP_NAME``: Organization or group name +- ``TEMPLATE_REPO_NAME``: Template repository to clone from +- ``TEMPLATE_CONFIG_FILE``: Config file path in the repository (default: ``config.json``) +- ``VERIFY_SSL``: SSL verification toggle (default: ``true``) Indices and tables ================== diff --git a/docs/source/modules/lambda_handler.rst 
b/docs/source/modules/lambda_handler.rst index 9e1f993e..9ba7509f 100644 --- a/docs/source/modules/lambda_handler.rst +++ b/docs/source/modules/lambda_handler.rst @@ -1,7 +1,9 @@ Lambda Handler -============= +============== .. automodule:: template_automation.app - :members: lambda_handler, get_github_token + :members: lambda_handler, send_cfn_response, get_provider, get_secret, CloudFormationResourceInput + :exclude-members: model_config, model_fields, model_computed_fields :undoc-members: :show-inheritance: + :no-special-members: diff --git a/docs/tf-native-v2.md b/docs/tf-native-v2.md deleted file mode 100644 index 499762ad..00000000 --- a/docs/tf-native-v2.md +++ /dev/null @@ -1,83 +0,0 @@ -# Plan for Migrating to a Terraform-Native GitHub Repository Management Workflow (v2) - -This document provides a corrected and more accurate plan to replace the current Python Lambda-based workflow with a Terraform-native approach for creating and managing GitHub repositories. - -## 1. Corrected Analysis of the Current State - -The current system uses a Python-based AWS Lambda function (`template-automation-lambda`) to automate repository creation. It does **not** use Ansible for repository configuration as previously assumed. - -The workflow is as follows: -1. The Lambda function is invoked with details for a new repository (e.g., `project_name`, `template_settings`). -2. The Python script within the Lambda performs a series of GitHub API calls to: - a. Create a new repository. - b. Clone the contents of a base template repository into it. - c. Add a custom `config.json` file. - d. Assign team permissions. - e. Create a pull request to merge the initial setup into the `main` branch. - -This process, while effective, involves a custom-built Python application, a Lambda deployment, an ECR container registry, and associated IAM roles, making it complex to maintain. - -## 2. 
Proposed Solution: A Purely Terraform-Native Workflow - -We will replace the entire Lambda-based system with a declarative Terraform configuration that uses the `terraform-github-repo` module. This module natively supports the actions currently performed by the Python script. - -The new process will be: -1. A developer defines a new repository by adding a `module` block to a central Terraform configuration file. -2. Running `terraform apply` will instruct Terraform to perform all the necessary setup steps. - -### Mapping Lambda Actions to Terraform Resources - -| Action (Current Python Lambda) | Terraform Equivalent (`terraform-github-repo` module) | -| :--- | :--- | -| 1. Create a new repository. | The `github_repository` resource. The module uses this internally. | -| 2. Clone a template repository. | The `template` block within the `github_repository` resource. This is a native feature for creating a repo from a template. | -| 3. Write a `config.json` file. | The `github_repository_file` resource. The module accepts a `files` variable to manage this. | -| 4. Assign team permissions. | The `github_team_repository` resource. The module has a `teams` input for this. | -| 5. Create a pull request. | The `github_pull_request` resource. This can be defined outside the module to initialize the repository. | - -## 3. Detailed Migration Plan - -### Phase 1: Scoping and Proof of Concept - -1. **Identify All Template Variables:** Document all the key-value pairs that are currently passed into the `template_settings` of the Lambda. These will become variables in our new Terraform module. - -2. **Create a Wrapper Module:** Create a new, internal Terraform module that wraps the `terraform-github-repo` module. This wrapper will provide a simplified interface for our developers and contain the logic for creating the initial pull request. - -3. 
**Develop the PoC:** In a test environment, write a Terraform configuration that uses this new wrapper module to create a single, non-critical repository. The configuration should: - * Use the `template` feature to create the repository from the existing template repo. - * Use the `files` feature to add the `config.json`. - * Use the `teams` feature to grant permissions. - * Define a `github_pull_request` resource to create the initial PR. - -### Phase 2: Implementation and Import - -1. **Build Out the Full Configuration:** Create a new Git repository to house the Terraform configuration for all repositories that will be managed this way. - -2. **Import Existing Repositories:** For repositories previously created by the Lambda, use `terraform import` to bring them under Terraform's state management. This is a critical step to prevent any disruption. - * `terraform import module.my_repo.github_repository.this[0] my-repo-name` - * `terraform import module.my_repo.github_team_repository.teams["tf-module-admins"] my-repo-name:tf-module-admins` - -3. **Parallel Run:** For a transition period, both systems can exist. New repositories should be created using the Terraform method, while the Lambda is left in place to manage older ones if needed. - -### Phase 3: Testing and Validation - -1. **Dry Run with `terraform plan`:** Before applying any changes to a production repository, run `terraform plan` and carefully review the output to ensure it matches expectations and doesn't plan any destructive changes. - -2. **Full Application:** Once validated, apply the configuration to manage all target repositories. - -### Phase 4: Decommissioning - -Once all repositories are successfully managed by Terraform and the new workflow is stable, the old infrastructure can be safely removed. - -1. **Disable the Lambda Trigger:** The first step is to disable the mechanism that invokes the Lambda function. -2. 
**Delete the Lambda Function:** Remove the `template-automation-lambda` function from AWS. -3. **Delete the ECR Repository:** Delete the ECR repository holding the Lambda's container images. -4. **Delete the Deployment Pipeline:** Remove the `template-repos-lambda-deployment` Terraform configuration and state. -5. **Archive Old Repositories:** Archive the `template-automation-lambda` and `template-repos-lambda-deployment` Git repositories to mark them as deprecated. - -## 4. Rollback Plan - -If issues arise, we can revert by: -1. Removing the problematic repository from Terraform's state using `terraform state rm`. -2. Re-enabling or re-deploying the Lambda function to take over management again. -3. Manually correcting any unintended changes made by Terraform. diff --git a/docs/tf-native-v3.md b/docs/tf-native-v3.md deleted file mode 100644 index b723b86e..00000000 --- a/docs/tf-native-v3.md +++ /dev/null @@ -1,75 +0,0 @@ -# Plan for Migrating to a Terraform-Native EKS Deployment Workflow (v4) - -This document outlines the plan to replace the current Lambda/Ansible-based system with a streamlined, Terraform-native workflow for creating and configuring repositories for EKS deployments. - -## 1. Analysis of the Current State - -The current process for provisioning a new EKS cluster repository involves multiple, loosely-coupled components: - -1. **`template-automation-lambda`**: A Python Lambda function that creates a new GitHub repository from the `template-eks-cluster` template. It clones the template, adds a `config.json` file with user-provided settings, and opens a pull request. -2. **`generate_hcl_files.yml`**: An Ansible playbook inside the newly created repository that is run manually after the initial PR is merged. It reads the `config.json` and generates a set of Terragrunt HCL files (`root.hcl`, `account.hcl`, `region.hcl`, etc.). -3. 
**`terraform-eks-deployment`**: A Terraform module that is referenced by the generated Terragrunt configuration to deploy the actual EKS cluster. - -This workflow is complex, involves manual steps, and relies on a mix of technologies (Python, Lambda, Ansible, Terraform). - -## 2. Proposed Solution: A Unified, Terraform-Native Workflow - -We will create a single, unified Terraform workflow that handles the entire process of repository creation and configuration declaratively. This eliminates the need for the Lambda function and the Ansible playbook. - -The new process will be: -1. A developer defines a new EKS cluster by adding a single `module` block to a central Terraform configuration. -2. Running `terraform apply` will automatically: - a. Create a new GitHub repository. - b. Generate and commit all the necessary Terragrunt HCL files and `README.md`. - c. Configure team permissions for the repository. - -### Core Component: The New `terragrunt-eks-repo` Wrapper Module - -The centerpiece of this new workflow is a new Terraform module, `terragrunt-eks-repo`. This module will be responsible for all the setup logic. - -| Action (Old Workflow) | Terraform Equivalent (New `terragrunt-eks-repo` Module) | -| :--- | :--- | -| 1. Create a new repository from a template. | The module will call the `terraform-github-repo` module internally, using its `template` feature to clone from `template-eks-cluster`. | -| 2. Generate HCL files from `config.json`. | The module will contain HCL templates (`.tf.tpl` files). It will use Terraform's `templatefile()` function to render the final HCL content directly from its input variables. | -| 3. Write files to the repository. | The rendered file content will be passed to the `files` input of the underlying `terraform-github-repo` module, which uses the `github_repository_file` resource to commit them. | -| 4. Assign team permissions. 
| The module will accept a `teams` variable and pass it to the `terraform-github-repo` module to configure permissions using the `github_team_repository` resource. | - -## 3. Detailed Migration Plan - -### Phase 1: Develop the `terragrunt-eks-repo` Module - -1. **Create Module Scaffolding:** Create a new directory for the `terragrunt-eks-repo` module. - -2. **Define Input Variables:** Create a `variables.tf` file. The variables will be derived directly from the `generate_hcl_files.yml` playbook's `config` object (e.g., `environment`, `region`, `cluster_name`, `account`, `vpc`, etc.). - -3. **Create HCL Templates:** Create a `templates` directory within the module. For each file generated by the Ansible playbook (`root.hcl`, `account.hcl`, `region.hcl`, `vpc.hcl`, `cluster.hcl`, and `README.md`), create a corresponding `.tf.tpl` template file. Convert the Jinja2 syntax to Terraform's `${...}` interpolation syntax. - -4. **Implement Module Logic (`main.tf`):** - * Use `locals` to render the file content for each template using the `templatefile()` function. - * Call the `terraform-github-repo` module. - * Pass the repository name, template configuration, and team permissions to the module. - * Map the rendered local variables to the `files` input of the `terraform-github-repo` module. This will instruct it to create the files in the new repository. - -### Phase 2: Implementation and Onboarding - -1. **Create a Central Management Repository:** Set up a new Git repository (e.g., `terragrunt-environments`) that will contain the Terraform configuration for creating all new EKS cluster repositories. - -2. **Onboard a Pilot Project:** In the new management repository, add a `main.tf` file. Add a module block that calls the newly created `terragrunt-eks-repo` module to provision a repository for a new test cluster. - -3. **Execute and Validate:** Run `terraform apply` to create the repository. Verify that: - * The repository is created on GitHub. 
- * It is correctly initialized from the `template-eks-cluster` template. - * All the Terragrunt HCL files and the `README.md` are present and correctly populated with the variable values. - * Team permissions are correctly assigned. - -### Phase 3: Decommissioning the Old Workflow - -Since we are not concerned with migrating existing repositories, the decommissioning process is straightforward. Once the new workflow is validated and adopted for all new cluster provisioning: - -1. **Disable the Lambda Function:** The Lambda trigger can be disabled in AWS. -2. **Archive Old Repositories:** The `template-automation-lambda` and `template-repos-lambda-deployment` Git repositories should be archived to prevent further use. -3. **Delete AWS Resources:** The old AWS resources (Lambda function, ECR repository, IAM roles) can be deleted via Terraform from the `template-repos-lambda-deployment` project. - -## 4. Rollback Plan - -As we are not migrating existing resources, a rollback is not applicable in the traditional sense. If the new workflow fails for a new repository, the state can be destroyed (`terraform destroy`), the module can be fixed, and the process can be re-run. The old Lambda-based system can be temporarily kept available for emergency use until the new workflow is fully proven. diff --git a/docs/tf-native-v4.md b/docs/tf-native-v4.md deleted file mode 100644 index 3d9245fa..00000000 --- a/docs/tf-native-v4.md +++ /dev/null @@ -1,79 +0,0 @@ -# Plan for Migrating to a Terraform-Native EKS Deployment Workflow (v4) - -This document outlines the plan to replace the current Lambda/Ansible-based system with a streamlined, Terraform-native workflow by enhancing the `terraform-eks-deployment` module itself. - -## 1. Analysis of the Current State - -The current process for provisioning a new EKS cluster repository involves multiple components: - -1. 
**`template-automation-lambda`**: A Python Lambda function that creates a new GitHub repository from the `template-eks-cluster` template. -2. **`generate_hcl_files.yml`**: An Ansible playbook inside the new repository that is run manually to generate a set of Terragrunt HCL files (`root.hcl`, `account.hcl`, etc.). -3. **`terraform-eks-deployment`**: The Terraform module that is referenced by the generated Terragrunt configuration to deploy the actual EKS cluster. - -This workflow is complex, involves manual steps, and relies on a mix of technologies. - -## 2. Proposed Solution: A Unified, All-in-One EKS Deployment Module - -We will consolidate the entire workflow into the `terraform-eks-deployment` module. This module will be enhanced to handle not only the EKS deployment but also the initial GitHub repository creation and configuration. This eliminates the need for the Lambda function and the Ansible playbook. - -The new, unified process will be: -1. A developer defines a new EKS cluster by adding a single `module "eks_deployment"` block to a central Terraform configuration. -2. By setting `create_repository = true`, the developer instructs the module to perform the initial setup. -3. Running `terraform apply` will automatically: - a. Create a new GitHub repository using the `terraform-github-repo` module as a submodule. - b. Generate and commit all the necessary Terragrunt HCL files and a `README.md`. - c. Configure team permissions for the repository. - -The same module, when referenced from within the newly created repository's Terragrunt files, will have `create_repository = false` and will proceed with deploying the EKS cluster as it does today. - -### Core Component: The Enhanced `terraform-eks-deployment` Module - -| Action (Old Workflow) | Terraform Equivalent (Inside `terraform-eks-deployment`) | -| :--- | :--- | -| 1. Create a new repository from a template. 
| A new submodule block calling `terraform-github-repo` will be added, controlled by a `create_repository` flag. | -| 2. Generate HCL files from `config.json`. | The module will contain a new `templates` directory with HCL templates (`.tf.tpl`). It will use `templatefile()` to render the final HCL content from its input variables. | -| 3. Write files to the repository. | The rendered file content will be passed to the `files` input of the `terraform-github-repo` submodule. | -| 4. Assign team permissions. | The module will accept a `teams` variable and pass it to the `terraform-github-repo` submodule. | - -## 3. Detailed Migration Plan - -### Phase 1: Enhance the `terraform-eks-deployment` Module - -1. **Add Input Variables:** In `variables.tf`, add new variables: - * `create_repository`: A boolean to control whether to execute the repository creation logic. Default to `false`. - * `repository_name`: The name of the GitHub repository to create. - * `repository_teams`: A map of teams and their permissions for the new repository. - * Variables derived from the `generate_hcl_files.yml` playbook's `config` object (e.g., `environment`, `region`, `cluster_name`, `account`, `vpc`, etc.). - -2. **Create HCL Templates:** Create a `templates` directory within the module. For each file generated by the Ansible playbook (`root.hcl`, `account.hcl`, `region.hcl`, `vpc.hcl`, `cluster.hcl`, and `README.md`), create a corresponding `.tf.tpl` template file. Convert the Jinja2 syntax to Terraform's `${...}` interpolation syntax. - -3. **Implement Module Logic (`main.tf`):** - * Use `locals` to render the file content for each template using the `templatefile()` function. - * Add a `module "github_repo"` block that calls the `terraform-github-repo` module. - * Set the `count` of this submodule to `var.create_repository ? 1 : 0`. - * Pass the repository name, template configuration, and team permissions to the submodule. 
- * Map the rendered local variables to the `files` input of the `github_repo` submodule. - -### Phase 2: Implementation and Onboarding - -1. **Create a Central Management Repository:** Set up a new Git repository (e.g., `terragrunt-environments`) that will contain the Terraform configuration for creating all new EKS cluster repositories. - -2. **Onboard a Pilot Project:** In the new management repository, add a `main.tf` file. Add a module block that calls the enhanced `terraform-eks-deployment` module with `create_repository = true` to provision a repository for a new test cluster. - -3. **Execute and Validate:** Run `terraform apply` to create the repository. Verify that: - * The repository is created on GitHub. - * It is correctly initialized from the `template-eks-cluster` template. - * All the Terragrunt HCL files and the `README.md` are present and correctly populated. - * Team permissions are correctly assigned. - -### Phase 3: Decommissioning the Old Workflow - -Since we are not migrating existing repositories, the decommissioning process is straightforward. - -1. **Disable the Lambda Function:** The Lambda trigger can be disabled in AWS. -2. **Archive Old Repositories:** The `template-automation-lambda` and `template-repos-lambda-deployment` Git repositories should be archived. -3. **Delete AWS Resources:** The old AWS resources (Lambda, ECR, IAM roles) can be deleted. - -## 4. Rollback Plan - -A rollback is not applicable in the traditional sense. If the new workflow fails, the state can be destroyed (`terraform destroy`), the module can be fixed, and the process can be re-run. The old Lambda-based system can be kept available for emergency use. 
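For concreteness, the `create_repository` flag and `templatefile()` wiring that the (now-removed) v4 plan describes could look roughly like this. This is a hypothetical sketch under the plan's assumptions, not the module's actual interface: the variable names, template paths, and the `terraform-github-repo` inputs (`name`, `files`, `teams`) are all illustrative.

```hcl
# Hypothetical sketch of the enhanced terraform-eks-deployment interface
variable "create_repository" {
  description = "Whether to create and bootstrap the GitHub repository"
  type        = bool
  default     = false
}

variable "repository_name"  { type = string }
variable "repository_teams" { type = map(string) }
variable "environment"      { type = string }
variable "cluster_name"     { type = string }

locals {
  # Render the Terragrunt files from templates shipped inside the module;
  # the template inputs mirror the keys of the old Ansible config.json.
  rendered_files = {
    "root.hcl" = templatefile("${path.module}/templates/root.hcl.tpl", {
      environment = var.environment
    })
    "cluster.hcl" = templatefile("${path.module}/templates/cluster.hcl.tpl", {
      cluster_name = var.cluster_name
    })
  }
}

module "github_repo" {
  source = "../terraform-github-repo" # illustrative source
  count  = var.create_repository ? 1 : 0 # only bootstrap when asked to

  name  = var.repository_name
  files = local.rendered_files
  teams = var.repository_teams
}
```

With `create_repository = false` (the default) the submodule's `count` is zero, so the same module can be referenced from inside the generated repository without re-creating it.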
diff --git a/docs/tf-native-v5.md b/docs/tf-native-v5.md deleted file mode 100644 index 541d50f8..00000000 --- a/docs/tf-native-v5.md +++ /dev/null @@ -1,72 +0,0 @@ -# Plan for Migrating to a Terraform-Native EKS Deployment Workflow (v5) - -This document outlines the plan to replace the current Lambda/Ansible-based system with a streamlined, Terraform-native workflow by enhancing the `terraform-eks-deployment` module. - -## 1. Analysis of the Current State - -The current process for provisioning a new EKS cluster repository involves multiple components: - -1. **`template-automation-lambda`**: A Python Lambda function that creates a new GitHub repository from a template. -2. **`generate_hcl_files.yml`**: An Ansible playbook inside the new repository that is run manually to generate a set of Terragrunt HCL files. -3. **`terraform-eks-deployment`**: The Terraform module that is referenced by the generated Terragrunt configuration to deploy the actual EKS cluster. - -This workflow is complex and involves manual steps. - -## 2. Proposed Solution: A Unified Repository Bootstrap Module - -We will consolidate the repository creation and configuration logic into the `terraform-eks-deployment` module. Its new, single purpose will be to bootstrap a fully configured GitHub repository for an EKS cluster. This eliminates the need for the Lambda function and the Ansible playbook. - -The new, unified process will be: -1. A developer defines a new EKS cluster repository by adding a single `module "eks_deployment"` block to a central Terraform configuration. -2. Running `terraform apply` will automatically: - a. Create a new GitHub repository using the `terraform-github-repo` module as a submodule. - b. Generate and commit all the necessary Terragrunt HCL files and a `README.md`. - c. Configure team permissions for the repository. - -The module will no longer be dual-purpose; it will *always* create a repository. 
The actual EKS deployment will be handled by the Terragrunt configuration within that new repository, which may in turn reference other modules. - -### Core Component: The Enhanced `terraform-eks-deployment` Module - -| Action (Old Workflow) | Terraform Equivalent (Inside `terraform-eks-deployment`) | -| :--- | :--- | -| 1. Create a new repository from a template. | A submodule block calling `terraform-github-repo` will create the repository. | -| 2. Generate HCL files from `config.json`. | The module will contain a `templates` directory with HCL templates (`.tf.tpl`). It will use `templatefile()` to render the final HCL content from its input variables. | -| 3. Write files to the repository. | The rendered file content will be passed to the `files` input of the `terraform-github-repo` submodule. | -| 4. Assign team permissions. | The module will accept a `teams` variable and pass it to the `terraform-github-repo` submodule. | - -## 3. Detailed Migration Plan - -### Phase 1: Enhance the `terraform-eks-deployment` Module - -1. **Define Input Variables:** In `variables.tf`, ensure all necessary variables are present. These are derived from the `generate_hcl_files.yml` playbook's `config` object (e.g., `repository_name`, `repository_teams`, `environment`, `region`, `cluster_name`, `account_config`, `vpc_config`, etc.). - -2. **Create HCL Templates:** Create a `templates` directory within the module. For each file generated by the Ansible playbook (`root.hcl`, `account.hcl`, `region.hcl`, `vpc.hcl`, `cluster.hcl`, and `README.md`), create a corresponding `.tf.tpl` template file. - -3. **Implement Module Logic (`main.tf`):** - * Use `locals` to render the file content for each template using the `templatefile()` function. - * Call the `terraform-github-repo` module unconditionally. - * Pass the repository name, template configuration, team permissions, and the rendered file content to the submodule. - -### Phase 2: Implementation and Onboarding - -1. 
**Integrate into a Management Repository:** The enhanced `terraform-eks-deployment` module will be consumed from a designated infrastructure management repository (e.g., `terragrunt-environments`). - -2. **Onboard a Pilot Project:** In the management repository, add a module block that calls the enhanced `terraform-eks-deployment` module to provision a repository for a new test cluster. - -3. **Execute and Validate:** Run `terraform apply` to create the repository. Verify that: - * The repository is created on GitHub. - * It is correctly initialized from the `template-eks-cluster` template. - * All the Terragrunt HCL files and the `README.md` are present and correctly populated. - * Team permissions are correctly assigned. - -### Phase 3: Decommissioning the Old Workflow - -Since we are not migrating existing repositories, the decommissioning process is straightforward. - -1. **Disable the Lambda Function:** The Lambda trigger can be disabled in AWS. -2. **Archive Old Repositories:** The `template-automation-lambda` and `template-repos-lambda-deployment` Git repositories should be archived. -3. **Delete AWS Resources:** The old AWS resources (Lambda, ECR, IAM roles) can be deleted. - -## 4. Rollback Plan - -A rollback is not applicable in the traditional sense. If the new workflow fails, the state can be destroyed (`terraform destroy`), the module can be fixed, and the process can be re-run. The old Lambda-based system can be kept available for emergency use. diff --git a/docs/tf-native.md b/docs/tf-native.md deleted file mode 100644 index a5e5a9cf..00000000 --- a/docs/tf-native.md +++ /dev/null @@ -1,72 +0,0 @@ -# Plan for Migrating to a Terraform-Native GitHub Repository Management Workflow - -This document outlines the plan to transition from the current Lambda/Ansible-based repository management system to a purely Terraform-native approach, leveraging the `terraform-github-repo` module. - -## 1. 
Current State Analysis - -Our current workflow for managing GitHub repositories relies on a Lambda function that executes Ansible playbooks. This setup has the following key characteristics: - -* **Technology Stack:** AWS Lambda, Python, Ansible. -* **Process:** A Lambda function is triggered, which in turn runs an Ansible playbook to configure GitHub repositories. -* **Drawbacks:** - * **Complexity:** Involves multiple technologies (Lambda, Ansible, Python) which increases the maintenance overhead. - * **State Management:** Managing state across these different systems can be challenging. - * **Less Declarative:** While Ansible is declarative, the overall workflow is more imperative compared to a pure Terraform solution. - -## 2. Proposed Solution: Terraform-Native Workflow - -We will replace the existing Lambda/Ansible setup with a new workflow centered around the `terraform-github-repo` Terraform module. This module provides a comprehensive set of resources for managing GitHub repositories declaratively. - -* **Technology Stack:** Terraform. -* **Process:** A new Terraform configuration will be created that uses the `terraform-github-repo` module to define the desired state of our GitHub repositories. -* **Benefits:** - * **Simplicity:** A single technology (Terraform) will be used for infrastructure and repository management. - * **Declarative:** The entire configuration will be declarative, making it easier to understand and manage. - * **State Management:** Terraform's state management will provide a single source of truth for the state of our repositories. - * **Reusability:** The `terraform-github-repo` module is a reusable component that can be used across multiple projects. - -## 3. Migration Plan - -The migration will be performed in the following phases: - -### Phase 1: Scoping and Setup - -1. **Identify Ansible Playbook Functionality:** Analyze the existing Ansible playbooks to identify all the repository configuration tasks they perform. 
This includes: - * Creating repositories. - * Managing collaborators and permissions. - * Configuring branch protection rules. - * Managing repository files (e.g., `CODEOWNERS`, license files). - * Setting up webhooks and deploy keys. - -2. **Map to Terraform Resources:** For each Ansible task, identify the corresponding resource in the `terraform-github-repo` module. - -3. **Setup New Terraform Project:** Create a new Git repository for the Terraform configuration that will manage the GitHub repositories. This repository will contain the new Terraform code. - -### Phase 2: Implementation - -1. **Develop Terraform Configuration:** Write the Terraform code that uses the `terraform-github-repo` module to replicate the functionality of the Ansible playbooks. The configuration should be modular and easily extensible. - -2. **Import Existing Resources:** Use `terraform import` to bring the existing GitHub repositories and their configurations under the management of the new Terraform configuration. This is a critical step to ensure a seamless transition without disrupting existing repositories. - -### Phase 3: Testing and Validation - -1. **Dry Run:** Perform a `terraform plan` to verify that the new configuration matches the existing state of the repositories. - -2. **Targeted Application:** Apply the new configuration to a non-critical repository first to validate the process. - -3. **Full Application:** Once the process is validated, apply the configuration to all repositories. - -### Phase 4: Decommissioning - -1. **Disable Lambda:** Disable the existing Lambda function to prevent it from making any further changes to the repositories. - -2. **Monitor:** Monitor the repositories for any unexpected changes or issues. - -3. **Remove Old Infrastructure:** Once the new system is stable, decommission the Lambda function and the associated Ansible playbooks. - -## 4. Rollback Plan - -In case of any issues, we can roll back to the previous system by: - -1. 
**Re-enabling the Lambda function.** -2. **Removing the new Terraform configuration from the state file using `terraform state rm`.** diff --git a/events/service-catalog-event.json b/events/service-catalog-event.json deleted file mode 100644 index 31c84566..00000000 --- a/events/service-catalog-event.json +++ /dev/null @@ -1,57 +0,0 @@ -{ - "version": "0", - "id": "12345678-1234-1234-1234-123456789012", - "detail-type": "Service Catalog Product Provisioning", - "source": "aws.servicecatalog", - "account": "123456789012", - "time": "2024-01-01T12:00:00Z", - "region": "us-east-1", - "resources": [ - "arn:aws:catalog:us-east-1:123456789012:portfolio/port-abcdefghijk" - ], - "detail": { - "eventName": "ProvisionProduct", - "requestId": "12345678-1234-1234-1234-123456789012", - "provisionedProductId": "pp-abcdefghijklm", - "provisionedProductName": "example-template-repos-cluster", - "productId": "prod-abcdefghijk", - "provisioningArtifactId": "pa-abcdefghijk", - "recordId": "rec-abcdefghijk", - "status": "SUCCEEDED", - "outputs": [ - { - "OutputKey": "RepositoryName", - "OutputValue": "example-template-repos-cluster" - }, - { - "OutputKey": "ClusterName", - "OutputValue": "example-cluster-dev" - } - ], - "provisioningParameters": { - "project_name": "example-template-repos-cluster", - "owning_team": "platform-team", - "account_name": "dev-account", - "aws_region": "us-gov-west-1", - "cluster_mailing_list": "eks-admins@example.com", - "cluster_name": "example-cluster-dev", - "eks_instance_disk_size": "100", - "eks_ng_desired_size": "2", - "eks_ng_max_size": "10", - "eks_ng_min_size": "2", - "environment": "development", - "environment_abbr": "dev", - "finops_project_name": "example_project", - "finops_project_number": "fp00000001", - "finops_project_role": "example_project_app", - "organization": "example:dept:team", - "vpc_domain_name": "dev.example.com", - "vpc_name": "vpc-dev", - "tags": { - "managed_by": "terraform", - "owner": "platform-team", - "slim:schedule": 
"8:00-17:00" - } - } - } -} diff --git a/events/test-event.json b/events/test-event.json deleted file mode 100644 index 99a078d4..00000000 --- a/events/test-event.json +++ /dev/null @@ -1,47 +0,0 @@ -{ - "version": "0", - "id": "12345678-1234-1234-1234-123456789012", - "detail-type": "Service Catalog Product Provisioning", - "source": "aws.servicecatalog", - "account": "123456789012", - "time": "2024-01-01T12:00:00Z", - "region": "us-east-1", - "resources": [ - "arn:aws:catalog:us-east-1:123456789012:portfolio/port-abcdefghijk" - ], - "detail": { - "eventName": "ProvisionProduct", - "requestId": "12345678-1234-1234-1234-123456789012", - "provisionedProductId": "pp-abcdefghijklm", - "provisionedProductName": "example-template-repos-cluster", - "productId": "prod-abcdefghijk", - "provisioningArtifactId": "pa-abcdefghijk", - "recordId": "rec-abcdefghijk", - "status": "SUCCEEDED", - "provisioningParameters": { - "project_name": "example-template-repos-cluster", - "owning_team": "platform-team", - "account_name": "dev-account", - "aws_region": "us-gov-west-1", - "cluster_mailing_list": "eks-admins@example.com", - "cluster_name": "example-cluster-dev", - "eks_instance_disk_size": "100", - "eks_ng_desired_size": "2", - "eks_ng_max_size": "10", - "eks_ng_min_size": "2", - "environment": "development", - "environment_abbr": "dev", - "finops_project_name": "example_project", - "finops_project_number": "fp00000001", - "finops_project_role": "example_project_app", - "organization": "example:dept:team", - "vpc_domain_name": "dev.example.com", - "vpc_name": "vpc-dev", - "tags": { - "managed_by": "terraform", - "owner": "platform-team", - "slim:schedule": "8:00-17:00" - } - } - } -} \ No newline at end of file diff --git a/template_automation/ROADMAP.md b/template_automation/ROADMAP.md deleted file mode 100644 index f0a9a53c..00000000 --- a/template_automation/ROADMAP.md +++ /dev/null @@ -1 +0,0 @@ -in terraform-aws-template-automation, can we setup IAM access rules for the 
lambda function? \ No newline at end of file diff --git a/template_automation/old.py b/template_automation/old.py deleted file mode 100644 index eb1a1a81..00000000 --- a/template_automation/old.py +++ /dev/null @@ -1,473 +0,0 @@ -"""GitHub client module for template automation. - -This module provides the GitHubClient class which handles all interactions with the GitHub API -for template repository automation using the requests library directly. -""" - -import base64 -import json -import logging -import time -import urllib.parse -from typing import List, Optional, Dict, Any, Union - -import requests - -logger = logging.getLogger(__name__) - -class GitHubClient: - """A client for interacting with GitHub's API in the context of template automation. - - This class provides methods for template repository operations including: - - Creating repositories from templates - - Managing repository contents - - Setting up team access - - Configuring repository settings - - Attributes: - api_base_url (str): Base URL for the GitHub API - token (str): GitHub authentication token - org_name (str): GitHub organization name - commit_author_name (str): Name to use for automated commits - commit_author_email (str): Email to use for automated commits - verify_ssl (bool): Whether to verify SSL certificates - - Example: - ```python - client = GitHubClient( - api_base_url="https://api.github.com", - token="ghp_...", - org_name="my-org", - commit_author_name="Template Bot", - commit_author_email="bot@example.com" - ) - - repo = client.create_repository_from_template( - template_repo_name="template-service", - new_repo_name="new-service", - private=True - ) - ``` - """ - - def __init__( - self, - api_base_url: str, - token: str, - org_name: str, - commit_author_name: str = "Template Automation", - commit_author_email: str = "automation@example.com", - verify_ssl: bool = True - ): - """Initialize a new GitHub client. 
- - Args: - api_base_url: Base URL for the GitHub API - token: GitHub authentication token - org_name: GitHub organization name - commit_author_name: Name to use for automated commits - commit_author_email: Email to use for automated commits - verify_ssl: Whether to verify SSL certificates - """ - self.api_base_url = api_base_url.rstrip('/') - self.token = token - self.org_name = org_name - self.commit_author_name = commit_author_name - self.commit_author_email = commit_author_email - self.verify_ssl = verify_ssl - - # Create session for connection reuse - self.session = requests.Session() - self.session.headers.update({ - 'Authorization': f'token {token}', - 'Accept': 'application/vnd.github.v3+json', - 'User-Agent': 'Template-Automation-Lambda' - }) - - # Log initialization - logger.info(f"Initialized GitHub client for org: {org_name} (SSL verify: {verify_ssl})") - - def _request(self, method: str, url: str, **kwargs) -> Dict[str, Any]: - """Make a request to the GitHub API. - - Args: - method: HTTP method (GET, POST, PATCH, PUT, DELETE) - url: URL path or full URL to request - **kwargs: Additional arguments to pass to requests - - Returns: - Response data as a dictionary - - Raises: - requests.exceptions.RequestException: On request errors - """ - # Prepend base URL if not already an absolute URL - if not url.startswith('http'): - url = f"{self.api_base_url}{url}" - - # Set SSL verification - kwargs['verify'] = self.verify_ssl - - # Log the request - logger.debug(f"GitHub API {method} request: {url}") - - # Make the request - response = self.session.request(method, url, **kwargs) - - # Raise exception for error status codes - response.raise_for_status() - - # Return JSON data for non-empty responses - if response.text: - return response.json() - return {} - - def get_repository( - self, - repo_name: str, - create: bool = False, - owning_team: Optional[str] = None - ) -> Dict[str, Any]: - """Get or create a GitHub repository with optional team permissions. 
- - Args: - repo_name: The name of the repository to retrieve or create - create: Whether to create the repository if it doesn't exist - owning_team: The name of the GitHub team to grant admin access - - Returns: - The repository data - """ - try: - # Try to get the repository - url = f"/repos/{self.org_name}/{repo_name}" - repo = self._request("GET", url) - logger.info(f"Found existing repository: {repo_name}") - - if owning_team: - self.set_team_permission(repo_name, owning_team, "admin") - - return repo - except requests.exceptions.HTTPError as e: - if e.response.status_code == 404 and create: - logger.info(f"Creating repository {repo_name}") - - # Create a new repository - url = f"/orgs/{self.org_name}/repos" - repo = self._request("POST", url, json={ - "name": repo_name, - "private": True, - "auto_init": True, - "allow_squash_merge": True, - "allow_merge_commit": True, - "allow_rebase_merge": True, - "delete_branch_on_merge": True - }) - - # Wait for repository initialization - max_retries = 100 - retry_delay = 1 - for _ in range(max_retries): - try: - self.get_branch(repo_name, "main") - break - except requests.exceptions.HTTPError: - time.sleep(retry_delay) - else: - raise Exception(f"Repository {repo_name} initialization timed out") - - if owning_team: - self.set_team_permission(repo_name, owning_team, "admin") - - return repo - raise - - def get_branch(self, repo_name: str, branch_name: str) -> Dict[str, Any]: - """Get branch information. - - Args: - repo_name: Name of the repository - branch_name: Name of the branch - - Returns: - Branch data - """ - url = f"/repos/{self.org_name}/{repo_name}/branches/{branch_name}" - return self._request("GET", url) - - def get_default_branch(self, repo_name: str) -> str: - """Get the default branch name of a repository. 
- - Args: - repo_name: Name of the repository - - Returns: - Default branch name (usually 'main' or 'master') - """ - repo = self.get_repository(repo_name) - return repo["default_branch"] - - def create_branch(self, repo_name: str, branch_name: str, from_ref: str = "main") -> None: - """Create a new branch in the repository. - - Args: - repo_name: Name of the repository - branch_name: Name of the branch to create - from_ref: Reference to create branch from - """ - # Get the SHA of the source branch - source_branch = self.get_branch(repo_name, from_ref) - commit_sha = source_branch["commit"]["sha"] - - # Create the new branch - url = f"/repos/{self.org_name}/{repo_name}/git/refs" - self._request("POST", url, json={ - "ref": f"refs/heads/{branch_name}", - "sha": commit_sha - }) - - logger.info(f"Created branch {branch_name} in {repo_name}") - - def create_reference(self, repo_name: str, ref: str, sha: str) -> None: - """Create a Git reference. - - Args: - repo_name: Name of the repository - ref: The name of the reference - sha: The SHA1 value to set this reference to - """ - url = f"/repos/{self.org_name}/{repo_name}/git/refs" - self._request("POST", url, json={ - "ref": ref, - "sha": sha - }) - - logger.info(f"Created reference {ref} in {repo_name}") - - def update_reference(self, repo_name: str, ref: str, sha: str, force: bool = False) -> None: - """Update a Git reference. 
- - Args: - repo_name: Name of the repository - ref: The name of the reference without 'refs/' prefix - sha: The SHA1 value to set this reference to - force: Force update if not a fast-forward update - """ - url = f"/repos/{self.org_name}/{repo_name}/git/refs/{ref}" - self._request("PATCH", url, json={ - "sha": sha, - "force": force - }) - - logger.info(f"Updated reference {ref} in {repo_name}") - - def write_file( - self, - repo: Dict[str, Any], - path: str, - content: str, - branch: str = "main", - commit_message: Optional[str] = None - ) -> Dict[str, Any]: - """Write or update a file in a repository. - - Args: - repo: The repository object - path: Path where to create/update the file - content: Content to write to the file - branch: Branch to commit to - commit_message: Commit message to use - - Returns: - The created/updated file content - """ - repo_name = repo["name"] - content_bytes = content.encode("utf-8") - content_base64 = base64.b64encode(content_bytes).decode("utf-8") - - # Try to get the existing file to check if it exists - try: - file = self.get_file_contents(repo_name, path, branch) - # Update existing file - url = f"/repos/{self.org_name}/{repo_name}/contents/{path}" - result = self._request("PUT", url, json={ - "message": commit_message or f"Update {path}", - "content": content_base64, - "sha": file["sha"], - "branch": branch, - "committer": { - "name": self.commit_author_name, - "email": self.commit_author_email - } - }) - logger.info(f"Updated file {path} in repo {repo_name}") - return result["content"] - except requests.exceptions.HTTPError as e: - if e.response.status_code == 404: - # Create new file - url = f"/repos/{self.org_name}/{repo_name}/contents/{path}" - result = self._request("PUT", url, json={ - "message": commit_message or f"Create {path}", - "content": content_base64, - "branch": branch, - "committer": { - "name": self.commit_author_name, - "email": self.commit_author_email - } - }) - logger.info(f"Created new file {path} in repo 
{repo_name}") - return result["content"] - raise - - def get_file_contents(self, repo_name: str, path: str, ref: str = "main") -> Dict[str, Any]: - """Get the contents of a file in a repository. - - Args: - repo_name: Name of the repository - path: Path to the file - ref: Branch, tag, or commit SHA - - Returns: - File data - """ - url = f"/repos/{self.org_name}/{repo_name}/contents/{path}" - params = {"ref": ref} - return self._request("GET", url, params=params) - - def read_file(self, repo: Dict[str, Any], path: str, ref: str = "main") -> str: - """Read a file from a repository. - - Args: - repo: The repository object - path: Path to the file to read - ref: Git reference (branch, tag, commit) to read from - - Returns: - The file contents as a string - """ - repo_name = repo["name"] - file = self.get_file_contents(repo_name, path, ref) - content = base64.b64decode(file["content"]).decode("utf-8") - return content - - def create_pull_request( - self, - repo_name: str, - title: str, - body: str, - head_branch: str, - base_branch: str = "main" - ) -> Dict[str, Any]: - """Create a pull request in a repository. - - Args: - repo_name: Name of the repository - title: Title of the pull request - body: Description/body of the pull request - head_branch: Branch containing the changes - base_branch: Branch to merge into - - Returns: - The created pull request object - """ - url = f"/repos/{self.org_name}/{repo_name}/pulls" - pr = self._request("POST", url, json={ - "title": title, - "body": body, - "head": head_branch, - "base": base_branch, - "maintainer_can_modify": True - }) - - logger.info(f"Created PR #{pr['number']} in {repo_name}: {title}") - return pr - - def trigger_workflow( - self, - repo_name: str, - workflow_id: str, - ref: str, - inputs: Optional[Dict[str, Any]] = None - ) -> None: - """Trigger a GitHub Actions workflow. 
- - Args: - repo_name: Name of the repository - workflow_id: ID or filename of the workflow - ref: Git reference to run the workflow on - inputs: Input parameters for the workflow - """ - url = f"/repos/{self.org_name}/{repo_name}/actions/workflows/{workflow_id}/dispatches" - workflow_inputs = inputs if inputs is not None else {} - - self._request("POST", url, json={ - "ref": ref, - "inputs": workflow_inputs - }) - - logger.info(f"Triggered workflow {workflow_id} in {repo_name} on {ref}") - - def set_team_permission(self, repo_name: str, team_name: str, permission: str) -> None: - """Set a team's permission on a repository. - - Args: - repo_name: Name of the repository - team_name: Name of the team - permission: Permission level ('pull', 'push', 'admin', 'maintain', 'triage') - """ - url = f"/orgs/{self.org_name}/teams/{team_name}/repos/{self.org_name}/{repo_name}" - self._request("PUT", url, json={"permission": permission}) - - logger.info(f"Set {team_name} permission on {repo_name} to {permission}") - - def update_repository_topics(self, repo_name: str, topics: List[str]) -> None: - """Update the topics of a repository. - - Args: - repo_name: Name of the repository - topics: List of topics to set - """ - # GitHub API requires a special media type for repository topics - headers = {"Accept": "application/vnd.github.mercy-preview+json"} - url = f"/repos/{self.org_name}/{repo_name}/topics" - - self._request("PUT", url, json={"names": topics}, headers=headers) - - logger.info(f"Updated topics for {repo_name}: {topics}") - - def create_repository_from_template( - self, - template_repo_name: str, - new_repo_name: str, - private: bool = True, - description: Optional[str] = None, - topics: Optional[List[str]] = None - ) -> Dict[str, Any]: - """Create a new repository from a template. 
- - Args: - template_repo_name: Name of the template repository - new_repo_name: Name for the new repository - private: Whether the new repository should be private - description: Description for the new repository - topics: List of topics to add to the repository - - Returns: - The newly created repository - """ - url = f"/repos/{self.org_name}/{template_repo_name}/generate" - - # Create repository from template - new_repo = self._request("POST", url, json={ - "name": new_repo_name, - "owner": self.org_name, - "description": description or f"Repository created from template: {template_repo_name}", - "private": private - }) - - # Add topics if provided - if topics: - self.update_repository_topics(new_repo_name, topics) - - logger.info(f"Created new repository: {new_repo_name} from template: {template_repo_name}") - return new_repo \ No newline at end of file diff --git a/test_service_catalog.py b/test_service_catalog.py deleted file mode 100755 index 18435f23..00000000 --- a/test_service_catalog.py +++ /dev/null @@ -1,71 +0,0 @@ -#!/usr/bin/env python3 -"""Test script for Service Catalog event parsing.""" - -import json -import sys -from pathlib import Path - -# Add parent directory to path -sys.path.insert(0, str(Path(__file__).parent)) - -from template_automation.app import ServiceCatalogInput - - -def test_service_catalog_parsing(): - """Test parsing of Service Catalog event.""" - - # Load test event - with open('events/service-catalog-event.json', 'r') as f: - event = json.load(f) - - print("Testing Service Catalog event parsing...") - print("=" * 60) - - # Extract provisioning parameters - if 'detail' not in event: - print("❌ ERROR: Event missing 'detail' field") - return False - - detail = event['detail'] - - if 'provisioningParameters' not in detail: - print("❌ ERROR: Event detail missing 'provisioningParameters' field") - return False - - provisioning_params = detail['provisioningParameters'] - print(f"✓ Found {len(provisioning_params)} provisioning 
parameters") - print() - - # Parse with Pydantic model - try: - service_catalog_input = ServiceCatalogInput(**provisioning_params) - print("✓ ServiceCatalogInput validation successful") - print(f" - project_name: {service_catalog_input.project_name}") - print(f" - owning_team: {service_catalog_input.owning_team}") - print() - except Exception as e: - print(f"❌ ERROR: Validation failed: {e}") - return False - - # Convert to template settings - try: - template_settings = service_catalog_input.to_template_settings() - print("✓ Converted to template settings format:") - print(f" - attrs keys: {list(template_settings['attrs'].keys())}") - print(f" - tags keys: {list(template_settings['tags'].keys())}") - print() - print("Full template_settings:") - print(json.dumps(template_settings, indent=2)) - print() - except Exception as e: - print(f"❌ ERROR: Conversion failed: {e}") - return False - - print("=" * 60) - print("✓ All tests passed!") - return True - - -if __name__ == "__main__": - success = test_service_catalog_parsing() - sys.exit(0 if success else 1)
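The deleted `test_service_catalog.py` above exercised parsing of `provisioningParameters` out of an EventBridge event. With the move to a CloudFormation Custom Resource trigger, the equivalent payload arrives in the event's `ResourceProperties` instead. The sketch below is illustrative only — `extract_parameters` is a hypothetical helper, not code from this repository — but the parameter names mirror the deleted `events/service-catalog-event.json` fixture, and the `ServiceToken` key is the one CloudFormation always injects alongside user-supplied properties:

```python
import json


def extract_parameters(event: dict) -> dict:
    """Return user-supplied provisioning parameters from a Custom Resource event.

    Hypothetical helper: CloudFormation places template parameters under
    ResourceProperties and adds a ServiceToken key (the Lambda ARN), which
    is not a provisioning parameter and is dropped here.
    """
    props = dict(event.get("ResourceProperties", {}))
    props.pop("ServiceToken", None)
    return props


# Example event shaped like a CloudFormation "Create" request; the ARN and
# parameter values are placeholders taken from the deleted test fixture.
event = {
    "RequestType": "Create",
    "ResourceProperties": {
        "ServiceToken": "arn:aws-us-gov:lambda:us-gov-west-1:123456789012:function:repo-generator",
        "project_name": "example-template-repos-cluster",
        "owning_team": "platform-team",
    },
}

params = extract_parameters(event)
print(json.dumps(params, sort_keys=True))
```

Under this assumption, the deleted test's Pydantic validation step (`ServiceCatalogInput(**provisioning_params)`) would apply unchanged to the returned dict, since only the event envelope differs between the two trigger styles.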