diff --git a/examples/full-cluster/efs/README.md b/examples/full-cluster/efs/README.md
index 7d589b0..e69de29 100644
--- a/examples/full-cluster/efs/README.md
+++ b/examples/full-cluster/efs/README.md
@@ -1,164 +0,0 @@
-# EFS
-
-This sets up the EFS resources needed for persistent volumes. See [this](README.efs.md) for more details.
-
-## Links
-
-* https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
-* https://github.com/kubernetes-sigs/aws-efs-csi-driver
-* https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/433
-* https://github.com/hashicorp/terraform-provider-kubernetes/issues/723#issuecomment-679423792
-* https://dev.to/vidyasagarmsc/update-multiple-lines-in-a-yaml-file-49fb
-
-## Initialize
-
-* Proxy setup
-
-A proxy is needed because the system may not have access to the `registry.terraform.io` site directly,
-and if it reaches it indirectly, it may not be able to handle the proxy redirect. You may not need this,
-but if you get errors from `tf-init`, this is the first thing to set up.
-
-```shell
-export HTTP_PROXY=http://proxy.tco.census.gov:3128
-export HTTPS_PROXY=http://proxy.tco.census.gov:3128
-```
-
-## Terraform Automated
-
-A `tf-run.data` file exists here, so the simplest way to deploy is with the `tf-run.sh` script.
-
-* copy the `remote_state.yml` from the parent and update `directory` to be the current directory
-* run `tf-run.sh`
-
-```console
-% tf-run.sh apply
-```
-
-* example of the `tf-run.sh` steps
-
-This is part of a larger cluster configuration, so at the end of the run it indicates another directory
-to visit when done.
-
-```console
-% tf-run.sh list
-* running action=plan
-* START: tf-run.sh v1.1.2 start=1636558187 end= logfile=logs/run.plan.20211110.1636558187.log (not-created)
-* reading from tf-run.data
-* read 7 entries from tf-run.data -> list
-** START: start=1636558187
-* 1 COMMAND> tf-directory-setup.py -l none -f
-* 2 COMMAND> setup-new-directory.sh
-* 3 COMMAND> tf-init -upgrade
-* 4 POLICY> (*.tf) aws_iam_policy.efs-policy
-* 4 tf-plan -target=aws_iam_policy.efs-policy
-* 5 tf-plan
-* 6 COMMAND> tf-directory-setup.py -l s3
-* 7 STOP> cd ../common-services and tf-run.sh apply
-** END: start=1636558187 end=1636558187 elapsed=0 logfile=logs/run.plan.20211110.1636558187.log (not-created)
-```
-
-It is highly recommended to use the `tf-run.sh` approach.
-
-## Terraform Manual
-
-```shell
-tf-directory-setup.py -l none
-setup-new-directory.sh
-tf-init
-```
-
-* Apply the EFS policy first (before the role)
-
-```shell
-tf-apply -target=aws_iam_policy.efs-policy
-```
-
-* Apply the rest
-
-This must be done from a system with the `skopeo` command, so RHEL 8 or later.
-
-To use a local install, the `efs/charts/` directory
-must be populated with the expected chart code (see [README.md](README.md)) outside of Terraform,
-much like the `.tf` files are created. Currently, since the box we run this from has internet access,
-we can deploy by pulling the Helm charts from the internet.
-
-```shell
-tf-apply
-tf-directory-setup.py -l s3
-```
-
-## Post Setup Examination
-
-The listing below (look at the `efs-csi-*` pods) shows what was set up. Your `kubectl` configuration file
-needs to be in place (one is extracted to `setup/kube.config` as part of this configuration).
-
-```console
-% kubectl --kubeconfig setup/kube.config get pods -n kube-system
-NAME                                  READY   STATUS    RESTARTS   AGE
-aws-node-j6n6z                        1/1     Running   1          27h
-aws-node-nmgqm                        1/1     Running   1          27h
-aws-node-t5ggn                        1/1     Running   1          27h
-aws-node-vxlvw                        1/1     Running   0          27h
-coredns-65bfc5645f-254kx              1/1     Running   0          29h
-coredns-65bfc5645f-zpvld              1/1     Running   0          29h
-efs-csi-controller-7c88dbd56d-chdkt   3/3     Running   0          3m36s
-efs-csi-controller-7c88dbd56d-hsws7   3/3     Running   0          3m36s
-efs-csi-node-4gjdh                    3/3     Running   0          3m36s
-efs-csi-node-g49r7                    3/3     Running   0          3m36s
-efs-csi-node-hq6q9                    3/3     Running   0          3m36s
-efs-csi-node-lcdmd                    3/3     Running   0          3m36s
-kube-proxy-dp9zl                      1/1     Running   0          27h
-kube-proxy-n9l75                      1/1     Running   0          27h
-kube-proxy-qrv2w                      1/1     Running   0          27h
-kube-proxy-zssvb                      1/1     Running   0          27h
-```
-
-* Create PVC Automated
-
-Use `persistent-volume.tf`, which is set up by default and should be applied as part of the final apply above.
-
-* Create PVC Manually
-
-```yaml
-# pvc.yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: efs-test3-claim
-spec:
-  accessModes:
-    - ReadWriteMany
-  volumeMode: Filesystem
-  resources:
-    requests:
-      storage: 25Gi
-  storageClassName: efs
-```
-
-* Examine the PV and PVC
-
-```console
-% kubectl get pv
-No resources found
-% kubectl get pvc
-No resources found in default namespace.
-% kubectl apply -f pvc.yaml
-persistentvolumeclaim/efs-test3-claim created
-% kubectl get pvc
-NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
-efs-test3-claim   Pending                                      efs            39s
-```
-
-* Describing the PVC
-
-```shell
-kubectl --kubeconfig setup/kube.config describe pvc efs-test3-claim
-```
-
-To patch the controller to use the regional STS endpoint (this is handled in the TF code):
-
-```shell
-kubectl --kubeconfig setup/kube.config -n kube-system set env deployment/efs-csi-controller AWS_STS_REGIONAL_ENDPOINTS=regional
-```
diff --git a/examples/full-cluster/efs/copy_images.tf b/examples/full-cluster/efs/copy_images.tf
index bf89085..f7e13be 100644
--- a/examples/full-cluster/efs/copy_images.tf
+++ b/examples/full-cluster/efs/copy_images.tf
@@ -47,6 +47,8 @@ resource "null_resource" "copy_images" {
   provisioner "local-exec" {
     command     = "${path.module}/copy_image.sh"
     environment = {
+      AWS_PROFILE          = var.profile
+      AWS_REGION           = local.region
       SOURCE_IMAGE         = format("%v/%v:%v", local.src_reg, each.value.image, each.value.tag)
       DESTINATION_IMAGE    = format("%v:%v", aws_ecr_repository.repository[each.key].repository_url, each.value.tag)
       DESTINATION_USERNAME = data.aws_ecr_authorization_token.token.user_name
diff --git a/examples/full-cluster/efs/locals.tf b/examples/full-cluster/efs/locals.tf
index 3042080..4b9ae5a 100644
--- a/examples/full-cluster/efs/locals.tf
+++ b/examples/full-cluster/efs/locals.tf
@@ -12,6 +12,6 @@ locals {
   subnet_ids           = local.parent_rs.cluster_subnet_ids
   cluster_worker_sg_id = local.parent_rs.cluster_worker_sg_id

-  oidc_provider_url    = local.parent_rs.oidc_provider_url
-  oidc_provider_arn    = local.parent_rs.oidc_provider_arn
+  oidc_provider_url = local.parent_rs.oidc_provider_url
+  oidc_provider_arn = local.parent_rs.oidc_provider_arn
 }
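
For context on the `copy_images.tf` hunk: the new `AWS_PROFILE` and `AWS_REGION` entries are exported into the environment of `copy_image.sh`, a script that is not included in this diff. Below is a minimal dry-run sketch of what such a script might look like using `skopeo`. The `SOURCE_IMAGE`, `DESTINATION_IMAGE`, and `DESTINATION_USERNAME` names match the hunk; the `DESTINATION_PASSWORD` variable, the placeholder image values, and the exact `skopeo` invocation are assumptions, not taken from the repository.

```shell
#!/bin/sh
# Hypothetical sketch of copy_image.sh -- the real script is not part of this diff.
# The Terraform local-exec block exports SOURCE_IMAGE, DESTINATION_IMAGE, and
# DESTINATION_USERNAME (shown in the hunk), plus AWS_PROFILE and AWS_REGION
# (added by the hunk) for any aws CLI calls the script may make.
set -eu

# Placeholder values standing in for what Terraform would export:
SOURCE_IMAGE="${SOURCE_IMAGE:-public.ecr.aws/example/efs-csi-driver:v1.3.4}"
DESTINATION_IMAGE="${DESTINATION_IMAGE:-123456789012.dkr.ecr.us-east-1.amazonaws.com/efs-csi-driver:v1.3.4}"
DESTINATION_USERNAME="${DESTINATION_USERNAME:-AWS}"
DESTINATION_PASSWORD="${DESTINATION_PASSWORD:-example-token}"   # assumed variable name

# SKOPEO="echo skopeo" makes this a dry run that prints the command;
# unset SKOPEO to perform the actual copy.
SKOPEO="${SKOPEO:-echo skopeo}"

# skopeo copies directly between registries; no local docker daemon is needed.
$SKOPEO copy \
  --dest-creds "${DESTINATION_USERNAME}:${DESTINATION_PASSWORD}" \
  "docker://${SOURCE_IMAGE}" \
  "docker://${DESTINATION_IMAGE}"
```

Because the provisioner only sets environment variables, a wrapper like this is the natural shape for the script: registry-to-registry copy via `skopeo`, with ECR credentials passed through `--dest-creds`. This also explains the RHEL 8+ requirement mentioned in the README, since `skopeo` ships with those releases.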