# EFS

This sets up the EFS resources needed for persistent volumes. See [README.efs.md](README.efs.md) for more details.

## Links

* https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
* https://github.com/kubernetes-sigs/aws-efs-csi-driver
* https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/433
* https://github.com/hashicorp/terraform-provider-kubernetes/issues/723#issuecomment-679423792
* https://dev.to/vidyasagarmsc/update-multiple-lines-in-a-yaml-file-49fb

## Initialize

* Proxy setup

A proxy is needed because the system may not have direct access to `registry.terraform.io`,
and an indirect route may not handle the proxy redirect. You may not need this, but if you get
errors from `tf-init`, this is the first thing to set up.

```shell
export HTTP_PROXY=http://proxy.tco.census.gov:3128
export HTTPS_PROXY=http://proxy.tco.census.gov:3128
```

## Terraform Automated

A `tf-run.data` file exists here, so the simplest way to implement this is with the `tf-run.sh` script.

* copy the `remote_state.yml` from the parent and update `directory` to be the current directory
* run `tf-run.sh`

```console
% tf-run.sh apply
```
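
The `remote_state.yml` referenced above is specific to the `tf-run.sh` tooling. As a sketch only, the copied file might look like this, with `directory` updated for this location (the field name is the only detail stated by this README; any other fields in the real file are not shown):

```yaml
# remote_state.yml — hypothetical sketch; only `directory` is described above
directory: examples/full-cluster/efs
```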

* example of the `tf-run.sh` steps

This is part of a larger cluster configuration, so at the end of the run it indicates the next directory
to visit.

```console
% tf-run.sh list
* running action=plan
* START: tf-run.sh v1.1.2 start=1636558187 end= logfile=logs/run.plan.20211110.1636558187.log (not-created)
* reading from tf-run.data
* read 7 entries from tf-run.data
> list
** START: start=1636558187
* 1 COMMAND> tf-directory-setup.py -l none -f
* 2 COMMAND> setup-new-directory.sh
* 3 COMMAND> tf-init -upgrade
* 4 POLICY> (*.tf) aws_iam_policy.efs-policy
* 4 tf-plan -target=aws_iam_policy.efs-policy
* 5 tf-plan
* 6 COMMAND> tf-directory-setup.py -l s3
* 7 STOP> cd ../common-services and tf-run.sh apply
** END: start=1636558187 end=1636558187 elapsed=0 logfile=logs/run.plan.20211110.1636558187.log (not-created)
```

It is highly recommended to use the `tf-run.sh` approach.

## Terraform Manual

```shell
tf-directory-setup.py -l none
setup-new-directory.sh
tf-init
```

* Apply the EFS policy first (before the role)

```shell
tf-apply -target=aws_iam_policy.efs-policy
```
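
The targeted policy is defined in this directory's `.tf` files. As a rough sketch (resource shape and policy name are assumptions; the actions mirror the example IAM policy documented by the aws-efs-csi-driver project linked above):

```hcl
# Hypothetical sketch of aws_iam_policy.efs-policy; the authoritative
# definition lives in this directory's .tf files.
resource "aws_iam_policy" "efs-policy" {
  name = "eks-efs-csi-policy" # name assumed

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "elasticfilesystem:DescribeAccessPoints",
          "elasticfilesystem:DescribeFileSystems",
          "elasticfilesystem:CreateAccessPoint",
          "elasticfilesystem:DeleteAccessPoint",
        ]
        Resource = "*"
      },
    ]
  })
}
```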

* Apply the rest

This must be done from a system with the `skopeo` command, so RHEL 8 or later.

To use the local install, the `efs/charts/` directory
must be populated with the expected chart code outside of Terraform (see [README.md](README.md)),
much like the `.tf` files are created. Currently, because the box we run this from has internet access,
we can deploy by pulling the Helm chart from the internet.

```shell
tf-apply
tf-directory-setup.py -l s3
```
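
Pulling the Helm chart from the internet, as described above, typically corresponds to a `helm_release` resource along these lines (a sketch, not this repo's actual code; the resource name and values are assumptions, and the repository URL is the upstream aws-efs-csi-driver chart repo):

```hcl
# Sketch of an internet-based Helm deploy of the EFS CSI driver.
resource "helm_release" "aws_efs_csi_driver" {
  name       = "aws-efs-csi-driver"
  namespace  = "kube-system"
  repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
  chart      = "aws-efs-csi-driver"
}
```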

## Post Setup Examination

The listing below (look at the `efs-csi-*` pods) shows what was set up. Your `kubectl` configuration file
needs to be in place (one is extracted to `setup/kube.config` as part of this configuration).

```console
% kubectl --kubeconfig setup/kube.config get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-node-j6n6z 1/1 Running 1 27h
aws-node-nmgqm 1/1 Running 1 27h
aws-node-t5ggn 1/1 Running 1 27h
aws-node-vxlvw 1/1 Running 0 27h
coredns-65bfc5645f-254kx 1/1 Running 0 29h
coredns-65bfc5645f-zpvld 1/1 Running 0 29h
efs-csi-controller-7c88dbd56d-chdkt 3/3 Running 0 3m36s
efs-csi-controller-7c88dbd56d-hsws7 3/3 Running 0 3m36s
efs-csi-node-4gjdh 3/3 Running 0 3m36s
efs-csi-node-g49r7 3/3 Running 0 3m36s
efs-csi-node-hq6q9 3/3 Running 0 3m36s
efs-csi-node-lcdmd 3/3 Running 0 3m36s
kube-proxy-dp9zl 1/1 Running 0 27h
kube-proxy-n9l75 1/1 Running 0 27h
kube-proxy-qrv2w 1/1 Running 0 27h
kube-proxy-zssvb 1/1 Running 0 27h
```

* Create PVC Automated

Use `persistent-volume.tf`, which is set up by default; the claim is created as part of the final apply above.
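
For reference, a Terraform claim equivalent to the manual YAML in the next step would look roughly like this (a sketch only; `persistent-volume.tf` in this directory is the authoritative version, and the resource label is assumed):

```hcl
# Sketch of a kubernetes-provider PVC matching the manual claim below.
resource "kubernetes_persistent_volume_claim" "efs_test" {
  metadata {
    name = "efs-test3-claim"
  }

  spec {
    access_modes       = ["ReadWriteMany"]
    storage_class_name = "efs"

    resources {
      requests = {
        storage = "25Gi"
      }
    }
  }
}
```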

* Create PVC Manually

```yaml
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-test3-claim
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 25Gi
  storageClassName: efs
```

* Examine the PV and PVC

```console
% kubectl get pv
No resources found
% kubectl get pvc
No resources found in default namespace.
% kubectl apply -f pvc.yaml
persistentvolumeclaim/efs-test3-claim created
% kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
efs-test3-claim Pending efs 39s
```

* Describing the PVC

```shell
kubectl --kubeconfig setup/kube.config describe pvc efs-test3-claim
```

To patch the controller to use the regional STS endpoint (this is already handled in the TF code):

```shell
kubectl --kubeconfig setup/kube.config -n kube-system set env deployment/efs-csi-controller AWS_STS_REGIONAL_ENDPOINTS=regional
```
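
The `set env` command above injects an environment variable into the controller's containers; the resulting Deployment fragment looks roughly like the following (the container name is an assumption for illustration):

```yaml
# Fragment of the efs-csi-controller Deployment after the patch above.
spec:
  template:
    spec:
      containers:
        - name: efs-plugin  # container name assumed
          env:
            - name: AWS_STS_REGIONAL_ENDPOINTS
              value: regional
```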
