Multi-mount Volume Support Over AWS EFS
This article will share one of the most interesting experiences we’ve recently had – sharing files between two pods inside a Kubernetes cluster.
The challenge lay not in sharing the files themselves but in enabling multi-mount volume support over AWS EFS. What makes our case interesting is the solution: attaching the EFS volume to EKS using Terraform.
Below, our team highlights the pitfalls and walks through a step-by-step guide to setting up multi-mount volume support with Amazon EFS and Terraform. Now, to the details.
Challenge
The backstory is fairly standard: an old problem requiring a new solution. Our engineers had two different pods inside a Kubernetes cluster; one generated files, while the other consumed them. Evidently, the files had to be shared between these pods.
We chose not to synchronise files between pods over the network ourselves but to use a network file system. And, of course, EFS was the first idea for handling this case.
However, this decision brought another challenge: EKS doesn't support attaching a multi-mount EFS volume out of the box, so we had to enable EFS support manually. But before proceeding to the juiciest part, let's devote a couple of lines to the tech stack we used.
Amazon EFS
Amazon initially released EFS in 2016 and made it fully available in all public AWS regions by 2019. EFS stands for Elastic File System; it's a scalable and flexible cloud storage service.
Amazon EFS provides a simple interface and a pay-per-use model (with no minimum fee), and it works with both AWS services and on-premises resources. It grows and shrinks automatically as you add and remove files, removing the need for manual capacity provisioning.
Besides its scalability, one of the main Amazon EFS advantages is that you avoid the complexity of deploying and maintaining file system configurations: it securely manages the whole file storage infrastructure for you.
Amazon EFS is a perfect fit for multi-attach volumes. However, as mentioned, EKS doesn't support it out of the box. That's why our engineers needed Helm and the Kubernetes CSI.
While Helm is a familiar name to most, Kubernetes CSI remains rather unexplored territory for many engineers. Time to change that, don't you think? 😉
Kubernetes CSI
Any volume type needs an interface Kubernetes can use to manage it, and some volume types are supported out of the box. If the default volume types don't satisfy you, or the one you need isn't available at all (like Amazon EFS), consider a Kubernetes CSI driver.
CSI is a Kubernetes component – a controller that allows its API to communicate with the storage API to interact with volumes. In other words, it’s a bridge between k8s and storage.
CSI was introduced in Kubernetes as alpha in v1.9 and reached general availability in v1.13 in December 2018, marking a gradual move away from the built-in (in-tree) volume plugins. CSI was developed as a standard for exposing arbitrary block and file storage systems to containerised workloads on container orchestration systems like Kubernetes.
With the adoption of the CSI, the k8s volume layer becomes truly extensible. Using it, third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core code.
Kubernetes itself prefers CSI drivers, claiming they give k8s users more options for storage and make the whole system far more secure and reliable. If you want to dive into the CSI topic, read this article – and we’ll move on.
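To make the "bridge" idea a bit more concrete, here is what it looks like from the Kubernetes side: a StorageClass simply names a CSI driver as its provisioner, and Kubernetes hands volume requests over to that driver. The sketch below is illustrative only; efs.csi.aws.com is the driver we install later in this guide, and the parameters mirror the ones we'll set through Helm.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-efs-sc          # illustrative name
provisioner: efs.csi.aws.com    # the CSI driver that will handle volume operations
parameters:
  provisioningMode: efs-ap      # driver-specific parameters are passed through to the driver
  fileSystemId: fs-1122aabb     # placeholder EFS file system ID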
How to Enable Multi-mount Volume Support with EFS + Terraform
Hey there, truly interested and most patient readers. With the tech stack covered, it's high time we shared a detailed walkthrough of enabling multi-mount volume support with Amazon EFS and Terraform.
So, besides the EFS volume, you need the EFS CSI Driver, which lets Kubernetes work with Amazon EFS, and the permissions that allow it to do so.
Starting with the EFS volume allocation: you need the EFS file system itself, a Security Group (data protection is everything), and an EFS mount target. The latter is what allows your compute instances to mount the volume. Let's create them:
# Create a VPC to attach the EFS volume to. The EKS cluster must be in the same VPC
resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_efs_file_system" "efs" {
  creation_token = "my-product"
}

resource "aws_subnet" "subnet" {
  vpc_id            = aws_vpc.vpc.id
  availability_zone = "us-east-1a"
  cidr_block        = "10.0.1.0/24"
}

resource "aws_security_group" "allow_efs" {
  name        = "Allow EFS"
  description = "Allow EFS traffic"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    from_port = 0
    to_port   = 0
    # Recommended: restrict ingress to the EKS node Security Groups via security_groups to improve security
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

resource "aws_efs_mount_target" "efs" {
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = aws_subnet.subnet.id
  security_groups = [aws_security_group.allow_efs.id]
}
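A small optional addition: you'll need the file system ID again when filling in the CSI driver's values.yaml below, so it can be handy to expose it as a Terraform output (the output name here is just a suggestion):
# Optional: expose the EFS file system ID so it can be copied into values.yaml later
output "efs_file_system_id" {
  value = aws_efs_file_system.efs.id
}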
The next step is to grant your Kubernetes nodes permission to access EFS. At this point, you should already have an IAM role for the nodes; let's call this role eks_node_manager:
resource "aws_iam_policy" "efs" {
name = "efs-csi-driver"
path = "/"
description = "Policy for the EFS CSI driver"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"elasticfilesystem:DescribeAccessPoints",
"elasticfilesystem:DescribeFileSystems",
"elasticfilesystem:DescribeMountTargets",
"elasticfilesystem:CreateAccessPoint",
"elasticfilesystem:DeleteAccessPoint",
"ec2:DescribeAvailabilityZones"
]
Effect = "Allow"
Resource = "*"
},
]
})
}
resource "aws_iam_role_policy_attachment" "efs_node" {
role = aws_iam_role.eks_node_manager.name
policy_arn = aws_iam_policy.efs[0].arn
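The eks_node_manager role referenced above is assumed to already exist (it usually comes from whatever created your EKS node group). If you don't have one, a minimal sketch could look like the following; treat the name and details as illustrative:
# Hypothetical node role for illustration; most EKS setups already create one
resource "aws_iam_role" "eks_node_manager" {
  name = "eks_node_manager"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action    = "sts:AssumeRole"
        Effect    = "Allow"
        Principal = { Service = "ec2.amazonaws.com" }
      }
    ]
  })
}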
Last but not least: configuring the EFS CSI Driver Helm chart. Note that the driver supports both dynamic and static provisioning; we used dynamic provisioning in our case, and there is a difference between the two.
With static provisioning, the EFS file system needs to be created first; it can then be mounted inside a container as a persistent volume (PV) using the driver.
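For comparison, a statically provisioned volume would be declared roughly like this. This is only a sketch reusing the placeholder file system ID from later in the article, not part of our dynamic setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-static-pv            # illustrative name
spec:
  capacity:
    storage: 5Gi                 # required by the API; EFS itself is elastic
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany              # EFS supports many concurrent mounts
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-1122aabb    # the EFS file system ID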
With dynamic provisioning, you use a persistent volume claim (PVC) to provision a persistent volume (PV) on demand. When a PVC is created, the driver asks EFS to create an Access Point in the file system, which is then used to mount the PV.
And now, back to business. The values block responsible for the StorageClass is called storageClasses, and it's the only block we override in the values.yaml file:
storageClasses:
  # Add StorageClass resources like:
  - name: efs-sc
    annotations:
      # Uncomment to make this your default StorageClass
      # storageclass.kubernetes.io/is-default-class: "true"
    mountOptions:
      # Mount protocol
      - tls
    parameters:
      # Leave intact
      provisioningMode: efs-ap
      # ID of the EFS file system created in aws_efs_file_system.efs
      fileSystemId: fs-1122aabb
      directoryPerms: "700"
      gidRangeStart: "1000"
      gidRangeEnd: "2000"
      # Mount target path
      basePath: "/dynamic_provisioning"
    # Reclaim policy
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
Once the .yaml file is ready, you can deploy the AWS EFS CSI Driver Helm chart with Terraform:
resource "helm_release" "efs" {
name = "aws-efs-csi-driver"
repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
chart = "aws-efs-csi-driver"
version = "2.2.7"
values = [
"${file("values.yaml")}"
]
}
Now you can create a PVC manifest that references the efs-sc StorageClass defined above to allocate EFS-backed volumes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  ...
spec:
  accessModes:
    - ReadWriteMany  # required so multiple pods can mount the volume
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi  # the field is required by the API, though EFS doesn't enforce a size
Congrats! You can now use the same PVC to attach the volume to different Kubernetes resources:
Manifest 1:
apiVersion: v1
kind: Pod
metadata:
  name: test-efs1
spec:
  containers:
    - image: ubuntu:latest
      name: test-container1
      # Keep the container running so you can write files to the shared volume
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /test-efs
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: test-pvc
Manifest 2:
apiVersion: v1
kind: Pod
metadata:
  name: test-efs2
spec:
  containers:
    - image: ubuntu:latest
      name: test-container2
      # Keep the container running so you can read the shared files
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /test-efs
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: test-pvc
From now on, any file you write to /test-efs in the test-efs1 pod will also be visible under /test-efs in the test-efs2 pod, and vice versa.
Wrapping Up
This guide distils our hands-on experience and a thorough study of the documentation. As a result, we handled all the challenges and launched the project on time, contributing to the client's business growth and a positive client experience.
Visit our website or our Medium blog to find more valuable tech information and priceless business insights.