
feat(infra): Adding new AWS Terraform Template Code (#5194)

* feat(infra): Adding new AWS Terraform Template Code

* Addressing greptile comments

* Applying some updates after the cubic reviews as well

* Adding one detail

* Removing unused var

* Addressing more cubic comments
Author: Justin Tahara
Date: 2025-08-14 16:47:15 -07:00
Committed by: GitHub
parent a605bd4ca4
commit ce8cb1112a
19 changed files with 1046 additions and 0 deletions

View File

@@ -0,0 +1,188 @@
# Onyx AWS modules
## Overview
This directory contains Terraform modules to provision the core AWS infrastructure for Onyx:
- `vpc`: Creates a VPC with public/private subnets sized for EKS
- `eks`: Provisions an Amazon EKS cluster, essential addons (EBS CSI driver, AWS Load Balancer Controller, metrics server, cluster autoscaler), and optional IRSA for S3 access
- `postgres`: Creates an Amazon RDS for PostgreSQL instance and returns a connection URL
- `redis`: Creates an ElastiCache for Redis replication group
- `s3`: Creates an S3 bucket (and VPC endpoint) for file storage
- `onyx`: A higher-level composition that wires the above modules together for a complete, opinionated stack
Use the `onyx` module if you want a working EKS + Postgres + Redis + S3 stack with sane defaults. Use the individual modules if you need more granular control.
## Quickstart (copy/paste)
The snippet below shows a minimal working example that:
- Sets up providers
- Waits for EKS to be ready
- Configures `kubernetes` and `helm` providers against the created cluster
- Provisions the full Onyx AWS stack via the `onyx` module
```hcl
locals {
  region = "us-west-2"
}

provider "aws" {
  region = local.region
}

module "onyx" {
  # If your root module is next to this modules/ directory:
  # source = "./modules/aws/onyx"
  # If referencing from this repo as a template, adjust the path accordingly.
  source = "./modules/aws/onyx"

  region            = local.region
  name              = "onyx" # used as a prefix and workspace-aware
  postgres_username = "pgusername"
  postgres_password = "your-postgres-password"

  # create_vpc = true # default true; set to false to use an existing VPC (see below)
}

resource "null_resource" "wait_for_cluster" {
  provisioner "local-exec" {
    command = "aws eks wait cluster-active --name ${module.onyx.cluster_name} --region ${local.region}"
  }
}

data "aws_eks_cluster" "eks" {
  name       = module.onyx.cluster_name
  depends_on = [null_resource.wait_for_cluster]
}

data "aws_eks_cluster_auth" "eks" {
  name       = module.onyx.cluster_name
  depends_on = [null_resource.wait_for_cluster]
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.eks.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.eks.token
  }
}

# Optional: expose handy outputs at the root module level
output "cluster_name" {
  value = module.onyx.cluster_name
}

output "postgres_connection_url" {
  value     = module.onyx.postgres_connection_url
  sensitive = true
}

output "redis_connection_url" {
  value     = module.onyx.redis_connection_url
  sensitive = true
}
```
Apply with:
```bash
terraform init
terraform apply
```
### Using an existing VPC
If you already have a VPC and subnets, disable VPC creation and provide IDs and CIDR:
```hcl
module "onyx" {
source = "./modules/aws/onyx"
region = local.region
name = "onyx"
postgres_username = "pgusername"
postgres_password = "your-postgres-password"
create_vpc = false
vpc_id = "vpc-xxxxxxxx"
private_subnets = ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"]
public_subnets = ["subnet-dddd", "subnet-eeee", "subnet-ffff"]
vpc_cidr_block = "10.0.0.0/16"
}
```
## What each module does
### `onyx`
- Orchestrates `vpc`, `eks`, `postgres`, `redis`, and `s3`
- Names resources using `name` and the current Terraform workspace
- Exposes convenient outputs:
- `cluster_name`: EKS cluster name
- `postgres_connection_url` (sensitive): `postgres://...`
- `redis_connection_url` (sensitive): the Redis primary endpoint hostname (Redis listens on port 6379)
Inputs (common):
- `name` (default `onyx`), `region` (default `us-west-2`), `tags`
- `postgres_username`, `postgres_password`
- `create_vpc` (default true) or existing VPC details
### `vpc`
- Builds a VPC sized for EKS with multiple private and public subnets
- Outputs: `vpc_id`, `private_subnets`, `public_subnets`, `vpc_cidr_block`
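If you only need the network layer, a minimal standalone invocation might look like the sketch below (it assumes the same `modules/aws/` layout as the quickstart; every input has a default, so only an illustrative name and tag are set):
```hcl
# Minimal sketch: standalone VPC using the module's default CIDR and subnet layout
module "vpc" {
  source   = "./modules/aws/vpc"
  vpc_name = "onyx-vpc-dev" # illustrative name
  tags     = { project = "onyx" }
}
```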
### `eks`
- Creates the EKS cluster and node groups
- Enables addons: EBS CSI driver, AWS Load Balancer Controller, metrics server, cluster autoscaler
- Optionally configures IRSA for S3 access to specified buckets
- Outputs: `cluster_name`, `cluster_endpoint`, `cluster_certificate_authority_data`, `s3_access_role_arn` (if created)
Key inputs include:
- `cluster_name`, `cluster_version` (default `1.33`)
- `vpc_id`, `subnet_ids`
- `public_cluster_enabled` (default true), `private_cluster_enabled` (default false)
- `cluster_endpoint_public_access_cidrs` (optional)
- `eks_managed_node_groups` (defaults include a main and a vespa-dedicated group with GP3 volumes)
- `s3_bucket_names` (optional list). If set, creates an IRSA role and Kubernetes service account for S3 access
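A rough standalone sketch (the `module.vpc` references assume a `vpc` module instance in the same root module, as in the sketch above):
```hcl
# Sketch: standalone EKS cluster consuming outputs from a vpc module instance
module "eks" {
  source       = "./modules/aws/eks"
  cluster_name = "onyx-dev" # illustrative name
  vpc_id       = module.vpc.vpc_id
  subnet_ids   = module.vpc.private_subnets

  # Optional: create an IRSA role and service account scoped to these buckets
  # s3_bucket_names = ["onyx-file-store-dev"]
}
```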
### `postgres`
- Amazon RDS for PostgreSQL with parameterized instance size, storage, version
- Accepts VPC/subnets and ingress CIDRs; returns a ready-to-use connection URL
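A minimal standalone sketch (the `module.vpc` references and `var.postgres_password` are assumptions about your root module):
```hcl
# Sketch: standalone RDS PostgreSQL instance reachable only from inside the VPC
module "postgres" {
  source        = "./modules/aws/postgres"
  identifier    = "onyx-postgres-dev" # illustrative name
  vpc_id        = module.vpc.vpc_id
  subnet_ids    = module.vpc.private_subnets
  ingress_cidrs = [module.vpc.vpc_cidr_block]
  username      = "pgusername"
  password      = var.postgres_password # assumed to be declared in your root module
}
```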
### `redis`
- ElastiCache for Redis (transit encryption enabled by default)
- Supports optional `auth_token` and instance sizing
- Outputs endpoint, port, and whether SSL is enabled
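A standalone sketch along the same lines:
```hcl
# Sketch: standalone ElastiCache Redis replication group (transit encryption on by default)
module "redis" {
  source        = "./modules/aws/redis"
  name          = "onyx-redis-dev" # illustrative name
  vpc_id        = module.vpc.vpc_id
  subnet_ids    = module.vpc.private_subnets
  ingress_cidrs = [module.vpc.vpc_cidr_block]

  # auth_token = var.redis_auth_token # optional; only usable with transit encryption enabled
}
```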
### `s3`
- Creates an S3 bucket for file storage and a gateway VPC endpoint for private access
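A standalone sketch; bucket names are global, so the one below is purely illustrative:
```hcl
# Sketch: file-store bucket plus a gateway VPC endpoint for private access
module "s3" {
  source      = "./modules/aws/s3"
  bucket_name = "onyx-file-store-dev-example" # illustrative; must be globally unique
  region      = "us-west-2"
  vpc_id      = module.vpc.vpc_id
}
```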
## Installing the Onyx Helm chart (after Terraform)
Once the cluster is active, deploy application workloads via Helm. You can use the chart in `deployment/helm/charts/onyx`.
```bash
# Set kubeconfig to your new cluster (if you're not using the TF providers for kubernetes/helm)
aws eks update-kubeconfig --name $(terraform output -raw cluster_name) --region ${AWS_REGION:-us-west-2}

kubectl create namespace onyx --dry-run=client -o yaml | kubectl apply -f -

# If using AWS S3 via IRSA created by the EKS module, consider disabling MinIO
# Replace the path below with the absolute or correct relative path to the onyx Helm chart
helm upgrade --install onyx /path/to/onyx/deployment/helm/charts/onyx \
  --namespace onyx \
  --set minio.enabled=false \
  --set serviceAccount.create=false \
  --set serviceAccount.name=onyx-s3-access
```
Notes:
- The EKS module can create an IRSA role plus a Kubernetes `ServiceAccount` named `onyx-s3-access` (by default in namespace `onyx`) when `s3_bucket_names` is provided. Use that service account in the Helm chart to avoid static S3 credentials.
- If you prefer MinIO inside the cluster, leave `minio.enabled=true` (default) and skip IRSA.
## Workflow tips
- First apply can be infra-only; once EKS is active, install the Helm chart.
- Use Terraform workspaces to create isolated environments; the `onyx` module automatically includes the workspace in resource names (see the sketch below).
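For example, a minimal workspace flow might look like this (the environment name is illustrative):
```bash
# Sketch: spin up an isolated "staging" environment
terraform workspace new staging
terraform apply   # resources get a "-staging" suffix, e.g. cluster name "onyx-staging"

# Switch back when done
terraform workspace select default
```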
## Security
- Database and Redis connection outputs are marked sensitive. Handle them carefully.
- When using IRSA, avoid storing long-lived S3 credentials in secrets.

View File

@@ -0,0 +1,149 @@
locals {
s3_bucket_arns = [for name in var.s3_bucket_names : {
bucket_arn = "arn:aws:s3:::${name}"
bucket_objects = "arn:aws:s3:::${name}/*"
}]
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = var.cluster_name
cluster_version = var.cluster_version
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
cluster_endpoint_public_access = var.public_cluster_enabled
cluster_endpoint_private_access = var.private_cluster_enabled
cluster_endpoint_public_access_cidrs = var.cluster_endpoint_public_access_cidrs
enable_cluster_creator_admin_permissions = true
eks_managed_node_group_defaults = {
ami_type = "AL2023_x86_64_STANDARD"
}
eks_managed_node_groups = var.eks_managed_node_groups
tags = var.tags
}
# https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons/
data "aws_iam_policy" "ebs_csi_policy" {
arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
}
module "irsa-ebs-csi" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "4.7.0"
create_role = true
role_name = "AmazonEKSTFEBSCSIRole-${module.eks.cluster_name}"
provider_url = module.eks.oidc_provider
role_policy_arns = [data.aws_iam_policy.ebs_csi_policy.arn]
oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]
depends_on = [module.eks]
}
# Create the EBS CSI Driver addon for volume provisioning.
resource "aws_eks_addon" "ebs-csi" {
cluster_name = module.eks.cluster_name
addon_name = "aws-ebs-csi-driver"
service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
tags = var.tags
depends_on = [module.eks]
}
# Create GP3 storage class for EBS volumes
resource "kubernetes_storage_class" "gp3_default" {
count = var.create_gp3_storage_class ? 1 : 0
metadata {
name = "gp3"
annotations = {
"storageclass.kubernetes.io/is-default-class" = "true"
}
}
storage_provisioner = "ebs.csi.aws.com"
reclaim_policy = "Delete"
volume_binding_mode = "WaitForFirstConsumer"
allow_volume_expansion = true
parameters = {
type = "gp3"
}
depends_on = [aws_eks_addon.ebs-csi]
}
# Create some important addons for the EKS cluster.
module "eks_blueprints_addons" {
source = "aws-ia/eks-blueprints-addons/aws"
version = "1.16.3"
cluster_name = module.eks.cluster_name
cluster_endpoint = module.eks.cluster_endpoint
cluster_version = module.eks.cluster_version
oidc_provider_arn = module.eks.oidc_provider_arn
enable_aws_load_balancer_controller = true
enable_karpenter = false
enable_metrics_server = true
enable_cluster_autoscaler = true
depends_on = [module.eks]
}
# Create IAM policy for S3 access (optional)
resource "aws_iam_policy" "s3_access_policy" {
count = length(var.s3_bucket_names) == 0 ? 0 : 1
name = "${module.eks.cluster_name}-s3-access-policy"
description = "Policy for S3 access from EKS cluster"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
]
Resource = flatten([
for a in local.s3_bucket_arns : [a.bucket_arn, a.bucket_objects]
])
}
]
})
}
# Create IAM role for S3 access using IRSA (optional)
module "irsa-s3-access" {
count = length(var.s3_bucket_names) == 0 ? 0 : 1
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "4.7.0"
create_role = true
role_name = "AmazonEKSTFS3AccessRole-${module.eks.cluster_name}"
provider_url = module.eks.oidc_provider
role_policy_arns = [aws_iam_policy.s3_access_policy[0].arn]
oidc_fully_qualified_subjects = ["system:serviceaccount:${var.irsa_service_account_namespace}:${var.irsa_service_account_name}"]
depends_on = [module.eks]
}
# Create Kubernetes service account for S3 access (optional)
resource "kubernetes_service_account" "s3_access" {
count = length(var.s3_bucket_names) == 0 ? 0 : 1
metadata {
name = var.irsa_service_account_name
namespace = var.irsa_service_account_namespace
annotations = {
"eks.amazonaws.com/role-arn" = module.irsa-s3-access[0].iam_role_arn
}
}
}

View File

@@ -0,0 +1,17 @@
output "cluster_name" {
value = module.eks.cluster_name
}
output "cluster_endpoint" {
value = module.eks.cluster_endpoint
}
output "cluster_certificate_authority_data" {
value = module.eks.cluster_certificate_authority_data
sensitive = true
}
output "s3_access_role_arn" {
description = "ARN of the IAM role for S3 access"
value = length(module.irsa-s3-access) > 0 ? module.irsa-s3-access[0].iam_role_arn : null
}

View File

@@ -0,0 +1,127 @@
variable "cluster_name" {
type = string
description = "The name of the cluster"
}
variable "cluster_version" {
type = string
description = "The EKS version of the cluster"
default = "1.33"
}
variable "vpc_id" {
type = string
description = "The ID of the VPC"
}
variable "subnet_ids" {
type = list(string)
description = "The IDs of the subnets"
}
variable "public_cluster_enabled" {
type = bool
description = "Whether to enable public cluster access"
default = true
}
variable "private_cluster_enabled" {
type = bool
description = "Whether to enable private cluster access"
default = false
}
variable "cluster_endpoint_public_access_cidrs" {
type = list(string)
description = "List of CIDR blocks allowed to access the public EKS API endpoint"
default = []
}
variable "eks_managed_node_groups" {
type = map(any)
description = "EKS managed node groups with EBS volume configuration"
default = {
# Main node group for all pods except Vespa
main = {
name = "main-node-group"
instance_types = ["r7i.4xlarge"]
min_size = 1
max_size = 5
# EBS volume configuration
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 50
volume_type = "gp3"
encrypted = true
delete_on_termination = true
iops = 3000
throughput = 125
}
}
}
# No taints for main node group
taints = []
}
# Vespa dedicated node group
vespa = {
name = "vespa-node-group"
instance_types = ["m6i.2xlarge"]
min_size = 1
max_size = 1
# Larger EBS volume for Vespa storage
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 100
volume_type = "gp3"
encrypted = true
delete_on_termination = true
iops = 3000
throughput = 125
}
}
}
# Taint to ensure only Vespa pods can schedule here
taints = [
{
key = "vespa-dedicated"
value = "true"
effect = "NO_SCHEDULE"
}
]
}
}
}
variable "tags" {
type = map(string)
description = "Tags to apply to the resources"
default = {}
}
variable "create_gp3_storage_class" {
type = bool
description = "Whether to create the gp3 storage class. The gp3 storage class will be patched to make it default and allow volume expansion."
default = true
}
variable "s3_bucket_names" {
type = list(string)
description = "List of S3 bucket names that workloads in this cluster are allowed to access via IRSA. If empty, no S3 access role/policy/service account will be created."
default = []
}
variable "irsa_service_account_namespace" {
type = string
description = "Namespace where the IRSA-enabled Kubernetes service account for S3 access will be created"
default = "onyx"
}
variable "irsa_service_account_name" {
type = string
description = "Name of the IRSA-enabled Kubernetes service account for S3 access"
default = "onyx-s3-access"
}

View File

@@ -0,0 +1,77 @@
locals {
workspace = terraform.workspace
name = var.name
merged_tags = merge(var.tags, { tenant = local.name, environment = local.workspace })
vpc_name = "${var.name}-vpc-${local.workspace}"
cluster_name = "${var.name}-${local.workspace}"
bucket_name = "${var.name}-file-store-${local.workspace}"
redis_name = "${var.name}-redis-${local.workspace}"
postgres_name = "${var.name}-postgres-${local.workspace}"
vpc_id = var.create_vpc ? module.vpc[0].vpc_id : var.vpc_id
private_subnets = var.create_vpc ? module.vpc[0].private_subnets : var.private_subnets
public_subnets = var.create_vpc ? module.vpc[0].public_subnets : var.public_subnets
vpc_cidr_block = var.create_vpc ? module.vpc[0].vpc_cidr_block : var.vpc_cidr_block
}
provider "aws" {
region = var.region
default_tags {
tags = local.merged_tags
}
}
module "vpc" {
source = "../vpc"
count = var.create_vpc ? 1 : 0
vpc_name = local.vpc_name
tags = local.merged_tags
}
module "redis" {
source = "../redis"
name = local.redis_name
vpc_id = local.vpc_id
subnet_ids = local.private_subnets
instance_type = "cache.m6g.xlarge"
ingress_cidrs = [local.vpc_cidr_block]
tags = local.merged_tags
# Pass Redis authentication token as a sensitive input variable
auth_token = var.redis_auth_token
}
module "postgres" {
source = "../postgres"
identifier = local.postgres_name
vpc_id = local.vpc_id
subnet_ids = local.private_subnets
ingress_cidrs = [local.vpc_cidr_block]
username = var.postgres_username
password = var.postgres_password
tags = local.merged_tags
}
module "s3" {
source = "../s3"
bucket_name = local.bucket_name
region = var.region
vpc_id = local.vpc_id
tags = local.merged_tags
}
module "eks" {
source = "../eks"
cluster_name = local.cluster_name
vpc_id = local.vpc_id
subnet_ids = concat(local.private_subnets, local.public_subnets)
tags = local.merged_tags
s3_bucket_names = [local.bucket_name]
# Endpoint access settings are declared in this module's variables.tf and can be overridden by the calling module
public_cluster_enabled = var.public_cluster_enabled
private_cluster_enabled = var.private_cluster_enabled
cluster_endpoint_public_access_cidrs = var.cluster_endpoint_public_access_cidrs
}

View File

@@ -0,0 +1,13 @@
output "postgres_connection_url" {
value = module.postgres.connection_url
sensitive = true
}
output "redis_connection_url" {
value = module.redis.redis_endpoint
sensitive = true
}
output "cluster_name" {
value = module.eks.cluster_name
}

View File

@@ -0,0 +1,88 @@
variable "name" {
type = string
description = "Name of the Onyx resources. Example: 'onyx'"
default = "onyx"
}
variable "region" {
type = string
description = "AWS region for all resources"
default = "us-west-2"
}
variable "create_vpc" {
type = bool
description = "Whether to create a new VPC"
default = true
}
variable "vpc_id" {
type = string
description = "ID of the VPC. Required if create_vpc is false."
default = null
}
variable "private_subnets" {
type = list(string)
description = "Private subnets. Required if create_vpc is false."
default = []
}
variable "public_subnets" {
type = list(string)
description = "Public subnets. Required if create_vpc is false."
default = []
}
variable "vpc_cidr_block" {
type = string
description = "VPC CIDR block. Required if create_vpc is false."
default = null
}
variable "tags" {
type = map(string)
description = "Base tags applied to all AWS resources"
default = {
"project" = "onyx"
}
}
variable "postgres_username" {
type = string
description = "Username for the postgres database"
default = "postgres"
sensitive = true
}
variable "postgres_password" {
type = string
description = "Password for the postgres database"
default = null
sensitive = true
}
variable "public_cluster_enabled" {
type = bool
description = "Whether to enable public cluster access"
default = true
}
variable "private_cluster_enabled" {
type = bool
description = "Whether to enable private cluster access"
default = false # Should be true for production, false for dev/staging
}
variable "cluster_endpoint_public_access_cidrs" {
type = list(string)
description = "CIDR blocks allowed to access the public EKS API endpoint"
default = []
}
variable "redis_auth_token" {
type = string
description = "Authentication token for the Redis cluster"
default = null
sensitive = true
}

View File

@@ -0,0 +1,18 @@
terraform {
required_version = ">= 1.12.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.100"
}
helm = {
source = "hashicorp/helm"
version = "~> 2.16"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.37"
}
}
}

View File

@@ -0,0 +1,45 @@
resource "aws_db_subnet_group" "this" {
name = "${var.identifier}-subnet-group"
subnet_ids = var.subnet_ids
tags = var.tags
}
resource "aws_security_group" "this" {
name = "${var.identifier}-sg"
description = "Allow PostgreSQL access"
vpc_id = var.vpc_id
tags = var.tags
ingress {
description = "Postgres ingress"
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = var.ingress_cidrs
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_db_instance" "this" {
identifier = var.identifier
db_name = var.db_name
engine = "postgres"
engine_version = var.engine_version
instance_class = var.instance_type
allocated_storage = var.storage_gb
username = var.username
password = var.password
db_subnet_group_name = aws_db_subnet_group.this.name
vpc_security_group_ids = [aws_security_group.this.id]
publicly_accessible = false
deletion_protection = true
storage_encrypted = true
tags = var.tags
}

View File

@@ -0,0 +1,4 @@
output "connection_url" {
value = "postgres://${aws_db_instance.this.username}:${aws_db_instance.this.password}@${aws_db_instance.this.endpoint}/${aws_db_instance.this.db_name}"
sensitive = true
}

View File

@@ -0,0 +1,63 @@
variable "identifier" {
type = string
description = "Identifier for the database and related resources"
}
variable "db_name" {
type = string
description = "Name of the database"
default = "postgres"
}
variable "instance_type" {
type = string
description = "Instance type"
default = "db.t4g.large" # 2 vCPU and 8 GB of memory
}
variable "storage_gb" {
type = number
description = "Storage size in GB"
default = 20
}
variable "engine_version" {
type = string
description = "Engine version"
default = "17"
}
variable "vpc_id" {
type = string
description = "VPC ID"
}
variable "subnet_ids" {
type = list(string)
description = "Subnet IDs"
}
variable "ingress_cidrs" {
type = list(string)
description = "Ingress CIDR blocks"
}
variable "username" {
type = string
description = "Username for the database"
default = "postgres"
sensitive = true
}
variable "password" {
type = string
description = "Password for the database"
default = null
sensitive = true
}
variable "tags" {
type = map(string)
description = "Tags to apply to RDS resources"
default = {}
}

View File

@@ -0,0 +1,53 @@
# Define the Redis security group
resource "aws_security_group" "redis_sg" {
name = "${var.name}-sg"
description = "Allow inbound traffic from EKS to Redis"
vpc_id = var.vpc_id
tags = var.tags
# Standard Redis port
ingress {
from_port = 6379
to_port = 6379
protocol = "tcp"
cidr_blocks = var.ingress_cidrs
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elasticache_subnet_group" "elasticache_subnet_group" {
name = "${var.name}-subnet-group"
subnet_ids = var.subnet_ids
tags = var.tags
}
# The actual Redis instance
resource "aws_elasticache_replication_group" "redis" {
replication_group_id = var.name
description = "Redis cluster for ${var.name}"
engine = "redis"
node_type = var.instance_type
num_cache_clusters = 1
parameter_group_name = "default.redis7"
engine_version = "7.0"
port = 6379
security_group_ids = [aws_security_group.redis_sg.id]
subnet_group_name = aws_elasticache_subnet_group.elasticache_subnet_group.name
# Enable transit encryption (SSL/TLS)
transit_encryption_enabled = var.transit_encryption_enabled
# Enable encryption at rest
at_rest_encryption_enabled = true
# Enable authentication if auth_token is provided
# Note: AWS only allows an auth_token when transit encryption is enabled.
auth_token = var.auth_token
tags = var.tags
}

View File

@@ -0,0 +1,14 @@
output "redis_endpoint" {
description = "The endpoint of the Redis cluster"
value = aws_elasticache_replication_group.redis.primary_endpoint_address
}
output "redis_port" {
description = "The port of the Redis cluster"
value = aws_elasticache_replication_group.redis.port
}
output "redis_ssl_enabled" {
description = "Whether SSL/TLS is enabled for Redis"
value = var.transit_encryption_enabled
}

View File

@@ -0,0 +1,44 @@
variable "name" {
description = "The name of the redis instance"
type = string
}
variable "vpc_id" {
description = "The ID of the vpc to deploy the redis instance into"
type = string
}
variable "subnet_ids" {
description = "The subnets of the vpc to deploy into"
type = list(string)
}
variable "ingress_cidrs" {
description = "CIDR block to allow ingress from"
type = list(string)
}
variable "instance_type" {
description = "The instance type of the redis instance"
type = string
default = "cache.m5.large" # 2 vCPU and 6 GB of memory
}
variable "transit_encryption_enabled" {
description = "Enable transit encryption (SSL/TLS) for Redis"
type = bool
default = true
}
variable "auth_token" {
description = "The password used to access a password protected server"
type = string
default = null
sensitive = true
}
variable "tags" {
description = "Tags to apply to ElastiCache resources"
type = map(string)
default = {}
}

View File

@@ -0,0 +1,47 @@
resource "aws_s3_bucket" "bucket" {
bucket = var.bucket_name
tags = var.tags
}
data "aws_route_tables" "vpc" {
filter {
name = "vpc-id"
values = [var.vpc_id]
}
}
resource "aws_vpc_endpoint" "s3" {
vpc_id = var.vpc_id
service_name = "com.amazonaws.${var.region}.s3"
vpc_endpoint_type = "Gateway"
route_table_ids = data.aws_route_tables.vpc.ids
tags = var.tags
}
resource "aws_s3_bucket_policy" "bucket_policy" {
bucket = aws_s3_bucket.bucket.id
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Sid = "AllowAccessViaVPCE",
Effect = "Allow",
Principal = "*", # Update this to be the specific IAM roles, users, or service principals as needed
Action = [
"s3:GetObject",
"s3:ListBucket"
],
Resource = [
aws_s3_bucket.bucket.arn,
"${aws_s3_bucket.bucket.arn}/*"
],
Condition = {
StringEquals = {
"aws:SourceVpce" = aws_vpc_endpoint.s3.id
}
}
}
]
})
}

View File

@@ -0,0 +1,20 @@
variable "bucket_name" {
type = string
description = "Name of the S3 bucket"
}
variable "region" {
type = string
description = "AWS region"
}
variable "vpc_id" {
type = string
description = "VPC ID where your EKS cluster runs"
}
variable "tags" {
type = map(string)
description = "Tags to apply to S3 resources and VPC endpoint"
default = {}
}

View File

@@ -0,0 +1,35 @@
# Get the availability zones for the region without requiring opt-in
data "aws_availability_zones" "available" {
filter {
name = "opt-in-status"
values = ["opt-in-not-required"]
}
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.0.0"
name = var.vpc_name
cidr = var.cidr_block
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = var.private_subnets
public_subnets = var.public_subnets
map_public_ip_on_launch = true
enable_nat_gateway = true
single_nat_gateway = false
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = "1"
}
tags = var.tags
}

View File

@@ -0,0 +1,15 @@
output "vpc_id" {
value = module.vpc.vpc_id
}
output "private_subnets" {
value = module.vpc.private_subnets
}
output "public_subnets" {
value = module.vpc.public_subnets
}
output "vpc_cidr_block" {
value = module.vpc.vpc_cidr_block
}

View File

@@ -0,0 +1,29 @@
variable "vpc_name" {
type = string
description = "The name of the VPC"
default = "onyx-vpc"
}
variable "cidr_block" {
type = string
description = "The CIDR block for the VPC"
default = "10.0.0.0/16"
}
variable "private_subnets" {
type = list(string)
description = "The private subnets for the VPC"
default = ["10.0.0.0/21", "10.0.8.0/21", "10.0.16.0/21", "10.0.24.0/21", "10.0.32.0/21"]
}
variable "public_subnets" {
type = list(string)
description = "The public subnets for the VPC"
default = ["10.0.40.0/21", "10.0.48.0/21", "10.0.56.0/21"]
}
variable "tags" {
type = map(string)
description = "Tags to apply to all VPC-related resources"
default = {}
}