When deploying to EKS, did you know that each EC2 instance has a habit of claiming far more IP addresses than it needs? Take this p2.xlarge: this one instance alone was responsible for reducing my available IP address pool by 10 IPs!
Because of this issue, AWS directs EKS users through the long, complex, and arduous process outlined here. In short, users need to add a secondary CIDR block of non-routable addresses to their VPC. In layman's terms, that means those IPs will only route within the AWS VPC, not out to a VPN or any external connection.
I know what you're thinking: why do we care, and why is this secondary pool needed? Why don't admins just create large /8 subnets and bridge the connections to larger external gateways capable of exposing one IP? The reason is mainly the network topology of corporate data centers. Corporations can get so large that their internal addresses also need to be rationed, and those routable addresses are what give users on a corporate VPN direct access to individual EC2 instances within AWS. Administrators therefore create secondary non-routable CIDR blocks, attach them to the VPC, and assign them to the instances launched within a cluster.
The process to accomplish this is complex:
- Get the main VPC ID of the cluster (see the lookup below).
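A minimal sketch of that lookup with the AWS CLI, assuming a cluster named `my-cluster` in `us-east-1` (both are placeholders for your own values):

```bash
# Look up the VPC that backs the EKS cluster.
aws eks describe-cluster \
  --name my-cluster \
  --region us-east-1 \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text
# => vpc-0abc123def456789a
```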
- Add a new non-routable CIDR block to that VPC.
- Create subnets for the new CIDR range, one per Availability Zone (both steps sketched below).
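Roughly like this, with placeholder IDs and using the 100.64.0.0/16 carrier-grade NAT range as the non-routable block (a common choice, but pick whatever fits your network):

```bash
# Attach a secondary, non-routable CIDR block to the cluster VPC.
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-0abc123def456789a \
  --cidr-block 100.64.0.0/16

# Carve a subnet out of that block in each Availability Zone
# (repeat per AZ with a different slice of the range).
aws ec2 create-subnet \
  --vpc-id vpc-0abc123def456789a \
  --availability-zone us-east-1a \
  --cidr-block 100.64.0.0/19
```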
- Tag the new subnets with the same tags the EKS cluster gave to the existing subnets.
- Install the networking "CNI" plugin – this makes sure only one address acts as the gateway between nodes and the external gateways rather than 15.
- Configure the CNI plugin (sketched below).
- Terminate the worker nodes so the refreshed daemon loads the plugin.
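The tagging and CNI configuration steps look roughly like this; subnet IDs and the cluster name are placeholders, and the `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG` flag is what switches the AWS VPC CNI into custom-networking mode:

```bash
# Tag the new subnets the same way EKS tagged the originals.
aws ec2 create-tags \
  --resources subnet-0new1a subnet-0new1b \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared

# Tell the aws-node daemonset (the VPC CNI) to use custom networking,
# so pods draw their IPs from the non-routable subnets.
kubectl set env daemonset aws-node -n kube-system \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
```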
- Describe the subnets and get the subnet IDs for the larger non-routable pool.
- Get the security groups for each instance in the EKS worker pool (both sketched below).
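Something like the following, again with placeholder IDs:

```bash
# List subnet IDs, AZs, and CIDRs in the VPC; pick out the 100.64.x entries.
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0abc123def456789a" \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}" \
  --output table

# Fetch the security group IDs attached to a worker instance.
aws ec2 describe-instances \
  --instance-ids i-0abc123def456789a \
  --query "Reservations[].Instances[].SecurityGroups[].GroupId" \
  --output text
```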
- Create and apply new YAML config files that link subnets to security groups, applied through kubectl as CRDs (Custom Resource Definitions).
- Annotate each instance with this information so it picks up the corresponding YAML config file (example below).
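The custom resources in question are `ENIConfig` objects, one per Availability Zone. A sketch with placeholder subnet and security group IDs; the node annotation key `k8s.amazonaws.com/eniConfig` is what ties an instance to its config:

```bash
# Define one ENIConfig per AZ, linking that zone's non-routable
# subnet to the worker security groups.
cat <<'EOF' | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a
spec:
  subnet: subnet-0new1a
  securityGroups:
    - sg-0abc123def456789a
EOF

# Annotate each node with the ENIConfig that matches its AZ.
kubectl annotate node ip-10-0-1-23.ec2.internal \
  k8s.amazonaws.com/eniConfig=us-east-1a
```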
This means each YAML config targets a specific subnet and Availability Zone, and it needs to match the instance's Availability Zone. It also means that every new instance that launches needs to be annotated with the exact right Availability Zone!
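If keeping those annotations in sync by hand sounds error-prone, the CNI can, as far as I know, do the matching itself: name each ENIConfig after its Availability Zone and point the daemonset at the node's zone label, and new nodes pick up the right config automatically. Treat the exact label name as version-dependent, though:

```bash
# Optional: auto-select the ENIConfig by the node's zone label,
# instead of annotating every new instance by hand.
kubectl set env daemonset aws-node -n kube-system \
  ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
```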