In this lab, you will create a private cluster, and add an authorized network for API access to it.
In this lab, you learn how to perform the following tasks:
- Create and test a private cluster
- Configure a cluster for authorized network master access
Task 1. Create a private cluster
In this task, you create a private cluster, consider the options for how private to make it, and then compare your private cluster to your original cluster.
In a private cluster, the nodes have internal RFC 1918 IP addresses only, which ensures that their workloads are isolated from the public Internet. The nodes in a non-private cluster have external IP addresses, potentially allowing traffic to and from the internet.
Set up a private cluster
1. On the Navigation menu, click Kubernetes Engine > Clusters.
2. Click Create.
3. Click Configure to select Standard mode for the cluster.
4. Name the cluster private-cluster, and select us-central1-a as the zone.
5. Click default-pool under the NODE POOLS section, and then enter 2 in the Number of nodes field.
6. Click the Networking section, and select Enable VPC-native traffic routing (uses alias IP).
7. Still in the Networking section, select Private cluster, and then select Access control plane using its external IP address.
8. For Control plane IP range, enter 172.16.0.0/28.
9. Deselect Enable control plane authorized networks.
This setting lets you define the range of external addresses that are allowed to access the cluster master. When this checkbox is not selected, you can use kubectl only from within the Google Cloud network. In this lab, you access kubectl only through the Google Cloud network, but you will modify this setting later.
10. Click Create.
Note: You need to wait a few minutes for the cluster deployment to complete.
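If you prefer the command line, the console steps above can also be sketched as a single gcloud command. This is a hedged equivalent, not part of the lab's graded steps; it assumes the default VPC network, and the flag values are taken from the console settings above:

```shell
# Sketch of a gcloud equivalent of the console steps above.
# Assumes the default network; adjust --network/--subnetwork for your project.
gcloud container clusters create private-cluster \
    --zone us-central1-a \
    --num-nodes 2 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --no-enable-master-authorized-networks
```

Note that `--enable-private-nodes` requires both `--enable-ip-alias` and `--master-ipv4-cidr`, which mirrors why the console makes you enable VPC-native traffic routing first.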
Inspect your cluster
- In Cloud Shell, enter the following command to review the details of your new cluster:
gcloud container clusters describe private-cluster --zone us-central1-a
The following values appear only under the private cluster:
- privateEndpoint, an internal IP address. Nodes use this internal IP address to communicate with the cluster master.
- publicEndpoint, an external IP address. External services and administrators can use the external IP address to communicate with the cluster master.
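If you want just those two endpoint values rather than the full describe output, you can add a `--format` filter. This is a sketch; the field paths assume the endpoints live under `privateClusterConfig` in the describe output, which you can confirm by scanning the full output above:

```shell
# Print only the private and public control-plane endpoints.
gcloud container clusters describe private-cluster \
    --zone us-central1-a \
    --format "value(privateClusterConfig.privateEndpoint, privateClusterConfig.publicEndpoint)"
```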
You have several options to lock down your cluster to varying degrees:
- The whole cluster can have external access.
- The whole cluster can be private.
- The nodes can be private while the cluster master is public, and you can limit which external networks are authorized to access the cluster master.
Without public IP addresses, code running on the nodes can’t access the public internet unless you configure a NAT gateway such as Cloud NAT.
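As a sketch of what configuring such a NAT gateway involves, the following creates a Cloud Router and a Cloud NAT configuration for the region used in this lab. The resource names are examples, and the commands assume the default network; they are illustrative, not a graded lab step:

```shell
# Minimal Cloud NAT setup so private nodes can make outbound internet calls.
# Assumes the default network and the us-central1 region; names are examples.
gcloud compute routers create nat-router \
    --network default \
    --region us-central1

gcloud compute routers nats create nat-config \
    --router nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```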
You might use private clusters to provide services such as internal APIs that are meant only to be accessed by resources inside your network. For example, the resources might be private tools that only your company uses. Or they might be backend services accessed by your frontend services, and perhaps only those frontend services are accessed directly by external customers or users. In such cases, private clusters are a good way to reduce the surface area of attack for your application.
Task 2. Add an authorized network for cluster master access
After cluster creation, you might want to issue commands to your cluster from outside Google Cloud. For example, you might decide that only your corporate network should issue commands to your cluster master. Unfortunately, you didn’t specify the authorized network on cluster creation.
In this task, you add an authorized network for cluster master access.
1. In the Google Cloud Console, on the Navigation menu, click Kubernetes Engine > Clusters.
2. Click private-cluster to open the Cluster details page.
3. On the Details tab, for Control plane authorized networks, click Edit.
4. Select Enable control plane authorized networks.
5. Click Add authorized network.
6. For Name, type a name for the network.
7. For Network, type a CIDR range that you want to grant allowlisted access to your cluster master.
8. Click Done, and then click Save Changes.
You can add multiple networks here if necessary, up to a limit of 50 CIDR ranges.
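The same change can be made from the command line with `gcloud container clusters update`. This is a hedged sketch of the console steps above; the CIDR range shown is a placeholder, so substitute the range you actually want to authorize:

```shell
# Sketch: enable control plane authorized networks on the existing cluster.
# 203.0.113.0/24 is a placeholder; substitute your own CIDR range.
gcloud container clusters update private-cluster \
    --zone us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24
```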
9. Now try to connect to your cluster's control plane from Cloud Shell. Because Cloud Shell's external IP address is not in the authorized range you just configured, you won't be able to connect.
This concludes the lab "Configuring a Private Kubernetes Cluster".