Google Cloud: Introduction to Google Cloud VMware Engine (GCVE)

Google Introduces Google Cloud VMware Engine

What Is Google Cloud VMware Engine (GCVE)?

Google Cloud VMware Engine is a fully managed service that lets you run the VMware platform in Google Cloud. VMware Engine provides you with VMware operational continuity so you can benefit from a cloud consumption model and lower your total cost of ownership. VMware Engine also offers on-demand provisioning, pay-as-you-grow, and capacity optimization.

Your VMware environment runs natively on Google Cloud bare metal infrastructure in Google Cloud locations and fully integrates with the rest of Google Cloud. Google manages the infrastructure and all the necessary networking and management services so you can consume the VMware platform efficiently and securely.

VMware Engine includes vSphere, vCenter, vSAN, NSX-T, HCX, and corresponding tools, so it’s fully compatible with your existing VMware tools, processes, and skills training.

Features and benefits

VMware Engine provides you with a number of benefits to your overall productivity:

  • Infrastructure agility. You can get on-demand self-service provisioning of VMware cloud environments, with the ability to add and remove capacity on demand or reserve capacity to lower costs.
  • Infrastructure monitoring, troubleshooting, and support. Google operates your underlying infrastructure as a service. Failed hardware is automatically replaced. Focus on consumption while Google manages VMware platform deployments and upgrades, management plane backups, health and capacity monitoring, alerting, troubleshooting, and remediation.
  • Security. Edge-type networking services, including VPN, public IP, and internet gateways run on Google Cloud and carry the security and distributed denial-of-service attack (DDoS) protection of Google Cloud. Infrastructure is fully dedicated to you and physically isolated from infrastructure of other customers.
  • Hybrid platform. VMware Engine enables high-speed, low-latency connectivity to the rest of Google Cloud, as well as your on-premises environment. VMware Engine also provides the underlay networking services required to enable VMware, including L2/L3 services and firewall rule management.
  • Convenient monitoring. Monitoring and management tools help you keep track of platform activity, resource usage, user account management, billing, and metering.
  • Lower cost. VMware Engine provides high levels of automation, operational efficiency, and economies of scale. Google further lowers costs by publishing solution architectures for you to use Google Cloud services in an integrated VMware cloud in a public cloud architecture.
  • Operational continuity and policy compatibility. Google offers native access to VMware platforms. The architecture is compatible with your existing:
      • VMware-based applications
      • Security procedures
      • Disaster recovery backups
      • Audit practices
      • Compliance tools and certifications

VMware Engine private clouds

A Google Cloud VMware Engine private cloud is an isolated VMware stack that consists of the following VMware components:

  • ESXi hosts
  • vCenter Server
  • vSAN
  • NSX
  • HCX

Private clouds help you address a variety of common needs for network infrastructure:

  • Growth. Add nodes with no new hardware investment when you reach a hardware refresh point for your existing infrastructure.
  • Fast expansion. Create additional capacity immediately when temporary or unplanned capacity needs arise.
  • Increased protection. Get automatic redundancy and availability protection when using a private cloud of three or more nodes.
  • Long-term infrastructure needs. Retire data centers and migrate to a cloud-based solution while remaining compatible with your enterprise operations. This is especially useful if your data centers are at capacity or you want to restructure to lower costs.

VMware component versions

A private cloud VMware stack has the following software versions:

ESXi

When you create a private cloud, VMware ESXi is installed on provisioned Google Cloud VMware Engine nodes. ESXi provides the hypervisor for deploying workload virtual machines (VMs). Nodes provide hyper-converged infrastructure (compute and storage) and are a part of the vSphere cluster on your private cloud.

Each node has four physical network interfaces connected to the underlying network. Using two physical network interfaces, VMware Engine creates a vSphere distributed switch (VDS) on the vCenter. Using the other two interfaces, VMware Engine creates an NSX-managed virtual distributed switch (N-VDS). Network interfaces are configured in active-active mode for high availability.

vCenter Server Appliance

vCenter Server Appliance (VCSA) provides the authentication, management, and orchestration functions for VMware Engine. When you create and deploy your private cloud, VMware Engine deploys a VCSA with an embedded Platform Services Controller (PSC) on the vSphere cluster. Each private cloud has its own VCSA, and any nodes you add to the private cloud are registered with that same VCSA.

vCenter Single Sign-On

The embedded Platform Services Controller on the VCSA is associated with a vCenter Single Sign-On domain named gve.local. To access vCenter, use the default user, CloudOwner@gve.local, which is created for you. You can also add your on-premises Active Directory as an identity source for vCenter.

vSAN storage

Private clouds have fully configured all-flash vSAN storage that’s local to the cluster. At least three nodes of the same SKU are required to create a vSphere cluster with a vSAN datastore. Deduplication and compression are enabled on the vSAN datastore by default. Each node of the vSphere cluster has two disk groups. Each disk group contains one cache disk and three capacity disks.

vSAN storage policies

A vSAN storage policy defines the Failures to tolerate (FTT) and the Failure tolerance method. You can create new storage policies and apply them to VMs. To maintain SLA, you must maintain 25% spare capacity on the vSAN datastore.
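The 25% spare-capacity rule above is easy to check programmatically. A minimal sketch, with illustrative capacity figures (the function name and numbers are assumptions, not part of the service):

```python
# Check whether a vSAN datastore keeps the 25% spare capacity required
# to maintain the SLA. Capacity figures below are illustrative only.

def meets_slack_requirement(used_tib: float, total_tib: float,
                            required_slack: float = 0.25) -> bool:
    """Return True if at least `required_slack` of the datastore is free."""
    free_fraction = (total_tib - used_tib) / total_tib
    return free_fraction >= required_slack

print(meets_slack_requirement(used_tib=60, total_tib=100))  # True
print(meets_slack_requirement(used_tib=80, total_tib=100))  # False
```

In practice you would feed this from vSAN capacity monitoring rather than hard-coded numbers.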

On each vSphere cluster, there’s a default vSAN storage policy that applies to the vSAN datastore. The storage policy determines how to provision and allocate VM storage objects within the datastore to guarantee a level of service.

The following table shows the default vSAN storage policy parameters:

Supported vSAN storage policies

The following table shows the supported vSAN storage policies and the minimum number of nodes required to enable the policy:

NSX Data Center

NSX Data Center provides network virtualization, micro-segmentation, and network security capabilities on your private cloud. You can configure all services supported by NSX Data Center on your private cloud by using NSX. When you create a private cloud, VMware Engine installs and configures the following NSX components:

  • NSX-T Manager
  • Transport Zones
  • Host and Edge Uplink Profile
  • Logical Switch for Edge Transport, Ext1, and Ext2
  • IP Pool for ESXi Transport Node
  • IP Pool for Edge Transport Node
  • Edge Nodes
  • DRS Anti-affinity rule for controller and Edge VMs
  • Tier-0 Router
  • Border Gateway Protocol (BGP) enabled on the Tier-0 Router

vSphere cluster

To ensure high availability of the private cloud, ESXi hosts are configured as a cluster. When you create a private cloud, VMware Engine deploys management components of vSphere on the first cluster. VMware Engine creates a resource pool for management components, and deploys all management VMs in this resource pool.

The first cluster cannot be deleted to shrink the private cloud. The vSphere cluster uses vSphere HA to provide high availability for VMs. Failures to tolerate (FTT) are based on the number of available nodes in the cluster. The formula Number of nodes = 2N+1, where N is the FTT, describes the relationship between available nodes in a cluster and FTT.
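The nodes = 2N + 1 relationship can be turned around to answer the two practical questions: how many failures a given cluster tolerates, and how many nodes a desired FTT requires. A small sketch (function names are illustrative):

```python
# Relationship between cluster size and failures to tolerate (FTT):
# Number of nodes = 2N + 1, where N is the FTT.

def max_ftt(nodes: int) -> int:
    """Largest FTT N satisfying nodes >= 2N + 1."""
    if nodes < 1:
        raise ValueError("a cluster needs at least one node")
    return (nodes - 1) // 2

def min_nodes(ftt: int) -> int:
    """Minimum cluster size needed for a desired FTT."""
    return 2 * ftt + 1

print(max_ftt(3))    # 1: a 3-node cluster tolerates one failure
print(min_nodes(2))  # 5: tolerating two failures needs five nodes
```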

vSphere cluster limits

Guest operating system support

You can install a VM with any guest operating system supported by VMware for the ESXi version in your private cloud. For a list of supported guest operating systems, see the VMware Compatibility Guide for Guest OS.

VMware infrastructure maintenance

Occasionally it’s necessary to make changes to the configuration of the VMware infrastructure. Currently, these intervals can occur every 1‑2 months, but the frequency is expected to decline over time. This type of maintenance can usually be done without interrupting normal usage of the services.

During a VMware maintenance interval, the following services continue to function without disruption:

  • VMware management plane and applications
  • vCenter access
  • All networking and storage
  • All cloud traffic

Private Cloud Environment

Private clouds are managed through the VMware Engine portal. Each private cloud has its own vCenter Server in its own management domain. The stack runs on dedicated, isolated bare metal hardware nodes in Google Cloud locations. You use the stack through native VMware tools, including vCenter Server and NSX Manager.

Private clouds are also designed to eliminate single points of failure:

  • Clusters of ESXi hosts are configured with vSphere High Availability (HA) and sized to have at least one spare node for resilience. vSphere HA protects against node and network failures.
  • vSAN provides redundant primary storage. vSAN requires at least three nodes in a private cloud to provide protection against a single failure. You can configure vSAN to provide higher resilience for larger clusters.

You can connect the private cloud to your on-premises environment using Cloud VPN or Cloud Interconnect.

VLANs and subnets on VMware Engine

Google Cloud VMware Engine creates a network per region in which your VMware Engine service is deployed. The network is a single Layer 3 address space with routing enabled by default. All private clouds and subnets created in this region can communicate with each other without any additional configuration. You can create network segments (subnets) using NSX-T for your workload virtual machines (VMs).

Management VLANs

Google creates a VLAN (Layer 2 network) for each private cloud. The Layer 2 traffic stays within the boundary of a private cloud, letting you isolate the local traffic within the private cloud. These VLANs are used for the management network. For workload VMs, you must create network segments on NSX-T Manager for your private cloud.

Subnets

You must create a network segment on the NSX-T Manager for your private cloud. A single private Layer 3 address space is assigned per customer and region. You can configure any RFC 1918 range that doesn't overlap with other networks in your private cloud, your on-premises network, the management network of your private cloud, or any VPC subnets.

All subnets can communicate with each other by default, reducing the configuration overhead for routing between private clouds. No egress is required for communication between private clouds in a region.
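The no-overlap rule for workload segments can be verified with the standard library before you create a segment. A sketch, where all the ranges are illustrative assumptions:

```python
import ipaddress

# The overlap rule for new workload (NSX-T) segments: a candidate
# subnet must not overlap the management range, on-premises networks,
# or peered VPC subnets. All ranges below are illustrative.

existing = [
    ipaddress.ip_network("192.168.0.0/24"),  # private cloud management
    ipaddress.ip_network("10.0.0.0/16"),     # on-premises
    ipaddress.ip_network("172.16.10.0/24"),  # peered VPC subnet
]

def is_usable(candidate: str) -> bool:
    """True if the candidate range overlaps none of the known networks."""
    net = ipaddress.ip_network(candidate)
    return not any(net.overlaps(other) for other in existing)

print(is_usable("10.100.0.0/24"))  # True: no conflict
print(is_usable("10.0.5.0/24"))    # False: inside the on-premises range
```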

vSphere/vSAN subnets CIDR range

VMware Engine deploys management components of a private cloud in the vSphere/vSAN subnets CIDR range provided during private cloud creation. Each private cloud requires a vSphere/vSAN subnets CIDR range, and the CIDR range is divided into different subnets during private cloud deployment. The CIDR range cannot be changed after private cloud creation without deleting and recreating the private cloud.

The CIDR range prefix has the following requirements:

  • Minimum vSphere/vSAN subnets CIDR range prefix: /24
  • Maximum vSphere/vSAN subnets CIDR range prefix: /21

Note: IP addresses in the vSphere/vSAN subnets CIDR range are reserved for private cloud infrastructure. You can’t use IP addresses in this range for your workload VMs.
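The prefix and RFC 1918 requirements can be checked up front with `ipaddress`. A sketch, assuming the /21 through /24 bounds stated above (the function name is illustrative):

```python
import ipaddress

# Validate a proposed vSphere/vSAN subnets CIDR range: the prefix must
# be between /21 and /24, and the range should be private (RFC 1918).

ALLOWED_PREFIXES = range(21, 25)  # /21 (largest) .. /24 (smallest)

def validate_management_cidr(cidr: str) -> ipaddress.IPv4Network:
    net = ipaddress.ip_network(cidr, strict=True)
    if net.prefixlen not in ALLOWED_PREFIXES:
        raise ValueError(f"prefix /{net.prefixlen} is outside /21-/24")
    if not net.is_private:
        raise ValueError(f"{net} is not a private range")
    return net

print(validate_management_cidr("192.168.0.0/24"))  # 192.168.0.0/24
```

Note that `is_private` is slightly broader than RFC 1918, so a stricter check could compare against the three RFC 1918 blocks explicitly.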

vSphere/vSAN subnets CIDR range limits

The vSphere/vSAN subnets CIDR range size affects the size of your private cloud.

The following table shows the maximum number of nodes you can have, based on the size of the vSphere/vSAN subnets CIDR range.

Management subnets created on a private cloud

When you create a private cloud, the following management subnets are created:

  • System management: VLAN and subnet for the ESXi hosts' management network, DNS server, and vCenter Server
  • vMotion: VLAN and subnet for the ESXi hosts' vMotion network
  • vSAN: VLAN and subnet for the ESXi hosts' vSAN network
  • NsxtEdgeUplink1: VLAN and subnet for VLAN uplinks to an external network
  • NsxtEdgeUplink2: VLAN and subnet for VLAN uplinks to an external network
  • NsxtEdgeTransport: VLAN and subnet for the Edge transport zone; transport zones control the reach of Layer 2 networks in NSX-T
  • NsxtHostTransport: VLAN and subnet for the host transport zone

Management network CIDR range breakdown

The vSphere/vSAN subnets CIDR range you specify is divided into multiple subnets. The following table shows an example of the breakdown for allowed prefixes. The example uses 192.168.0.0 as the CIDR range.
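To make the carve-up concrete, here is an illustrative split of a 192.168.0.0/24 range into the management subnets listed above. The equal /27 slices are an assumption for demonstration only; VMware Engine determines the actual per-subnet allocation:

```python
import ipaddress

# Illustrative breakdown of a /24 vSphere/vSAN range into management
# subnets. Equal /27 slices are an assumption, not the real layout.

NAMES = ["System management", "vMotion", "vSAN", "NsxtEdgeUplink1",
         "NsxtEdgeUplink2", "NsxtEdgeTransport", "NsxtHostTransport"]

base = ipaddress.ip_network("192.168.0.0/24")
for name, subnet in zip(NAMES, base.subnets(new_prefix=27)):
    print(f"{name:18} {subnet}")
```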

HCX deployment network CIDR range

When you create a private cloud on VMware Engine, HCX is installed on the private cloud automatically. You can specify a network CIDR range for use by HCX components. The CIDR range must be /27 or higher.

The network you provide is split into three subnets. HCX Manager is installed in the HCX management subnet. The HCX vMotion subnet is used for vMotion of virtual machines between your on-premises environment and the VMware Engine private cloud. The HCX WANUplink subnet is used to establish the tunnel between your on-premises environment and the VMware Engine private cloud.
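The three-way split can be sketched with `ipaddress`. Both the example /27 range and the equal /29 slices are illustrative assumptions; the service performs the actual split:

```python
import ipaddress

# Illustrative split of an HCX deployment range into subnets for HCX
# management, vMotion, and WANUplink. The /29 slices are assumptions.

hcx_range = ipaddress.ip_network("192.168.32.0/27")
mgmt, vmotion, wanuplink, _spare = hcx_range.subnets(new_prefix=29)

print("HCX management:", mgmt)       # 192.168.32.0/29
print("HCX vMotion:   ", vmotion)    # 192.168.32.8/29
print("HCX WANUplink: ", wanuplink)  # 192.168.32.16/29
```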

Recommended MTU settings

The maximum transmission unit (MTU) is the size, in bytes, of the largest packet supported by a network layer protocol, including both headers and data. To avoid fragmentation-related issues, we recommend the following MTU settings on endpoints that communicate to or from a private cloud. These recommendations are especially important in cases where an application isn’t able to control the maximum payload size.

Use an MTU setting of 1440 bytes or lower for VM interfaces that send traffic in the following ways:

  • From the internet to a private cloud.
  • From an on-premises endpoint to a private cloud.
  • From a private cloud VM to an on-premises endpoint.
  • From a private cloud VM to the internet.
  • From a VM in one private cloud to a VM in another private cloud.

For VMs that communicate only with other endpoints within a private cloud, you can use MTU settings up to 8800 bytes.
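The MTU guidance above reduces to one question: does the traffic leave the private cloud? A minimal sketch of that decision (constant and function names are illustrative):

```python
# MTU recommendation: traffic that leaves the private cloud (internet,
# on-premises, or another private cloud) should use 1440 bytes or
# lower; traffic that stays inside a private cloud may use up to 8800.

EXTERNAL_MTU = 1440
INTERNAL_MTU = 8800

def recommended_mtu(leaves_private_cloud: bool) -> int:
    """Return the largest recommended MTU for the traffic pattern."""
    return EXTERNAL_MTU if leaves_private_cloud else INTERNAL_MTU

print(recommended_mtu(True))   # 1440
print(recommended_mtu(False))  # 8800
```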

Updates and upgrades

Google is responsible for lifecycle management of VMware software (ESXi, vCenter, PSC, and NSX) in the private cloud.

Software updates include:

  • Patches: security patches or bug fixes released by VMware
  • Updates: minor version change of a VMware stack component
  • Upgrades: major version change of a VMware stack component

Google tests critical security patches as soon as they become available from VMware and, per the SLA, rolls them out to private cloud environments within a week.

Google provides quarterly maintenance updates to VMware software components. For a new major version of VMware software, Google works with customers to coordinate a suitable maintenance window for the upgrade.

Happy Learning !!!

