Google Cloud: Cloud Interconnect – Considerations, Design and Configuration


Overview 

Cloud Interconnect provides low-latency, highly available connections that enable you to reliably transfer data between your on-premises network and Google Cloud Virtual Private Cloud (VPC) networks. Interconnect connections also provide internal IP address communication, which means internal IP addresses are directly accessible from both networks.

Cloud Interconnect offers two options for extending your on-premises network:

  • Dedicated Interconnect provides a direct physical connection between your on-premises network and Google’s network.
  • Partner Interconnect provides connectivity between your on-premises and VPC networks through a supported service provider.

Benefits

Using Cloud Interconnect provides the following benefits:

  • Traffic between your on-premises network and your VPC network doesn’t traverse the public internet. Traffic traverses a dedicated connection or goes through a service provider with a dedicated connection. By bypassing the public internet, your traffic takes fewer hops, so there are fewer points of failure where your traffic might get dropped or disrupted.
  • Your VPC network’s internal IP addresses are directly accessible from your on-premises network. You don’t need to use a NAT device or VPN tunnel to reach internal IP addresses. For details, see IP addressing and dynamic routes.
  • You can scale your connection capacity to meet your particular requirements.
    For Dedicated Interconnect, connection capacity is delivered over one or more 10-Gbps or 100-Gbps Ethernet connections, with the following maximum capacities supported per Interconnect connection:
    • 8 x 10-Gbps connections (80 Gbps total)
    • 2 x 100-Gbps connections (200 Gbps total)
  • For Partner Interconnect, the following connection capacities for each VLAN attachment are supported:
    • 50-Mbps to 50-Gbps VLAN attachments. The maximum supported attachment size is 50 Gbps, but not all sizes might be available, depending on what’s offered by your chosen partner in the selected location.
  • You can request 100-Gbps connections at any of the locations listed on Choosing colocation facility locations.
  • Dedicated Interconnect, Partner Interconnect, Direct Peering, and Carrier Peering can all help you optimize egress traffic from your VPC network and reduce your egress costs. Cloud VPN by itself does not reduce egress costs.
  • You can use Cloud Interconnect with Private Google Access for on-premises hosts so that on-premises hosts can use internal IP addresses rather than external IP addresses to reach Google APIs and services. For more information, see Private access options for services in the VPC documentation.

Considerations

Using Cloud VPN instead

If you don’t require the low latency and high availability of Cloud Interconnect, consider using Cloud VPN to set up IPsec VPN tunnels between your networks. IPsec VPN tunnels encrypt data by using industry-standard IPsec protocols as traffic traverses the public internet.

A Cloud VPN tunnel doesn’t require the overhead or costs associated with a direct, private connection. Cloud VPN only requires a VPN device in your on-premises network.

IP addressing and dynamic routes

Note: The information in this section applies to Cloud VPN using dynamic (BGP) routing, Dedicated Interconnect, and Partner Interconnect.

When you connect your VPC network to your on-premises network, you allow communication between the IP address space of your on-premises network and some or all of the subnets in your VPC network. Which VPC subnets are available depends on the dynamic routing mode of your VPC network. Subnet IP ranges in VPC networks are always internal IP addresses.

The IP address space on your on-premises network and on your VPC network must not overlap, or traffic is not routed properly. Remove any overlapping addresses from either network.

Your on-premises routers advertise the routes to your on-premises network to the Cloud Routers in your VPC network. This creates custom dynamic routes in your VPC network, each with a next hop set to the appropriate VLAN attachment.

Unless modified by custom advertisements, Cloud Routers in your VPC network share VPC network subnet IP address ranges with your on-premises routers according to the dynamic routing mode of your VPC network.
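
The dynamic routing mode can be set when you create the VPC network or changed later with the gcloud command-line tool. As a sketch, assuming a network named my-network (a placeholder), the following command switches the network to global dynamic routing, so Cloud Routers advertise subnets from all regions rather than only their own:

    gcloud compute networks update my-network \
        --bgp-routing-mode=GLOBAL

Use --bgp-routing-mode=REGIONAL to limit advertisements to subnets in the Cloud Router's region.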

Restricting Cloud Interconnect usage

By default, any VPC network can use Cloud Interconnect. To control which VPC networks can use Cloud Interconnect, you can set an organization policy. For more information, see Restricting Cloud Interconnect usage.
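
As an illustrative sketch, the restriction is enforced through the list constraint constraints/compute.restrictDedicatedInterconnectUsage (Partner Interconnect has a corresponding constraint). The project, network, and organization ID below are placeholders:

    # policy.yaml (illustrative)
    constraint: constraints/compute.restrictDedicatedInterconnectUsage
    listPolicy:
      allowedValues:
        - projects/my-project/global/networks/my-network

    gcloud resource-manager org-policies set-policy policy.yaml \
        --organization=123456789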

Dedicated Interconnect provisioning overview

To create and configure an Interconnect connection, follow these steps:

  1. Order a Dedicated Interconnect connection
    Submit an order, specifying the details of your Interconnect connection. Google then emails you an order confirmation. After your resources have been allocated, you receive another email with your LOA-CFAs.
  2. Retrieve LOA-CFAs
    Send the LOA-CFAs to your vendor. They provision the connections between the Google peering edge and your on-premises network. Google automatically starts testing the light levels on each allocated port after 24 hours.
  3. Test the connection
    Google sends you automated emails with configuration information for two different tests. First, Google sends an IP address configuration to test light levels on every circuit in an Interconnect connection. After those tests pass, Google sends the final IP address configuration to test the IP connectivity of each connection’s production configuration.
    Apply these configurations to your routers so that Google can confirm connectivity. If you don’t apply these configurations (or apply them incorrectly), Google sends an automated email with troubleshooting information. After all tests have passed, your Interconnect connection is ready to use.
  4. Create VLAN attachments
    When your Interconnect connection is ready to use, you need to connect Virtual Private Cloud (VPC) networks to your on-premises network. To do that, first create a VLAN attachment, specifying an existing Cloud Router that’s in the VPC network that you want to reach.
  5. Configure on-premises routers
    After you create a VLAN attachment, to start sending traffic between networks, you need to configure your on-premises router to establish a BGP session with your Cloud Router. To configure your on-premises router, use the VLAN ID, interface IP address, and peering IP address provided by the VLAN attachment.

1. Ordering a Dedicated Interconnect connection 

Ordering a Dedicated Interconnect connection starts the creation process of your Interconnect connection. When you order an Interconnect connection, you specify details such as the capacity and the location of your connection.

You can request the following capacities:

  • 1 x 10-Gbps (10 Gbps) circuit up to 8 x 10-Gbps (80 Gbps) circuits
  • 1 x 100-Gbps (100 Gbps) circuit up to 2 x 100-Gbps (200 Gbps) circuits

Permissions required for this task

To perform this task, you must have the following permissions or one of the following Identity and Access Management (IAM) roles.

Permissions

  • compute.interconnects.create

Roles

  • roles/owner
  • roles/editor
  • roles/compute.networkAdmin

Perform the following steps in the Google Cloud Console:

  1. In the Google Cloud Console, go to the Cloud Interconnect Physical connections tab.
    Go to Physical connections
  2. Click Set up connection.
  3. Select Dedicated Interconnect, and then click Continue.
  4. Select Order new Dedicated Interconnect, and then click Continue.
  5. Specify the details of the Interconnect connection:
    • Name: A name for the Interconnect connection. This name is displayed in the Cloud Console and is used by the gcloud command-line tool to reference the connection, such as my-interconnect.
    • Location: The physical location (colocation facility) where the Interconnect connection is created. Your on-premises network must physically connect to Google’s network at this location.
    • Capacity: The total capacity of your Interconnect connection, which is determined by the number and size of the circuits that you order.
      Note: You can view the estimated cost of your choice in the upper-right corner of the Cloud Console page.
      Select one of the following options:
      • 1 x 10-Gbps circuit in 10-Gbps increments up to 8 x 10-Gbps (80 Gbps) circuits
      • 1 x 100-Gbps (100 Gbps) circuit
      • 2 x 100-Gbps (200 Gbps) circuits
  6. Click Next.
  7. If you require redundancy, specify details for your duplicate Interconnect connection, and then click Next.
  8. Specify your contact information:
    • Company name: The name of your organization to put in the LOA as the party authorized to request a connection.
    • Technical contact: An email address where notifications about this connection are sent. You don’t need to enter your own address; you are included in all notifications. You can specify only one address.
  9. Review your order. Check that your Interconnect connection details and contact information are correct. If everything is correct, click Place order. If not, go back and edit the connection details.
  10. On the order confirmation page, review the next steps, and then click Done.

After you order an Interconnect connection, Google emails you a confirmation and allocates ports for you. When the allocation is complete, Google generates LOA-CFAs for your connections and emails them to you.

All the automated emails are sent to the NOC contact and the email address of the Google Account used when ordering the Interconnect connection. You can also get your LOA-CFAs by using the Cloud Console.

You can use the Interconnect connection only after your connections have been provisioned and tested for light levels and IP connectivity.
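
You can also place the order with the gcloud command-line tool. The following sketch orders a 2 x 10-Gbps connection; the location name, company name, and email address are placeholders to replace with your own values:

    gcloud compute interconnects create my-interconnect \
        --customer-name="Example Corp" \
        --interconnect-type=DEDICATED \
        --link-type=LINK_TYPE_ETHERNET_10G_LR \
        --requested-link-count=2 \
        --location=iad-zone1-1 \
        --noc-contact-email=noc@example.com \
        --description="Connection to on-premises network"

For a 100-Gbps circuit, use --link-type=LINK_TYPE_ETHERNET_100G_LR instead.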

2. Retrieving LOA-CFAs 

After you order a Dedicated Interconnect connection, Google sends you and the NOC (technical contact) an email with your Letter of Authorization and Connecting Facility Assignment (LOA-CFA) (one PDF file per connection). You must send these LOA-CFAs to your vendor so that they can install your connections. If you don’t, your connections won’t get connected.

If you can’t find the LOA-CFAs in your email, retrieve them from the Google Cloud Console. You can also respond to your order confirmation email for additional assistance.

After the status of an Interconnect connection changes to PROVISIONED, the LOA-CFA is no longer valid, necessary, or available in the Cloud Console.

Permissions required for this task

To perform this task, you must have the following permissions or one of the following Identity and Access Management (IAM) roles.

Permissions

  • compute.interconnects.create

Roles

  • roles/owner
  • roles/editor
  • roles/compute.networkAdmin
Perform the following steps in the Google Cloud Console:

  1. In the Google Cloud Console, go to the Cloud Interconnect Physical connections tab.
    Go to Physical connections
  2. For the Interconnect connection that contains the LOA-CFAs that you need, select the options button, and then select Download LOA-CFA.
  3. Submit your LOA-CFAs to your vendor so that they can provision your connections. For more information, contact the colocation facility that you’re working with.
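
To check whether a connection has reached the PROVISIONED state (at which point the LOA-CFA is no longer needed), you can describe it with the gcloud command-line tool; my-interconnect is a placeholder name:

    gcloud compute interconnects describe my-interconnect

The output includes the connection's current state and operational status.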

To help you solve common issues that you might encounter when using Dedicated Interconnect, see Troubleshooting.

3. Testing connections 

Before you can use your Dedicated Interconnect connections, Google must verify that your connections to Google’s edge network are working. To do this, Google sends you an IP address configuration that you must apply to your on-premises router.

This configuration differs depending on whether you order one circuit (one 10-Gbps circuit or one 100-Gbps circuit) or more than one circuit (multiple 10-Gbps circuits or multiple 100-Gbps circuits).

The following sections describe two different testing procedures. The first is for a single-circuit connection and the second is for a multi-circuit connection.

Testing a single-circuit connection (one 10-Gbps or 100-Gbps circuit)

  1. Google polls its edge device every 24 hours, checking for light on the port to your on-premises router. Receiving light indicates that your connection has been installed. After detecting this light, Google sends you an email containing an IP address that Google uses to ping your on-premises router to test the circuit.
  2. Configure the interface of your on-premises router with the link-local IP address that Google sends you, and configure LACP on that interface. Even though there is only one circuit in your Interconnect connection, you must still use LACP.
  3. Apply the test IP address to the interface of your on-premises router that connects to Google. For testing, you must configure this interface in access mode with no VLAN tagging. For a sample configuration, see Configuring on-premises routers for testing.
  4. Google tests your connection by pinging the link-local IP address with LACP enabled. Google tests once, 30 minutes after detecting light, and then every 24 hours thereafter.
    • After a successful test, Google sends you an email notifying you that your connection is ready to use.
    • If a test fails, Google automatically retests the connection once a day for a week.
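
As a sketch of the single-circuit test configuration (Cisco-style syntax; the interface name and link-local address are placeholders — use the values from Google's email), the port joins an LACP bundle and the bundle carries the test address with no VLAN tagging:

    interface TenGigE0/0
      description connected_to_google_edge_device
      channel-group 1 mode active
      no shut

    interface Port-channel1
      ip address 169.254.180.2 255.255.255.248
      no shut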

Testing a multi-circuit connection (multiple 10-Gbps or 100-Gbps circuits)

When you order an Interconnect connection that has multiple circuits, Google performs two separate ping tests. The first tests each individual circuit without LACP, and the second tests the final bundled connection with LACP enabled.

Ping test 1

  1. Google polls its edge device every 24 hours, checking for light on the ports to your on-premises router. Receiving light indicates that your connections have been installed. After detecting this light, Google sends you an email with instructions for the first test.
  2. For the first test, configure the interfaces of your on-premises router with the public IP addresses from the email. Do not enable LACP at this time.
  3. Apply the test IP addresses that Google sends you to the appropriate interfaces of your on-premises router that connect to Google. For testing, you must configure these interfaces in access mode with no VLAN tagging.
  4. Google tests your connections by pinging the IP addresses with LACP disabled. Google tests once, 30 minutes after detecting light, and then every 24 hours thereafter.
    • If the test succeeds, move on to Ping test 2.
    • If a test fails, Google automatically retests the connection once a day for a week.
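
A sketch of the first-test configuration for two circuits (Cisco-style syntax; interface names are placeholders, and the addresses come from Google's email, so they are left as placeholders here). Note that LACP is not configured and the interfaces are untagged:

    interface TenGigE0/0
      description my-interconnect_circuit1
      ip address <PUBLIC_IP_1> <NETMASK_1>
      no shut

    interface TenGigE0/1
      description my-interconnect_circuit2
      ip address <PUBLIC_IP_2> <NETMASK_2>
      no shut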

Ping test 2

  1. After a successful test using public IP addresses, Google sends you a link-local IP address to use for a second ping test.
  2. On your on-premises router, configure all the circuits into a bundle with LACP enabled, and configure the link-local IP address that Google sends you on the bundled interface.
  3. Apply the test IP address to the bundled interface of your on-premises router that connects to Google. For testing, you must configure this interface in access mode with no VLAN tagging. For a sample configuration, see Configuring on-premises routers for testing.
  4. Google tests each connection once every 24 hours. After a successful test, Google notifies you that your connection is ready to use.
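
A sketch of the second-test configuration (Cisco-style syntax; interface names and the link-local address are placeholders — use the value from Google's email). Both circuits join one LACP bundle, which carries the test address with no VLAN tagging:

    interface TenGigE0/0
      channel-group 1 mode active
      no shut

    interface TenGigE0/1
      channel-group 1 mode active
      no shut

    interface Port-channel1
      ip address 169.254.180.2 255.255.255.248
      no shut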

Using your Interconnect connection

After all tests have passed, your Interconnect connection is ready to use, and Google starts billing for it. To view the status of your connection, see Viewing Interconnect connection details.

At this stage, remove the test IP addresses from the interface or interfaces on your on-premises router. To reconfigure your router for production, see Configuring on-premises routers for production.

Your connection can now carry traffic, but it isn’t associated with any Google Virtual Private Cloud (VPC) networks.

Note: When you configure VLAN attachments on your on-premises router, the subnet size is different than when you configure an Interconnect connection for a test. Make sure that you specify the correct subnet size, depending on what you’re configuring. Google provides these values when you order a connection or create a VLAN attachment.

4. Creating VLAN attachments 

VLAN attachments (also known as interconnectAttachments) determine which Virtual Private Cloud (VPC) networks can reach your on-premises network through a Dedicated Interconnect connection. You can create VLAN attachments over connections that have passed all tests and are ready to use.

Billing for VLAN attachments starts when you create them and stops when you delete them.

If you need to create a VLAN attachment for a connection in another Google Cloud project, see Using Dedicated Interconnect connections in other projects.

For VLAN attachments for Partner Interconnect, see Creating VLAN attachments for Partner Interconnect.

Associating VLAN attachments with a Cloud Router

For Dedicated Interconnect, the VLAN attachment allocates a VLAN on an Interconnect connection and associates that VLAN with the specified Cloud Router. It is possible to associate multiple, different VLAN attachments to the same Cloud Router.

When you create the VLAN attachment, specify a Cloud Router that’s in the region that contains the subnets that you want to reach. The VLAN attachment automatically allocates a VLAN ID and BGP peering IP addresses. Use that information to configure your on-premises router and establish a BGP session with your Cloud Router.

Optionally, you can manually specify the IP address range for the BGP session. The BGP IP address range that you specify must be unique among all Cloud Routers in all regions of a VPC network.
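
If you don't already have a Cloud Router in the region, you can create one with the gcloud command-line tool. The sketch below reuses the sample network, region, and Cloud Router ASN (65200) that appear elsewhere in this document; treat all names as placeholders:

    gcloud compute routers create my-router \
        --network=my-network \
        --region=us-east1 \
        --asn=65200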

Utilizing multiple VLAN attachments

Each VLAN attachment supports a maximum bandwidth of 50 Gbps in increments described on the Pricing page, and a maximum packet rate as documented in Cloud Interconnect limits. This is true even if the attachment is configured on an Interconnect connection that has a greater bandwidth capacity than the attachment.

To fully utilize the bandwidth of a connection, you might need to create multiple VLAN attachments.

Note: Creating VLAN attachments with a combined bandwidth greater than the Interconnect connection doesn’t give you more than the maximum stated bandwidth of the connection:

  • To use a 20-Gbps attachment, you need at least a 2 x 10-Gbps Interconnect connection, or a 100-Gbps connection.
  • To use a 50-Gbps attachment, you need at least a 5 x 10-Gbps Interconnect connection, or a 100-Gbps connection.

To utilize multiple VLAN attachments simultaneously for egress traffic in a VPC network, create them in the same region. Then configure your on-premises router to advertise routes with the same MED. The custom dynamic routes, learned through BGP sessions on one or more Cloud Routers that manage the VLAN attachments, are applied to your VPC network with a route priority corresponding to the MED.

When multiple available routes have the same priority, Google Cloud distributes traffic among them by using a five-tuple hash for affinity, implementing an equal-cost multipath (ECMP) routing design. For more information, see Applicability and order in the VPC documentation.

Creating VLAN attachments

Permissions required for this task

To perform this task, you must have the following permissions or one of the following Identity and Access Management (IAM) roles.

Permissions

  • compute.interconnectAttachments.create
  • compute.interconnectAttachments.get
  • compute.routers.create
  • compute.routers.get
  • compute.routers.update

Roles

  • roles/owner
  • roles/editor
  • roles/compute.networkAdmin
Perform the following steps in the Google Cloud Console:

  1. In the Google Cloud Console, go to the Cloud Interconnect VLAN attachments tab.
    Go to VLAN attachments
  2. Click Add VLAN attachment.
  3. Select Dedicated Interconnect, and then click Continue.
  4. Select In this project to create attachments in your project. For other projects, see Using Dedicated Interconnect connections in other projects.
  5. Select an existing Interconnect connection in your project, and then click Continue.
  6. Select Add VLAN attachment, and then specify the following details:
    • Name: A name for the attachment. This name is displayed in the Cloud Console and is used by the gcloud command-line tool to reference the attachment, such as my-attachment.
    • Router: A Cloud Router to associate with this attachment. The Cloud Router must be in the VPC network that you want to connect to. If you don’t have an existing Cloud Router, select Create new router. For the BGP AS number, use any private ASN (64512-65534 or 4200000000-4294967294) or 16550.
  7. To specify a VLAN ID, a specific IP address range for the BGP session, the VLAN attachment’s capacity, or the MTU, click VLAN ID, BGP IPs, capacity, MTU.
    • To specify a VLAN ID, in the VLAN ID section, select Customize.
      By default, Google automatically generates a VLAN ID. You can specify a VLAN ID in the range 2-4093. You cannot specify a VLAN ID that is already in use on the Interconnect connection. If your VLAN ID is in use, you are asked to choose another one.
      If you don’t enter a VLAN ID, an unused, random VLAN ID is automatically selected for the VLAN attachment.
    • To specify an IP address range for the BGP session, in the Allocate BGP IP address section, select Manually.
      The BGP IP address range that you specify must be unique among all Cloud Routers in all regions of a VPC network.
      IP addresses used for the BGP session between a Cloud Router and your on-premises router are allocated from the link-local IP address space (169.254.0.0/16). By default, Google selects unused IP addresses from the link-local IP address space.
      To restrict the IP range that Google selects from, you can specify up to 16 IP prefixes from the link-local IP address space. All prefixes must reside within 169.254.0.0/16 and must be a /29 or shorter, for example, /28 or /27. An unused /29 is automatically selected from your specified range of prefixes. The address allocation request fails if all possible /29 prefixes are in use by Google Cloud.
      If you don’t supply a range of prefixes, Google Cloud picks a /29 CIDR from 169.254.0.0/16 that is not already used by any BGP session in your VPC network. If you supply one or more prefixes, Google Cloud picks an unused /29 CIDR from the supplied prefixes.
      After the /29 is selected, Google Cloud assigns the Cloud Router one address and your on-premises router another address. The rest of the address space in the /29 is reserved for Google’s use.
    • To specify the maximum bandwidth, in the Capacity field, select a value. If you don’t select a value, Cloud Interconnect uses 10 Gbps.
      If you have multiple VLAN attachments on an Interconnect connection, the capacity setting helps you control how much bandwidth each attachment can use. The maximum bandwidth is approximate, so it’s possible for VLAN attachments to use more bandwidth than the selected capacity.
    • To specify the maximum transmission unit (MTU) for the attachment, select a value from the field.
      To make use of the 1500-byte MTU, the VPC network using the attachment must have an MTU set to 1500. In addition, the on-premises VMs and routers must have an MTU set to 1500. If your network has the default MTU of 1460, leave the field at 1440.
  8. If you want to connect multiple VPC networks (for example, to build redundancy), click + Add VLAN attachment to attach additional VLANs to your Interconnect connection. Choose a different Cloud Router for each VLAN attachment. For more information, see the Redundancy section in the overview.
  9. When you have created all needed VLAN attachments, click Create. The attachment takes a few moments to create.
    The Configure Cloud Routers page shows each VLAN attachment and its configuration status.
  10. For each VLAN attachment, to create a BGP session to exchange BGP routes between your Cloud Router network and your on-premises router, click Configure, and then enter the following information:
    • Name: A name for the BGP session.
    • Peer ASN: The public or private ASN of your on-premises router.
    • Advertised route priority (optional): The base value that Cloud Router uses to calculate route metrics. All routes advertised for this session use this base value. For more information, see Advertised prefixes and priorities.
  11. Click Save and continue.
  12. After you add BGP sessions for all your VLAN attachments, click Save configuration. The BGP sessions that you configured are inactive until you configure BGP on your on-premises router.
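
The console steps above have gcloud equivalents. The following sketch creates a VLAN attachment on an existing connection, then adds a Cloud Router interface and BGP peer for it; all names are placeholders, and the peer ASN (64500) matches the sample on-premises router configurations later in this document:

    gcloud compute interconnects attachments dedicated create my-attachment \
        --region=us-east1 \
        --router=my-router \
        --interconnect=my-interconnect

    gcloud compute routers add-interface my-router \
        --region=us-east1 \
        --interface-name=my-attachment-intf \
        --interconnect-attachment=my-attachment

    gcloud compute routers add-bgp-peer my-router \
        --region=us-east1 \
        --peer-name=on-prem-peer \
        --interface=my-attachment-intf \
        --peer-asn=64500

For attachments, the BGP peering IP addresses can be auto-allocated from the link-local space, so you don't need to specify them explicitly.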

Restricting Dedicated Interconnect usage

By default, any VPC network can use Cloud Interconnect. To control which VPC networks can use Cloud Interconnect, you can set an organization policy. 

5. Configuring on-premises routers 

This document describes how to configure on-premises routers for Dedicated Interconnect. If you are creating a Partner Interconnect connection, see Configuring on-premises routers for Partner Interconnect.

After you create a VLAN attachment, you need to configure your on-premises router to establish a BGP session with your Cloud Router. To configure your on-premises router, use the VLAN ID, interface IP address, and peering IP address provided by the VLAN attachment.

Using sample topologies

This document provides the following sample topologies and configurations that you can use as a guide when configuring your on-premises router:

  • Layer 3 only topology (recommended): A Dedicated Interconnect connection or connections terminating on an on-premises router. The router performs BGP peering with Cloud Router.
  • Layer 2/Layer 3 topology: A Dedicated Interconnect connection or connections terminating on an on-premises switch connected to an on-premises router. The router performs BGP peering with Cloud Router.

For values for third-party platforms that you might use for your on-premises router, see the vendor-specific notes. For exact commands and values, see your router vendor’s documentation.

The sample topologies in this document use the following Google Cloud resources:

  • The project Sample Interconnect Project
  • The network my-network
  • The region us-east1

There are two Dedicated Interconnect connections, my-interconnect1 and my-interconnect2. These connections are already provisioned and have a status of ready to use.

Layer 3 only topology

In this topology, the Interconnect connections terminate on an on-premises router, which performs BGP peering with Cloud Router.

The following diagrams show both the physical and logical Layer 3 only topology.

Sample physical, on-premises Layer 3 only topology

Layer 2/Layer 3 topology

In this topology, the Interconnect connections terminate on an on-premises switch, which then connects to an on-premises router. The router performs BGP peering with Cloud Router.

The following diagrams show the physical and logical Layer 2/Layer 3 topology.

Sample logical Layer 2/Layer 3 topology 

Configuring on-premises devices for testing

The following section describes how to configure on-premises devices for testing your Interconnect connection. For a Layer 2/Layer 3 configuration, this example describes configuring the test interface on one or more Google Cloud-facing switches, but not on the routers.

Before Google starts testing your new Dedicated Interconnect connection, configure your interfaces without VLAN tagging, which is sometimes referred to as access mode.

Configuring on-premises routers for production

This section describes how to configure the Layer 3 only topology and the Layer 2/Layer 3 topology for production use. Each sample configuration describes all device settings.

For information about how to configure on-premises devices for testing your Interconnect connection, see Configuring on-premises routers for testing.

Production on-premises router settings for both topologies

Based on the configuration in the sample Google Cloud project, the sample device configurations later in this document reflect the on-premises router settings to use for both example topologies.

For the sample project name, VPC network, and region used on the Google Cloud side, see the topology reference.

The hold timer and keepalive timer values allow Google to quickly transfer traffic to redundant connections if an issue occurs. Set a keepalive timer of 20 seconds and a hold timer of 60 seconds, as shown in the sample configurations.

Graceful restart prevents packet drops and route withdrawal on BGP sessions during Cloud Router maintenance. If your on-premises device supports BGP graceful restart, enable it and set the graceful restart and stalepath timers to the recommended values.

For more information about BGP timer settings, see the recommended values for BGP timers in the Cloud Router documentation.

Configuring Layer 3 only topology for production

Use the following guidelines when configuring the Layer 3 only topology:

  • The on-premises router port (0/0 in the diagram) or ports facing Cloud Router must be part of a port channel, even if there is only one port.
  • The port channel must have LACP enabled in either active or passive mode.
  • The maximum transmission unit (MTU) of the router interface (0/0 in the diagram) should be either 1440 or 1500 bytes, depending on the MTU of the attachment and the MTU of the connected VPC network.
  • The EBGP neighbor must have multihop configured. The recommended value for this setting is 4.

Device configuration

The following listing shows a Layer 3 only sample configuration for on-premises Router1 (Cisco) on VLAN 1010:

    interface E0/0
      description connected_to_google_edge_device
      channel-group 2 mode active
      no shut

    interface Po2
      description my-interconnect1
      no shut

    interface Po2.1010
      description attachment_vlan1010
      encapsulation dot1Q 1010
      ip address 169.254.10.2 255.255.255.248
      ip mtu 1440

    ip prefix-list TO_GCP seq 5 permit 192.168.12.0/24

    route-map TO_GCP_OUTBOUND permit 10
      match ip address prefix-list TO_GCP

    router bgp 64500
      bgp graceful-restart restart-time 1
      neighbor 169.254.10.1 description peering_to_cloud_router
      neighbor 169.254.10.1 remote-as 65200
      neighbor 169.254.10.1 ebgp-multihop 4
      neighbor 169.254.10.1 timers 20 60
      neighbor 169.254.10.1 update-source Po2.1010
      neighbor 169.254.10.1 route-map TO_GCP_OUTBOUND out
The following listing shows a Layer 3 only sample configuration for on-premises Router2 (Juniper) on VLAN 1020:

        set interfaces xe-0/0/0 ether-options 802.3ad ae1
        set interfaces xe-0/0/0 description "connected_to_google_edge_device"
 
        set interfaces ae1 description my-interconnect2
        set interfaces ae1 flexible-vlan-tagging
        set interfaces ae1 aggregated-ether-options minimum-links 1
        set interfaces ae1 aggregated-ether-options lacp active
        set interfaces ae1 unit 1020 family inet mtu 1440
        set interfaces ae1 unit 1020 vlan-id 1020
        set interfaces ae1 unit 1020 family inet address 169.254.20.2/29
 
        set routing-options autonomous-system 64500
 
        set policy-options prefix-list TO_GCP 192.168.12.0/24
 
        set policy-options policy-statement TO_GCP_OUTBOUND term 1 from protocol direct
        set policy-options policy-statement TO_GCP_OUTBOUND term 1 from prefix-list TO_GCP
        set policy-options policy-statement TO_GCP_OUTBOUND term 1 then accept
        set policy-options policy-statement TO_GCP_OUTBOUND term 2 then reject
 
        set protocols bgp group config_vlan_1020 type external
        set protocols bgp group config_vlan_1020 multihop ttl 4
        set protocols bgp group config_vlan_1020 local-address 169.254.20.2
        set protocols bgp group config_vlan_1020 peer-as 65200
        set protocols bgp group config_vlan_1020 neighbor 169.254.20.1 export TO_GCP_OUTBOUND
        set protocols bgp group config_vlan_1020 neighbor 169.254.20.1 graceful-restart restart-time 1
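Similarly, on the Juniper router you can confirm the session and the prefixes received from the Cloud Router. This is a minimal verification sketch using standard Junos show commands, with the neighbor address and interface taken from the sample above:

```
show bgp summary
show route receive-protocol bgp 169.254.20.1
show interfaces ae1.1020 terse
```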

Configuring Layer 2/Layer 3 topology for production

Use the following guidelines for your on-premises switch and routers when configuring the Layer 2/Layer 3 topology:

  • VLANs must be configured on the switch.
  • The switch port (1/1 as shown in the diagram) or ports facing toward Cloud Router must be part of a port channel.
    • The port channel must have LACP enabled, in either active or passive mode.
    • The port channel must be configured in 802.1Q trunk mode, and all VLAN IDs used by the Interconnect connection must be allowed.
    • The port channel must have 802.1Q VLAN tagging enabled.
  • The switch port (1/2 as shown in the diagram) facing toward the on-premises router can be a trunk port or an access port; an access port covers the case where a router port is dedicated to a single VLAN.
  • If trunk mode is enabled on the switch side, the on-premises router must support subinterfaces with the necessary encapsulation (802.1Q tags).
  • This configuration uses a 1440-byte MTU. However, you can use a 1500-byte MTU if you adjust the router interface configuration accordingly, and if the MTU of the attachment and the MTU of the connected VPC network are also 1500 bytes.
  • The EBGP neighbor must have multihop configured. The recommended value for this setting is 4.

Device configuration

The following listing shows a Layer 2/Layer 3 sample configuration for on-premises Switch1 (Cisco) on VLAN 1010:

        vlan 1010
          name cloud_vlan1010
 
        interface E1/1
          description connected_to_google_edge_device
          channel-group 1 mode active
 
        interface port-channel1
          description connected_to_google_edge_device
          switchport trunk encapsulation dot1q
          switchport mode trunk
          switchport trunk allowed vlan 1,1010
 
        interface E1/2
          description connected_to_onprem_router
          channel-group 2 mode active
 
        interface port-channel2
          description connected_to_onprem_router
          switchport trunk encapsulation dot1q
          switchport mode trunk
          switchport trunk allowed vlan 1,1010

The following listing shows a Layer 2/Layer 3 sample configuration for on-premises Router1 (Cisco) on VLAN 1010:

        interface E0/0
          description connected_to_onprem_switch
          channel-group 2 mode active
          no shut
 
        interface Po2
          description my-interconnect1
          no shut
 
        interface Po2.1010
          description attachment_vlan1010
          encapsulation dot1Q 1010
          ip address 169.254.10.2 255.255.255.248
          ip mtu 1440
 
        ip prefix-list TO_GCP seq 5 permit 192.168.12.0/24
 
        route-map TO_GCP_OUTBOUND permit 10
          match ip address prefix-list TO_GCP
 
        router bgp 64500
          bgp graceful-restart restart-time 1
          neighbor 169.254.10.1 description peering_to_cloud_router
          neighbor 169.254.10.1 remote-as 65200
          neighbor 169.254.10.1 ebgp-multihop 4
          neighbor 169.254.10.1 timers 20 60
          neighbor 169.254.10.1 update-source Po2.1010
          neighbor 169.254.10.1 route-map TO_GCP_OUTBOUND out

The following listing shows a Layer 2/Layer 3 sample configuration for on-premises Switch2 (Juniper) on VLAN 1020:

        set vlans cloud_vlan1020 vlan-id 1020
 
        set interfaces xe-0/1/1 description "connected_to_google_edge_device"
        set interfaces xe-0/1/1 ether-options 802.3ad ae1
 
        set interfaces ae1 aggregated-ether-options lacp active
        set interfaces ae1 unit 0 description "connected_to_google_edge_device"
        set interfaces ae1 unit 0 family ethernet-switching port-mode trunk
        set interfaces ae1 unit 0 family ethernet-switching vlan member cloud_vlan1020
 
        set interfaces xe-0/1/2 description "connected_to_onprem_router"
        set interfaces xe-0/1/2 ether-options 802.3ad ae2
 
        set interfaces ae2 unit 0 description "connected_to_onprem_router"
        set interfaces ae2 unit 0 family ethernet-switching port-mode trunk
        set interfaces ae2 unit 0 family ethernet-switching vlan member cloud_vlan1020

The following listing shows a Layer 2/Layer 3 sample configuration for on-premises Router2 (Juniper) on VLAN 1020:

      set interfaces xe-0/0/0 ether-options 802.3ad ae1
      set interfaces xe-0/0/0 description "connected_to_onprem_switch"
 
      set interfaces ae1 description my-interconnect2
      set interfaces ae1 flexible-vlan-tagging
      set interfaces ae1 aggregated-ether-options minimum-links 1
      set interfaces ae1 aggregated-ether-options lacp active
      set interfaces ae1 unit 1020 family inet mtu 1440
      set interfaces ae1 unit 1020 vlan-id 1020
      set interfaces ae1 unit 1020 family inet address 169.254.20.2/29
 
      set routing-options autonomous-system 64500
 
      set policy-options prefix-list TO_GCP 192.168.12.0/24
 
      set policy-options policy-statement TO_GCP_OUTBOUND term 1 from protocol direct
      set policy-options policy-statement TO_GCP_OUTBOUND term 1 from prefix-list TO_GCP
      set policy-options policy-statement TO_GCP_OUTBOUND term 1 then accept
      set policy-options policy-statement TO_GCP_OUTBOUND term 2 then reject
 
      set protocols bgp group config_vlan_1020 type external
      set protocols bgp group config_vlan_1020 multihop ttl 4
      set protocols bgp group config_vlan_1020 local-address 169.254.20.2
      set protocols bgp group config_vlan_1020 peer-as 65200
      set protocols bgp group config_vlan_1020 neighbor 169.254.20.1 export TO_GCP_OUTBOUND
      set protocols bgp group config_vlan_1020 neighbor 169.254.20.1 graceful-restart restart-time 1

Best practices

Follow these best practices to ensure reliable connectivity from your on-premises devices to Google Cloud when you use the Cloud Interconnect 99.9% or 99.99% availability topologies.

Configuring devices for active/active forwarding

  • Ensure that the same MED values are exchanged across all BGP sessions.
  • Enable equal-cost multipath (ECMP) routing in your BGP configuration.
  • Enable graceful restart on your BGP sessions to minimize the impact of Cloud Router task restarts. When you connect two attachments through different edge availability domains, as described in the recommended topologies, Cloud Router uses one software task per edge availability domain. Because these tasks are scheduled independently, graceful restart lets you avoid downtime when one task restarts.
  • If you are configuring two on-premises devices, connect them to each other with a routing protocol. If the devices redistribute routes to each other, use either IBGP or an IGP.
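As a hedged Cisco-style sketch of the ECMP point above (building on the Router1 sample; the second neighbor address 169.254.11.1 is an illustrative assumption for a session on the second attachment), multipath over two BGP sessions with equal MED values can be enabled with maximum-paths:

```
router bgp 64500
  bgp graceful-restart restart-time 1
  maximum-paths 2
  neighbor 169.254.10.1 remote-as 65200
  neighbor 169.254.11.1 remote-as 65200
```

With equal MED values received on both sessions, traffic toward the VPC network is then load-balanced across both attachments.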

Configuring devices for active/passive forwarding

  • To avoid asymmetric routing, make sure that the higher MED values for the passive path are applied consistently, on both the Cloud Router side and the on-premises device side.
  • Enable graceful restart on your BGP sessions to minimize the impact of Cloud Router task restarts. When you connect two attachments through different edge availability domains, as described in the recommended topologies, Cloud Router uses one software task per edge availability domain. Because these tasks are scheduled independently, graceful restart lets you avoid downtime when one task restarts.
  • If you are configuring two on-premises devices, make sure that both devices have Layer 3 connectivity to each other. If the devices redistribute routes to each other, use either IBGP or an IGP.
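As a hedged Cisco-style sketch of the passive side (the neighbor address 169.254.11.1, route-map name, and metric value are illustrative assumptions), a route-map can advertise a higher MED on the backup session so that the path is only used when the active path fails; a correspondingly higher advertised route priority would be set on the matching Cloud Router:

```
route-map TO_GCP_PASSIVE permit 10
  match ip address prefix-list TO_GCP
  set metric 200
!
router bgp 64500
  neighbor 169.254.11.1 route-map TO_GCP_PASSIVE out
```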

Check that your BGP sessions are working between your on-premises network and your Google Virtual Private Cloud (VPC) network. For more information, see Viewing Cloud Router status and routes in the Cloud Router documentation.
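For example, assuming a Cloud Router named my-router in region us-central1 (both names are placeholders), you can check the BGP session state from the gcloud CLI:

```
gcloud compute routers get-status my-router \
    --region us-central1
```

The output lists each BGP peer, its session state, and the routes learned from your on-premises network.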

Happy Learning !!!

