Thursday 20 August 2015

Google's Global Network Infrastructure

http://googleresearch.blogspot.com.au/2015/08/pulling-back-curtain-on-googles-network.html
http://googlecloudplatform.blogspot.com.au/2015/08/a-visual-look-at-googles-innovation-in.html
http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43837.pdf

Today, at the ACM SIGCOMM conference, Google presented a paper with the technical details of five generations of its in-house data center network architecture. The paper expands on a talk given at the Open Networking Summit a few months ago.

A generic three-tier Clos architecture with edge switches (ToRs), aggregation blocks and spine blocks. All generations of Clos fabrics deployed in Google datacenters follow variants of this architecture:
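
As a back-of-the-envelope illustration of how such a fabric scales, here is a minimal Python sketch using the textbook formula for a nonblocking three-stage Clos built from identical k-port switches. The numbers below are illustrative only; Google's fabrics use chassis-based aggregation and spine blocks rather than single fixed switches.

```python
# Sketch: scale of a nonblocking 3-tier folded Clos built from k-port switches.
# Each switch splits its ports evenly between the tier below and the tier above,
# which yields the standard k^3/4 host-count bound (illustrative, not Google's
# exact design parameters).

def clos_max_hosts(k: int) -> int:
    """Maximum hosts in a nonblocking 3-tier Clos of k-port switches: k^3 / 4."""
    return (k ** 3) // 4

def fabric_bisection_tbps(k: int, port_gbps: float) -> float:
    """Full-bisection bandwidth in Tbps: every host gets line rate."""
    return clos_max_hosts(k) * port_gbps / 1000

# Hypothetical building block: 64-port 10G switches.
print(clos_max_hosts(64))             # 65536 hosts
print(fabric_bisection_tbps(64, 10))  # 655.36 Tbps
```

The point of the structure is that capacity grows with the cube of the switch port count, so moving to denser merchant silicon each generation buys superlinear fabric scale.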


Google hardware innovations over the years:
Firehose (2005-2006)
  • Chassis-based solution (but no backplane)
  • Bulky CX4 copper cables restrict scale

WatchTower (2008)
  • Chassis with backplane
  • Fiber (10G) in all stages
  • Scales to 82 Tbps fabric
  • Global deployment

Saturn (2009)
  • 288x10G port chassis
  • Enables 10G to hosts
  • Scales to 207 Tbps fabric
  • Reuse in WAN

Jupiter (2012)
  • Enables 40G to hosts
  • External control servers
  • OpenFlow
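
The fabric-scale figures above can be put side by side to show the growth rate across generations. A small sketch, noting that Jupiter's 1.3 Pbps bisection-bandwidth figure comes from the SIGCOMM paper rather than this summary:

```python
# Growth in fabric capacity across the generations named above.
# WatchTower and Saturn figures are quoted in this post; Jupiter's
# 1.3 Pbps (1300 Tbps) bisection bandwidth is from the SIGCOMM paper.
generations = {
    "WatchTower (2008)": 82,    # Tbps
    "Saturn (2009)": 207,       # Tbps
    "Jupiter (2012)": 1300,     # Tbps
}

prev = None
for name, tbps in generations.items():
    note = f" ({tbps / prev:.1f}x previous)" if prev else ""
    print(f"{name}: {tbps} Tbps{note}")
    prev = tbps
```

Roughly a 2.5x step from WatchTower to Saturn and over 6x from Saturn to Jupiter, which is the "exponential growth in bandwidth capacity" the authors describe below.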



"Rather than use commercial switches targeting small-volume, large feature sets, and high reliability, we targeted general-purpose merchant switch silicon: commodity-priced, off-the-shelf switching components. To keep pace with server bandwidth demands, which scale with cores per server and Moore's Law, we emphasized bandwidth density and frequent refresh cycles. Regularly upgrading network fabrics with the latest generation of commodity switch silicon allows us to deliver exponential growth in bandwidth capacity in a cost-effective manner."
