What is a Distributed Disaggregated Chassis (DDC)?
By Will Chang | August. 05. 2020
The Challenges for Future Networking
The chassis dominated the switch and routing market for decades. It offered multiple sockets for pluggable service modules that provide switching and routing functions or network services, and service providers took advantage of the chassis design to allocate resources on demand.
However, as we enter the 5G era, the chassis becomes a roadblock on the path to enabling future service applications and network innovations. According to GSMA, global mobile data usage will grow almost four-fold by 2025. So, when it comes to upgrading switching capacity to meet increasing demand, you face a dilemma: get a smaller chassis or a bigger one? Go small, and you end up buying a pile of small chassis as traffic grows, so management becomes an issue; go big, and you waste space and money on ports you don't need right now.
DDC, A Disruptive Innovation
DDC stands for Distributed Disaggregated Chassis. It breaks the traditional monolithic chassis into separate building blocks so that a switch cluster can be created and scaled according to the needs of the network. Physical dimensions are no longer a constraint, because you can increase service capacity gradually by adding building blocks while keeping CAPEX and OPEX at a minimal level.
Let's take a closer look at the distributed disaggregated chassis to see how each term brings more depth into reinventing the traditional chassis.
A traditional chassis switching or routing system typically has several components locked within its chassis: line cards, fabric cards, controllers, function modules, and protocol software. These components connect to a single backplane, similar to the motherboard of a personal computer. And just like a motherboard, there are a limited number of slots available for each component, so expansion is quite limited.
Line cards are what get added when capacity needs to increase. Fabric cards link the line cards together, so as more line cards are added, fabric cards are added as needed, along with more fans and power supplies to support them. The other major components of a switching or routing chassis do not affect switching and routing capacity, yet once the chassis runs out of line card slots, buying another chassis means repurchasing all of those components again.
The “distributed” in DDC means each component is distributed into its own standalone box. Each box is equipped with its own power supplies, cooling fans, CPU, chipset, and the protocol software pertaining to its specific function. Each box is 2RU in height and fits into industry-standard 19” racks. This way, when we want to scale up the chassis’ capacity, we only need to add the components that actually increase capacity: the line and fabric cards.
Additionally, instead of a single backplane connecting everything, we use industry-standard QSFP28 and QSFP-DD transceivers along with cabling options such as DACs, AOCs, AECs, and ACCs. The distributed disaggregated chassis therefore breaks through the backplane limitations of a traditional chassis. Each added box connects with cables instead of plugging into a “board,” allowing for very large chassis capacity.
A traditional chassis system typically comes with proprietary software and equipment, at a hefty price. Furthermore, the features and services available are limited to what the vendor can provide. If a feature falls outside their offering, it takes a lot of resources to get a third-party integration, and by the time it's done, something newer is available and you've spent all that time and money on old technology.
A disaggregated chassis separates the software and hardware so that the best of both worlds can come together to maximize the value and benefits for each application scenario. Each distributed component is a disaggregated white box compatible with various open-standard software. So, you can choose the software vendor of your liking and maintain the same distributed disaggregated chassis architecture.
The disaggregated white box switch and router architecture brings another benefit to the table: the same white box hardware infrastructure can serve multiple positions within the network. Traditional chassis models vary depending on the use case; proprietary systems usually have a specific purpose and require a different product family or model extension for other applications. A disaggregated white box, however, not only has its own CPU and merchant silicon, it is also compatible with any vendor's software. The same distributed disaggregated chassis can be positioned in the aggregation network, edge network, core network, and even in the data center.
This is perhaps the most straightforward part of the DDC. Despite how it looks, the distributed disaggregated chassis is considered a single chassis. Using the DDC architecture, a single switch or router chassis can be scaled from 4 Tbps up to 768 Tbps. Not only that, it can be scaled piecemeal based on the network's capacity needs, now and in the future.
The Building Blocks of DDC
The distributed disaggregated chassis architecture consists of two major white box components, which correspond to the fabric cards and line cards of a traditional chassis.
Line Card White Box
The line card white box, also known as a network cloud processor or NCP, serves as the line card module of the distributed disaggregated chassis, providing interfaces to the network. At UfiSpace, we have two types of line card white boxes, both powered by the Broadcom Jericho2: one equipped with 40x100GE service ports (NCP1-1) and the other with 10x400GE service ports (NCP2-1). Both have 13x400GE fabric ports. When only the service ports are used, the NCP1-1 and NCP2-1 can function as standalone units. The fabric ports are used to connect to the fabric card white box (NCF) in order to build a DDC cluster.
Fabric Card White Box
The fabric card white box, also known as a Network Cloud Fabric or NCF, has only fabric ports and acts as part of the backplane, connecting line card white boxes (NCPs) and forwarding traffic between them. The UfiSpace fabric card white box has 48x400GE fabric ports and is powered by the Broadcom Ramon.
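As a quick sanity check, the per-box bandwidth figures above follow directly from the port counts. Here is a minimal sketch of that arithmetic (raw aggregate figures only; encoding overhead is ignored):

```python
# Per-box aggregate bandwidth for the DDC building blocks described above.
# Port counts come from the article; figures ignore encoding overhead.

def bandwidth_tbps(port_count: int, port_speed_gbps: int) -> float:
    """Aggregate bandwidth of a group of ports, in Tbps."""
    return port_count * port_speed_gbps / 1000

# NCP1-1 service ports: 40x100GE; NCP2-1 service ports: 10x400GE
ncp1_service = bandwidth_tbps(40, 100)  # 4.0 Tbps
ncp2_service = bandwidth_tbps(10, 400)  # 4.0 Tbps

# Both NCP models: 13x400GE fabric ports
ncp_fabric = bandwidth_tbps(13, 400)    # 5.2 Tbps

# NCF: 48x400GE fabric ports
ncf_fabric = bandwidth_tbps(48, 400)    # 19.2 Tbps

print(ncp1_service, ncp2_service, ncp_fabric, ncf_fabric)
```

Note that both NCP models offer the same 4 Tbps of service capacity, and each NCP's fabric capacity (5.2 Tbps) exceeds its service capacity, which is what makes a non-blocking fabric possible.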
Building a DDC Cluster
Although the line card white boxes can work as standalone switches or routers, to unleash the true potential of the distributed disaggregated chassis architecture, a DDC cluster is the way to go. As mentioned above, each line card white box has 13x400GE fabric ports, which are used specifically to connect to the fabric white boxes.
The DDC cluster is built as a CLOS topology by connecting the fabric ports of each line card white box to the fabric white boxes, creating a full-mesh interconnection. This way, traffic from one node always has a predictable, fixed number of hops to any other node.
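The full-mesh wiring above can be sketched in a few lines of code. This is an illustrative model only (box names and counts are made up): every NCP connects to every NCF, so any NCP-to-NCP path is always exactly two hops, regardless of how large the cluster grows.

```python
# Illustrative sketch of the DDC's CLOS wiring: every line card box
# (NCP) connects to every fabric box (NCF). Box names are hypothetical.

def clos_links(num_ncp: int, num_ncf: int):
    """Return the full-mesh link list between NCPs and NCFs."""
    return [(f"ncp{i}", f"ncf{j}")
            for i in range(num_ncp)
            for j in range(num_ncf)]

links = clos_links(num_ncp=4, num_ncf=2)
print(len(links))  # 4 NCPs x 2 NCFs = 8 links

# Fixed hop count: traffic from any NCP reaches any other NCP
# through a single intermediate NCF, so the path length is always 2.
path = ["ncp0", "ncf0", "ncp3"]
print(len(path) - 1)  # 2 hops
```

The predictable two-hop path is what gives the cluster its chassis-like, deterministic latency behavior.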
The distributed disaggregated chassis also uses cell switching, rather than packet switching, between the line and fabric card boxes. Packets are chopped into cells and sprayed across the fabric interfaces, which balances the load evenly over the fabric and keeps forwarding reliable and efficient.
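To make the chop-and-spray idea concrete, here is a toy sketch of cell switching. The cell size and fabric link names are invented for illustration; the real Jericho2/Ramon cell format is hardware-defined and not modeled here.

```python
import random

# Toy model of cell switching between NCP and NCF boxes: each packet
# is chopped into fixed-size cells, and the cells are sprayed across
# all available fabric links. Cell size and link names are illustrative.

CELL_SIZE = 256  # bytes (hypothetical; real cell formats differ)

def chop_into_cells(packet: bytes, cell_size: int = CELL_SIZE):
    """Split a packet into cells of at most cell_size bytes."""
    return [packet[i:i + cell_size] for i in range(0, len(packet), cell_size)]

def spray_cells(cells, fabric_links):
    """Assign each cell to a randomly chosen fabric link."""
    return [(random.choice(fabric_links), cell) for cell in cells]

packet = bytes(1000)                       # a 1000-byte packet
cells = chop_into_cells(packet)
print(len(cells))                          # 4 cells (256+256+256+232 bytes)

assignments = spray_cells(cells, ["ncf0", "ncf1", "ncf2"])
```

Because cells from a single packet spread over all fabric links, no single link becomes a hot spot, even under bursty traffic.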
If you need to serve more network connections due to growing traffic demands, just connect a new line card white box to the fabric white boxes. When fabric capacity runs short, simply add another fabric white box. Using this method of expansion, one of our distributed disaggregated chassis can be built into a cluster with up to 192 Tbps of service capacity in a non-blocking, redundant configuration.
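The 192 Tbps figure can be reproduced from the port counts given earlier. A rough sketch of the arithmetic (a simplification; real deployments also budget for redundancy): with 13x400GE fabric ports per NCP, a fully built cluster can have up to 13 NCFs, and with 48x400GE fabric ports per NCF, up to 48 NCPs.

```python
# Reproducing the 192 Tbps cluster capacity from the port counts above.
# This is back-of-the-envelope arithmetic, not a deployment plan.

NCP_SERVICE_TBPS = 4    # each NCP: 40x100GE or 10x400GE service ports
NCP_FABRIC_PORTS = 13   # 400GE fabric ports per NCP
NCF_FABRIC_PORTS = 48   # 400GE fabric ports per NCF

# In a full mesh, each NCP uses one fabric port per NCF, so:
max_ncfs = NCP_FABRIC_PORTS   # 13 NCFs consume every NCP fabric port
max_ncps = NCF_FABRIC_PORTS   # 48 NCPs fill every NCF fabric port

total_service_tbps = max_ncps * NCP_SERVICE_TBPS
print(total_service_tbps)  # 48 x 4 = 192 Tbps
```

Scaling from a single 4 Tbps NCP to the full 192 Tbps cluster is what the "pay-as-you-grow" model refers to: capacity is added one box at a time.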
Benefits of the Distributed Disaggregated Chassis
With the capability to build small clusters that meet current capacity and scale out as demand grows, CAPEX can be kept at the lowest level. And with only two building blocks (the line card white box and the fabric white box) needed to scale out capacity, configuration effort is minimized and back-office management complexity is reduced.
Service capacity is no longer limited by the physical dimensions of the chassis. Just add additional nodes to expand capacity.
Redundant designs ensure network availability and that no single point of failure will compromise the service.
Powered by merchant silicon and compliant with open standards, the distributed disaggregated chassis can be easily integrated with any NOS, automation, and orchestration software.
The future of networking is driven by applications, and with 5G becoming more widely available, demand could grow beyond our imagination. To prepare for that uncertainty, an agile and flexible infrastructure is key. The distributed disaggregated chassis is a solution designed for the next generation of networking, helping service providers take the lead in the race to the 5G era.
Telecoms are Realizing the Benefits of Disaggregated Cell Site Gateway Routers
By Andrew Lui | July. 31. 2020
One of the big four telecoms in Taiwan recently announced that it has chosen our disaggregated cell site gateway solution for its 5G network rollout. Although 5G is nothing new in Taiwan, I believe this will be Taiwan's first installation of a white box cell site gateway router. In its press release, the telecom mentioned that it chose our disaggregated cell site gateway routers because they allow it to build a next-generation network architecture for 5G. This really says something about how disaggregated white box platforms and open network solutions benefit telecommunication companies (telecoms).
In case some of these terms sound unfamiliar, here's a short description of white boxes and open networking. Then you will start to see the benefits telecoms gain by using disaggregated cell site gateway routers in their networks.
What is a white box? Think non-proprietary. Back in the day, most, if not all, telecom networking equipment was proprietary, meaning the hardware and software components were built and supplied by a single vendor. Over the years, this led to very inflexible and expensive networking systems. In turn, that drove the development of white box hardware: equipment built with merchant silicon that does not run proprietary software but can run any type of software, because the software is separated, or disaggregated, from the hardware.
Why would white box platforms and open networking be of any interest to telecoms?
I believe the telecom industry has always kept an eye on the potential of white boxes and open networking as their popularity grew in data centers. But I'd like to think it was the 5G rollout that really put them into motion. And it boils down to speed, flexibility, and economies of scale.
Anyone who has been following 5G news knows how expensive it will be for telecoms to implement 5G. Earlier this year, Taiwan telecoms spent $4.6B on 5G spectrum alone. That's $4.6 billion US dollars, not New Taiwan Dollars! At the time, the Taipei Times reported it as the third-highest price paid in a 5G auction worldwide. Below is a chart of the top 15 prices paid for C-band spectrum since April 2020, based on a report from the Global Mobile Suppliers Association.
Additionally, because of the mmWave technology used by 5G, hundreds of thousands of small cells need to be installed. Not to mention, telecoms need to upgrade the antennas at all their cell sites, increase backhaul network capacity, and prepare their core and edge networks for all the new 5G services in development.
With all that being said, I feel the last thing telecoms need is to spend high CAPEX on proprietary equipment that may limit flexibility and solution choices. These are exciting times, as more and more 5G services and technologies become available. Now is the time to introduce flexibility into the network infrastructure and foster innovation, within a reasonable budget so that the 5G rollout doesn't slow down. To accomplish that, our 5G networking solutions can lead the way.
To summarize, our disaggregated cell site gateway routers are white box platforms integrated with open networking solutions, providing flexibility and economies of scale for 5G network rollouts. In the case of the Taiwanese telecom deploying our disaggregated cell site gateway routers, they will be able to accelerate network transformation, expand the depth and breadth of their 5G applications, and realize more business opportunities with greater flexibility.
By Marketing | July. 24. 2020
The best part of this celebration is that it allowed us to see a more human side of our coworkers and revel in personal connections. We even had touching speeches from our CEO and Chairman, cheering us on and giving us guidance on the direction of the company. Fortunately, it is never routine at UfiSpace: our work isn't just a job that needs to be done, it is something we want to do. It was a unique experience and a wonderful environment to be in, where everyone was relaxed and having fun. We are deeply thankful and blessed to be together with the special, unique, amazing UfiSpace team.
Technology and Innovation Fueled by Covid19
By Andrew Lui | June. 30. 2020
I feel it goes without saying that the world will never be the same after Covid19. The pandemic has fostered the widespread adoption of working from home and social distancing, which turned internet-based services that were "nice-to-have" into "must-haves". During this pandemic, the importance of technology and innovation within the telecommunication industry was brought into the public light. With all that's been going on lately, I thought it would be interesting to look at how telecoms are using innovations in technology to get through the trials put forth by Covid19.
First, a bit of background on the data surges telecoms have seen, and just how important it is to always have a strong and capable network infrastructure. According to a GSMA survey, telecoms worldwide have seen voice and data traffic surge by more than 50% due to the effects of Covid19. In the United States, we saw up to a 52% increase in VPN traffic, and core network traffic surged 23%. In Europe, mobile traffic grew 63% and broadband usage surged 60%. In Japan, there were reports of data usage increasing 40%. Without the telecoms' consistent efforts to improve their network capabilities, the stay-at-home economy could have turned out very differently.
At first, I was holding my breath. For the majority of the population to suddenly rely on the internet for most of their necessities must have put tremendous pressure on the telecoms' networks. I was able to sigh in relief when announcements came of how well their networks were coping with the situation. And I can't help but smile a little knowing that UfiSpace was part of how well some telecom networks handled it. Last year we launched our Distributed Disaggregated Chassis solution, which increased our customers' network capacity to heights never seen before. I could not be prouder that our 5G solutions were part of the answer for telecommunication networks coping with the data surges in the first few months of Covid19.
Our Distributed Disaggregated Chassis (DDC) solution breaks apart (i.e., disaggregates) the line card and fabric components of the traditional monolithic chassis router into easier-to-manage standalone, open network white boxes called NCPs ("line card" white boxes) and NCFs ("fabric" white boxes). The NCPs and NCFs are connected via standard QSFP-DD optics. This not only lets our customers increase their network capacity up to hundreds of Tbps, it also provides flexibility in capacity scaling with a "pay-as-you-grow" model. Whereas a traditional chassis is limited by how many slots it has and can only be upgraded by purchasing another large chassis with a set number of slots, our DDC scales horizontally by connecting additional NCPs and NCFs.
Another benefit of our DDC solution is that the hardware and software are disaggregated as well. By disaggregating hardware from software and utilizing our open networking interfaces, telecoms not only enabled more disruptive technologies to come in, they also realized significant CAPEX savings as well as OPEX savings, for example from SDN automation. One instance of these new technologies being put to use during Covid19 was when telecom enterprise customers needed more video conferencing capacity: instead of physically going to the premises to install hardware with these features, the telecom could simply spin up more software instances to meet demand.
My heart goes out to all of those around the world who were affected by such a tragic pandemic and I wish the best for everyone who is trying to make the best of the situation they are in. But I believe it’s times like these where we will push ourselves to new heights. I look forward to seeing all the new innovations and technologies that will be brought forth to improve our way of life.