Application Scenarios for 5G Fronthaul Gateways
By Andrew Lui | October 05, 2020
The 5G fronthaul gateway allows telecoms to transition smoothly from a 4G RAN (Radio Access Network) to a 5G RAN by enabling communication between the RRU (Remote Radio Unit) and BBU (Baseband Unit) using both CPRI and eCPRI. However, a standard 5G RAN is still being defined, and several different application scenarios are being explored, which got me wondering: what are the potential application scenarios for the 5G fronthaul?
There are four major types of scenarios identified for the 5G fronthaul:
- RoE Fronthaul Conversion
- LowPHY Fronthaul Conversion
- RoE to LowPHY Fronthaul Conversion
- PICO Fronthaul Conversion
Let's go into more detail on each scenario to see how it's being applied, and how we designed a flexible 5G fronthaul gateway with our partner, Altran (part of the Capgemini Group), that can potentially be used for any type of 5G fronthaul scenario.
RoE (Radio over Ethernet) Fronthaul Conversion
Let's say you're rolling out 5G and you want to install 5G RRUs at your cell sites. You want to implement a BBU hotel to cut down on costs, but your 4G and 5G BBUs haven't been integrated yet, so they are still separate at the hub site. This means that not only do you need to aggregate both the 4G and 5G RRUs, you also need to route the signals back to their respective BBUs. In this scenario, the RoE conversion feature of our 5G fronthaul gateway comes in handy.
Our 5G fronthaul gateway utilizes the Radio over Ethernet (RoE) protocol, which allows 4G radio traffic to be sent over standard Ethernet. This is done by encapsulating 4G into Ethernet frames at the cell site with the fronthaul gateway and using another fronthaul gateway to decapsulate it back to CPRI at hub site before feeding it to the 4G BBU. The decapsulation is needed at the hub site because the 4G BBUs are still using CPRI. At the same time, the 5G traffic is sent directly to the 5G BBU using eCPRI. The Converged Access Switch (CAS) acts as an aggregation switch to aggregate multiple fronthaul gateways at the hub site.
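To make the encapsulation step concrete, IEEE 1914.3 RoE frames carry a small header (subtype, flow ID, length, ordering info) in front of the radio payload. The sketch below is a simplified software model of that round trip, for illustration only; a real gateway does this in hardware, with mapper state and timing handling that are omitted here.

```python
import struct

# Simplified IEEE 1914.3-style RoE header: subType (1 byte), flowID (1 byte),
# length (2 bytes), orderingInfo (4 bytes, e.g. a sequence number).
ROE_HEADER_FMT = "!BBHI"  # 8 bytes total, network byte order

def roe_encapsulate(cpri_payload: bytes, flow_id: int, seq_num: int,
                    subtype: int = 0x01) -> bytes:
    """Cell-site gateway: wrap a chunk of CPRI IQ data for Ethernet transport."""
    header = struct.pack(ROE_HEADER_FMT, subtype, flow_id,
                         len(cpri_payload), seq_num & 0xFFFFFFFF)
    return header + cpri_payload

def roe_decapsulate(frame: bytes):
    """Hub-site gateway: recover the CPRI payload before feeding the 4G BBU."""
    subtype, flow_id, length, seq = struct.unpack(ROE_HEADER_FMT, frame[:8])
    return frame[8:8 + length], flow_id, seq
```

A round trip through both functions returns the original payload, which mirrors the two-gateway arrangement in this scenario: encapsulate at the cell site, decapsulate at the hub site.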
Using RoE conversion at the fronthaul gateway, you do not need to wait until your 4G and 5G BBUs are completely pooled together to take advantage of the BBU hotel, which means a faster time to market for your cost-reduction solution.
LowPHY Fronthaul Conversion
As the RAN develops, there will most likely come a time when all the BBUs are pooled together while the 4G RRUs are still active. 5G will not replace 4G right away, so most brownfield projects will likely end up in a scenario like this one. When this happens, a CPRI to eCPRI conversion would be ideal for 4G fronthaul to the 5G DU/CUs.
In this case, our 5G fronthaul gateway takes on the BBU LowPHY processing and converts 4G CPRI to eCPRI at the cell site, which allows for a fronthaul of just eCPRI to the hub site DU/CU. You'll notice a major difference in this scenario compared to RoE conversion: since the 4G RRU's radio traffic is already converted to eCPRI, there is no need for another fronthaul gateway conversion at the hub site. Therefore, the CAS aggregates all the eCPRI signals from multiple fronthaul gateways and switches them to the 5G DU/CU pool.
This scenario is geared more towards the end goal of a 4G/5G RAN with optimized BBU pooling. Since the BBU LowPHY processing is done at the 5G fronthaul gateway, there are also significant bandwidth and cost savings. This approach also supports the 7.2x functional split from the O-RAN standard.
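Where the bandwidth savings come from can be shown with back-of-envelope arithmetic: CPRI carries continuous time-domain IQ samples per antenna, while the 7.2x split carries frequency-domain IQ only for the used subcarriers. All figures below (sample rate, bit widths, antenna counts) are illustrative assumptions, not measured numbers; real savings depend heavily on compression and traffic load.

```python
def cpri_rate_gbps(sample_rate_msps, iq_bits, antennas, line_coding=10 / 8):
    """Rough time-domain IQ rate over CPRI (ignores control words)."""
    return sample_rate_msps * 1e6 * 2 * iq_bits * antennas * line_coding / 1e9

def split72_rate_gbps(used_subcarriers, symbols_per_sec, iq_bits, layers):
    """Rough frequency-domain IQ rate at the 7.2x split (no compression)."""
    return used_subcarriers * symbols_per_sec * 2 * iq_bits * layers / 1e9

# Example: a 100 MHz NR carrier with 30 kHz SCS -> 122.88 Msps sampling,
# 3276 used subcarriers, ~28,000 OFDM symbols/s; 4 antennas vs 4 layers,
# 15-bit I and Q words. These parameters are illustrative only.
td = cpri_rate_gbps(122.88, 15, 4)          # ~18.4 Gbps of time-domain IQ
fd = split72_rate_gbps(3276, 28000, 15, 4)  # ~11.0 Gbps of frequency-domain IQ
print(f"CPRI ~{td:.1f} Gbps vs split 7.2x ~{fd:.1f} Gbps")
```

Even before compression, moving the LowPHY to the cell site trims the fronthaul rate in this toy example, and unlike CPRI, the eCPRI rate also scales down with actual cell load.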
RoE (Radio over Ethernet) to LowPHY Fronthaul Conversion
Here’s where things get a little tricky and we start to explore alternative scenarios that may happen during the transition to 5G DU/CUs. In this scenario, Ethernet to Ethernet transport protocol is required between the 4G cell site and 5G DU/CU at the hub site. This may come into play in areas where 5G DU/CU pooling was completed before all the cell sites were upgraded with 5G or at cell sites simply for 4G coverage.
In this scenario, the CPRI is encapsulated at the cell site by an RoE-enabled fronthaul gateway and aggregated at the hub site by the converged access switch. The CPRI is then converted to eCPRI by a LowPHY-enabled fronthaul gateway before being sent back to the CAS and switched to the DU/CU pool.
Deploying the LowPHY fronthaul gateway at the hub site will not provide the bandwidth benefits realized from deploying it at the cell site; however, it is a secondary solution for utilizing 5G DU/CU pooling when LowPHY conversion at the cell site is not available.
PICO Fronthaul Conversion
At some cell sites, there just isn't enough room or power to install a fronthaul gateway. In these scenarios, CPRI can be converted at the hub site, similar to the scenarios above: it is converted to eCPRI by a LowPHY fronthaul gateway at the hub site and sent to the 5G DU/CU pool.
5G Fronthaul Challenges and Potential Future Application Scenarios
It goes without saying that the scenarios described above may not encompass all that is to come for 5G fronthaul applications. (For example, we didn't even mention varying port requirements or how the placement of DUs and CUs would affect the fronthaul.) But they should give you an idea of some of the major scenarios identified and help you start thinking about which one would suit your RAN architecture.
One question you may be asking is, with all these scenarios, just how many fronthaul gateway variations would I need? Here’s a list of some 5G fronthaul challenges:
- Choosing between RoE vs LowPHY
- Varying quantities of RRUs that need to be aggregated at each cell site
- Multiple timing protocols
- The variety of RRU and BBU vendors
Would you be surprised if I told you that you only need one fronthaul gateway to meet all these challenges?
Defining Characteristics of a Flexible 5G Fronthaul Gateway
Working with our partner, Altran, we set forth on a journey to design and develop a 5G fronthaul gateway that would fit into most, if not all, of the scenarios mentioned above, and much more. Our goal is to provide telecoms with a solution to implement a 5G fronthaul now that has the flexibility to meet future 5G application scenarios. This allows our customers to invest in their RAN knowing that they will save time, money, and management headaches by utilizing one fronthaul gateway for any scenario.
To create such a solution, our flexible 5G fronthaul gateway must meet the following characteristics:
1. High Precision Timing
For 5G, timing accuracy is more critical than ever. In a fronthaul network, the time difference must be under 3 microseconds, and for some critical applications it must be within nanoseconds. Our in-house Network Timing Module provides ITU-T Class C precision to keep the time error within 10 nanoseconds.
A flexible 5G fronthaul gateway should support the multiple standards defined to enhance time precision over the network. Synchronous Ethernet (SyncE) is an ITU-T standard that provides frequency synchronization over a physical interface, while Precision Time Protocol (PTP, IEEE 1588v2) is designed to exchange time information over the packet network.
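PTP's accuracy hinges on a four-timestamp exchange between master and slave; the clock offset and mean path delay fall out of simple arithmetic, assuming a symmetric path. A minimal sketch of that core calculation:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute clock offset and mean path delay from one PTP exchange.

    t1: master sends Sync        t2: slave receives Sync
    t3: slave sends Delay_Req    t4: master receives Delay_Req
    Assumes a symmetric path; any asymmetry shows up directly as time
    error, which is one reason fronthaul gear needs hardware timestamping.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Timestamps in nanoseconds: true one-way delay 1000 ns, slave clock
# running 500 ns ahead of the master.
offset_ns, delay_ns = ptp_offset_and_delay(0, 1500, 2000, 2500)
print(offset_ns, delay_ns)
```

The slave then steers its clock by the computed offset; SyncE complements this by keeping the two clocks running at the same frequency between PTP exchanges.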
2. Modularized Design
Telecommunications is a highly geography-dependent segment, especially in the fronthaul. Various configurations are needed for different deployment scenarios (in addition to those mentioned above), which means there is no one-size-fits-all solution. To get around that, we designed the service ports of our fronthaul gateway to be modularized and built with powerful field programmable gate arrays (FPGAs), which can be customized to support different pluggable standards, port counts, and new features.
3. Field Programmability
Due to the complexity of the 5G fronthaul, not all standards will be suitable for mass commercialization, nor will they be readily available from mainstream vendors. Equipped with field-upgradable FPGAs, UfiSpace and our partner, Altran, are working together to realize cutting-edge features, such as RoE encapsulation with support for LowPHY eCPRI conversion, when the technology and infrastructure are ready.
Furthermore, our comprehensive SDKs enable the front panel ports to be programmed for different functions according to the field application. For example, urban areas may require more eCPRI interfaces, while in rural areas CPRI is still the majority. The service ports on the 5G fronthaul gateway can therefore be dynamically configured with more eCPRI interfaces when deployed in cities and more CPRI interfaces in rural settings.
4. Multi-Vendor Openness
The vision for the future of networking is openness, where solutions from different vendors can be integrated seamlessly. Working with Altran, we are moving towards multi-vendor compliance to embrace this open world. From a hardware perspective, UfiSpace guarantees interoperability at the physical layer with other devices, while Altran's ISS platform is responsible for communication on the control and management planes with other systems.
With these characteristics, our 5G fronthaul gateway isn't limited to just packet fronthaul. Our modularized and programmable design enables our fronthaul gateways to be programmed for various purposes, such as leveraging the on-board SerDes to develop applications like R-HUBs, RF transmitters, OTN equipment, or even transponders. This truly allows telecoms to get the best value from their fronthaul investment and create a flexible infrastructure to meet all the possible scenarios that will evolve from 5G.
What is a Distributed Disaggregated Chassis (DDC)?
By Will Chang | August 05, 2020
The Challenges for Future Networking
The chassis has dominated the switch and routing market for decades, offering multiple sockets for pluggable service modules that provide switching and routing functions or network services. Service providers took advantage of the chassis design to allocate resources on demand.
However, as we come upon the 5G era, the chassis becomes a roadblock on the path to enabling future service applications and network innovations. According to GSMA, global mobile data usage will grow almost four-fold by 2025. So, when it comes to upgrading switching capacity to fulfill increasing demand, you will be faced with the dilemma of whether to get a smaller chassis or a bigger one. Go small, and you will end up buying a bunch of small chassis as traffic grows, which makes management an issue; go big, and you will waste space and resources on ports you don't need right now.
DDC, A Disruptive Innovation
DDC stands for Distributed Disaggregated Chassis. It breaks up the traditional monolithic chassis into separate building blocks in order to create and scale a switch cluster according to the needs of the network. Physical dimensions are no longer a problem, because you can increase service capacity gradually by adding building blocks while keeping CAPEX and OPEX at a minimal level.
Let's take a closer look at the distributed disaggregated chassis to see how each term brings more depth into reinventing the traditional chassis.
Distributed
A traditional chassis switching or routing system would typically have several components locked within its chassis, such as line cards, fabric cards, controllers, function modules, and protocol software. These components are connected to a single backplane, which is similar to a motherboard in a personal computer. Just like a motherboard, there are a limited number of slots available for each component, so expansion is quite limited.
Line cards are what get added when capacity needs to be increased. The fabric cards link the line cards together, so as more line cards are added, more fabric cards are needed. Of course, more fans and power supplies are needed as well in order to support the additional line and fabric cards. The other major components of a switching or routing chassis do not affect switching and routing capacity. However, once the chassis runs out of line card slots, getting another chassis means repurchasing all of those components again.
What the “distributed” in DDC does is, we distribute each component into standalone boxes. Each box is equipped with its own power supplies, cooling fans, CPU, chipsets and protocol software pertaining to their specific functions. Each box is 2RU in height and can be fitted onto industry standard 19” racks. This way, when we want to scale up our chassis’ capacity, we only need to add on the components that will improve our capacity, which are the line and fabric cards.
Additionally, instead of a single backplane connecting everything together, we use industry standard QSFP28 and QSFP-DD transceivers as well as cables such as DACs, AOCs, AECs, and ACCs. Our distributed disaggregated chassis therefore breaks through the backplane limitations of a traditional chassis. Each add-on is connected with cables instead of plugging into a board, allowing for a very large chassis capacity.
Disaggregated
A traditional chassis system would typically come with proprietary software and equipment, which comes at a hefty price. Furthermore, the features and services available are limited to what the vendor can provide. If a feature falls outside their offering, it takes a lot of resources to get a third-party integration. By that time, something new is available and you've just spent all that time and money on old technology.
A disaggregated chassis would separate the software and hardware so that the best of both worlds can come together to maximize the value and benefits for the application scenario. Each of the distributed components are disaggregated white boxes, which are compatible with various open standard software. So, you can choose the software vendor of your liking and maintain the same distributed disaggregated chassis architecture.
The disaggregated white box switch and router architecture also brings another benefit to the table: utilizing the same white box hardware infrastructure for multiple positions within the network. Traditional chassis models vary depending on their use case; proprietary systems usually have a specific purpose and require a different product family or model extension for other applications. However, a disaggregated white box not only has its own CPU and merchant silicon, it is also compatible with any vendor's software. The same distributed disaggregated chassis can be positioned in the aggregation network, edge network, core network, and even in the data center.
Chassis
This is perhaps the most straightforward part of the DDC. Despite how it looks, the distributed disaggregated chassis is considered one single chassis. Using the DDC architecture, a single switch or router chassis has the potential to be scaled from 4 Tbps up to 768 Tbps. Not only that, it can be scaled piecemeal based on network capacity needs now and in the future.
The Building Blocks of DDC
The distributed disaggregated chassis architecture consists of two major white box components which correspond to fabric and line interface in traditional chassis.
Line Card White Box
The line card white box, also known as a network cloud processor or NCP, serves as the line card module in the distributed disaggregated chassis, providing interfaces to the network. At UfiSpace, we have two types of line card white boxes or NCPs, both of which are powered by the Broadcom Jericho2. One is equipped with 40x100GE service ports (NCP1-1) and the other with 10x400GE service ports (NCP2-1). Both have 13x400GE fabric ports. When just utilizing the service ports without connecting the fabric ports, the NCP1-1 and NCP2-1 can function as standalone units. The fabric ports are used for connecting to the fabric card white box (NCF) in order to build a DDC cluster.
Fabric Card White Box
The fabric card white box, also known as a Network Cloud Fabric or NCF, only has fabric ports and works as part of the backplane to connect line card white boxes (NCPs) and forwards traffic between them. The UfiSpace fabric card white box has 48x400GE fabric ports and is powered by the Broadcom Ramon.
Building a DDC Cluster
Although the line card white boxes can work as standalone switches or routers, to unleash the true potential of the distributed disaggregated chassis architecture, the DDC cluster is the way to go. As mentioned above, each line card white box has 13x400GE fabric ports, which are specifically used to connect to the fabric card white boxes.
The DDC cluster is built using a Clos topology, connecting the fabric ports on each line card white box to the fabric card white boxes to create a full mesh interconnection. In this way, traffic from one node always has a predictable, fixed number of hops to any other node.
The distributed disaggregated chassis also utilizes cell switching, instead of packet switching, between the line and fabric card boxes. Packets are chopped into fixed-size cells and sprayed across the fabric links, which balances the load and enables reliable, efficient forwarding.
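The cell-spraying idea can be sketched in a few lines: chop a packet into fixed-size cells, spread them across the available fabric links, and reorder by sequence number at the egress line card. The cell size and spraying policy below are illustrative assumptions; real fabrics do this in silicon with dedicated reorder buffers.

```python
import random

CELL_SIZE = 256  # bytes; real fabrics use fixed-size cells of a similar order

def spray(packet: bytes, num_fabric_links: int):
    """Chop a packet into cells and assign each to a fabric link."""
    cells = []
    for seq, offset in enumerate(range(0, len(packet), CELL_SIZE)):
        link = random.randrange(num_fabric_links)  # spread load across links
        cells.append((link, seq, packet[offset:offset + CELL_SIZE]))
    return cells

def reassemble(cells):
    """Egress line card: reorder cells by sequence number and rejoin them."""
    return b"".join(data for _, _, data in sorted(cells, key=lambda c: c[1]))

# A 1500-byte packet sprayed over 13 fabric links survives the round trip.
packet = bytes(random.randrange(256) for _ in range(1500))
assert reassemble(spray(packet, num_fabric_links=13)) == packet
```

Because every packet is diced into uniform cells regardless of its size, no single fabric link becomes a hotspot for large flows, which is what makes the fabric behave like one big non-blocking backplane.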
If you want to serve more network connections due to growing traffic demands, just connect a new line card white box to the fabric white boxes. When fabric capacity runs out, simply add another fabric white box. Using this method of expansion, our distributed disaggregated chassis can be built into a cluster of up to 192 Tbps service capacity in a non-blocking, redundant configuration.
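The cluster sizing can be sanity-checked with back-of-envelope arithmetic using the figures from this post: 4 Tbps of service ports and 13x400GE (5.2 Tbps) of fabric ports per NCP, and 48x400GE fabric ports per NCF. The helper below is an illustrative sketch under those assumptions, not a network planning tool.

```python
def cluster_service_capacity_tbps(num_ncp: int,
                                  num_ncf: int = 1,
                                  ncp_service_tbps: float = 4.0,
                                  ncp_fabric_ports: int = 13,
                                  ncf_fabric_ports: int = 48,
                                  port_tbps: float = 0.4) -> float:
    """Rough DDC cluster capacity check using this post's per-box figures.

    Non-blocking requires each NCP's fabric bandwidth (13 x 400GE = 5.2 Tbps)
    to cover its 4 Tbps of service ports, and enough total NCF ports to
    terminate every NCP fabric link.
    """
    assert ncp_fabric_ports * port_tbps >= ncp_service_tbps, \
        "NCP fabric bandwidth would be blocking"
    assert num_ncp * ncp_fabric_ports <= num_ncf * ncf_fabric_ports, \
        "not enough NCF fabric ports for all NCP fabric links"
    return num_ncp * ncp_service_tbps

# 48 NCPs x 13 fabric links = 624 links, exactly filling 13 NCFs' 624 ports.
print(cluster_service_capacity_tbps(num_ncp=48, num_ncf=13), "Tbps")
```

With these numbers, 48 NCPs and 13 NCFs yield the 192 Tbps cluster mentioned above; the actual cabling plan and redundancy scheme are of course more nuanced than this sketch.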
Benefits of the Distributed Disaggregated Chassis
With the capability of being able to build small clusters to meet current capacity and scale out as demand grows, the CAPEX can be maintained at the lowest level. With only two building blocks (the line card white box and fabric white box) needed to scale out capacity, configuration efforts are minimized and the management complexity in the back office can be simplified.
The service capacity is no longer limited by the physical dimensions of the chassis. Just add additional nodes to expand the capacity.
Redundant designs ensure network availability and that no single point of failure will compromise the service.
Powered by merchant silicon and compliant with open standards, the distributed disaggregated chassis can easily be integrated with any NOS, automation, or orchestration software.
The future of networking is driven by applications, and with 5G becoming more widely available, the demand potential can go beyond our imagination. To prepare for uncertainty, an agile and flexible infrastructure is key. The distributed disaggregated chassis is a solution designed for the next generation of networking, helping service providers take the lead in the race to the 5G era.
Telecoms are Realizing the Benefits of Disaggregated Cell Site Gateway Routers
By Andrew Lui | July 31, 2020
One of the big four telecoms in Taiwan recently announced that they have chosen our disaggregated cell site gateway solution for their 5G network rollout. Although 5G is nothing new in Taiwan, I believe this will be Taiwan's first installation of a white box cell site gateway router. In their press release, they mentioned that they chose our disaggregated cell site gateway routers because they allow them to build a next generation network architecture for 5G. This really says something about how disaggregated white box platforms and open network solutions benefit telecommunication companies (telecoms).
Just in case some of these terms sound unfamiliar, here's a short description of white box and open networking. Then you will start to see the benefits telecoms gain by using disaggregated cell site gateway routers in their networks.
What is a white box? Think non-proprietary. Back in the day, most if not all telecom networking equipment was proprietary, meaning the hardware and software components were built and supplied by a single vendor. Over the years, this led to very inflexible and expensive networking systems. In turn, that led to the development of white box hardware: equipment built using merchant silicon that does not run proprietary software but can run any type of software. That's because the software is separated, or disaggregated, from the hardware.
Why would white box platforms and open networking be of any interest to telecoms?
I believe the telecom industry has always kept an eye on the potential of white boxes and open networking as their popularity grew in data centers. But I'd like to think it was the 5G rollout that really put things into motion. And it boils down to speed, flexibility, and economies of scale.
Anyone who has been following 5G news knows how expensive it will be for telecoms to implement 5G. Earlier this year, Taiwan telecoms spent $4.6B on 5G spectrum alone. That's $4.6 billion US dollars, not New Taiwan Dollars! At the time, it was reported by the Taipei Times to be the third highest priced 5G auction in the world. Below is a chart of the top 15 prices paid for C-band spectrum since April 2020, based on a report from the Global Mobile Suppliers Association.
Additionally, due to the mmWave technology used by 5G, hundreds of thousands of small cells need to be installed. Not to mention telecoms need to upgrade the antennas at all the cell sites, increase backhaul network capacity, and prepare their core and edge networks for the development of all the new 5G services.
With all that being said, I feel the last thing telecoms need is to be spending high capex on proprietary equipment that may limit flexibility and solution choices. These are exciting times as more and more 5G services and technologies are becoming available. Now is the time to introduce flexibility into the network infrastructure and foster innovation, but within a reasonable budget so that it doesn’t slow down the 5G rollout. To accomplish that, our 5G networking solutions can lead the way.
To summarize, our disaggregated cell site gateway routers are white box platforms integrated with open networking solutions that will provide flexibility and economies of scale for 5G network rollouts. In the use case of our disaggregated cell site gateway router by the telecom in Taiwan, they will be able to accelerate network transformation, expand the depth and breadth of 5G applications, and realize more business opportunities in a more flexible manner.
By Marketing | July 24, 2020
The best part of this celebration was that it allowed us to see a more human side of our coworkers and revel in personal connections. We even had touching speeches from our CEO and Chairman, cheering us on and giving us guidance on the direction of the company. Fortunately, it is never routine at UfiSpace, as our work isn't just a job that needs to be done; it is something we want to do. It was a unique experience and a wonderful environment to be in, where everyone was relaxed and having fun. We are deeply thankful and blessed to be together with the special, unique, amazing UfiSpace team.
Technology and Innovation Fueled by Covid19
By Andrew Lui | June 30, 2020
I feel it goes without saying that the world will never be the same after Covid19. It has fostered the widespread adoption of working from home and social distancing, which made internet-based services that were "nice-to-have" become "must-haves". During this pandemic, the importance of technology and innovation within the telecommunication industry was brought into the public eye. With all that's been going on lately, I thought it would be interesting to look at how telecoms are using innovations in technology to get through the trials put forth by Covid19.
First, a bit of background on the types of data surges telecoms have seen, and just how important it is to always have a strong and capable network infrastructure. According to a GSMA survey, telecoms worldwide have seen voice and data traffic surge by more than 50% due to the effects of Covid19. In the United States, we saw up to a 52% increase in VPN traffic, with core network traffic surging 23%. In Europe, mobile traffic grew 63% and broadband usage surged 60%. In Japan, there were reports of data usage increasing 40%. Without the telecoms' efforts to consistently improve their network capabilities, the stay-at-home economy could have turned out very differently.
At first, I was holding my breath. To suddenly have the majority of the population using the internet for most of their necessities must have put a tremendous amount of pressure on the telecoms' networks. I was able to breathe a sigh of relief when announcements came of how well their networks were coping with the situation. And I can't help but smile a little knowing that UfiSpace was part of how well some telecom networks handled it. Last year we launched our Distributed Disaggregated Chassis solution, which increased our customers' network capacity to heights never seen before. I could not be prouder that our 5G solutions were part of the answer for telecommunication networks coping with the data surges in the first few months of Covid19.
Our Distributed Disaggregated Chassis (DDC) solution breaks apart (i.e. disaggregates) the line card and fabric components of the traditional monolithic chassis router into easier to manage standalone, open network white boxes called NCP (“line card” white boxes) and NCF (“fabric” white boxes). The NCP and NCF are connected via standard QSFP-DD optics. This not only provides our customers with the means to increase their network capacity up to hundreds of Tb per second, it also provides flexibility in capacity scaling using a “pay-as-you-grow” model as well. For example, whereas the traditional chassis is limited by how many slots it has and can only be upgraded by purchasing another large chassis with a set number of slots, our DDC can scale horizontally by connecting additional NCPs and NCFs.
Another benefit of our DDC solution is that the hardware and software are disaggregated as well. By disaggregating hardware from software and utilizing our open networking interfaces, telecoms not only enabled more disruptive technologies to come in, they also yielded significant capex savings as well as opex savings, such as from SDN automation. One instance of these new technologies being utilized during Covid19 was telecom enterprise customers needing to scale up video conferencing: instead of physically going to the premises to install hardware components with these features, they could just spin up more software instances to meet demand.
My heart goes out to all of those around the world who were affected by such a tragic pandemic and I wish the best for everyone who is trying to make the best of the situation they are in. But I believe it’s times like these where we will push ourselves to new heights. I look forward to seeing all the new innovations and technologies that will be brought forth to improve our way of life.