

Frequently Asked Questions

DDC DCSG
DDC stands for Distributed Disaggregated Chassis. It breaks up the traditional monolithic chassis routing system into standalone, manageable white boxes, each with its own power supplies, fans, and controllers. UfiSpace's DDC solution separates the “line cards” and “fabric cards” of a traditional chassis into standalone white boxes called NCPs (Network Cloud Packet Forwarders, or “line card” white boxes, models S9700-53DX and S9700-23D) and NCFs (Network Cloud Fabrics, or “fabric card” white boxes, model S9705-48D). The NCPs and NCFs are interconnected via 400GE cables, which replace the backplane of a traditional chassis.
 
The distributed disaggregated chassis solution removes the system capacity limits imposed by a traditional chassis. Because the system is no longer bound to a single chassis, its physical size is no longer an issue. Additionally, the backplane's electrical conductance is no longer a limiting factor, since the system's backplane has been replaced by industry standard copper or optical cables, which can be added as needed. The S9700 Series also allows for convenient horizontal expansion, from small (16Tbps) and medium (96Tbps) to large clusters (192Tbps) for core applications, which is more cost effective and flexible than a traditional chassis system.
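As a rough sanity check on these figures, here is a minimal Python sketch, assuming 4Tbps of service capacity per NCP and the 4/24/48-NCP small/medium/large reference topologies described later in this FAQ:

# Rough capacity check for DDC cluster sizes (assumes 4 Tbps of service
# capacity per NCP, i.e. 40x100GE on the S9700-53DX or 10x400GE on the S9700-23D).
NCP_SERVICE_TBPS = 4

clusters = {"small": 4, "medium": 24, "large": 48}   # NCP count per reference topology

for name, ncp_count in clusters.items():
    print(f"{name} cluster: {ncp_count} NCPs x {NCP_SERVICE_TBPS} Tbps = "
          f"{ncp_count * NCP_SERVICE_TBPS} Tbps service capacity")

# small cluster:   4 x 4 =  16 Tbps
# medium cluster: 24 x 4 =  96 Tbps
# large cluster:  48 x 4 = 192 Tbps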
 
Savings | Item             | DDC                                        | Legacy
CAPEX   | Chassis          | Use what you need, pay as you grow         | Upfront, buy unused space
CAPEX   | Cabling          | Increases with network, unlimited          | Fixed backplane, limited
OPEX    | Fabric upgrade   | Easy, connect additional NCF               | Limited by fixed backplane
OPEX    | Capacity upgrade | Easy, connect additional NCP               | Limited by chassis slots
OPEX    | Installation     | Easy, fits onto a standard 19” server rack | Forklift installation, space consuming
OPEX    | Deployment       | Easy                                       | Need to reconfigure the system
HW/SW   |                  | Open platform                              | Proprietary, vendor locked
The DDC breaks the traditional chassis apart into separate components, in which the internal backplane is replaced by industry standard QSFP28 and QSFP-DD interfaces connected with 400GE cables such as DAC (Direct Attach Cable), AOC (Active Optical Cable), AEC (Active Electrical Cable), or ACC (Active Copper Cable).



Both the S9700-53DX and S9700-23D have 13 fabric ports and the S9705-48D has 48 fabric ports.

Our DDC fabric interface is a cell-based switching interface. It breaks packets up into cells and distributes them dynamically through multiple fabric ports to the destination switch, where they are reassembled into packets. The benefit of cell switching is that it effectively load balances all fabric links and guarantees maximum fabric link utilization across all fabric connections within the cluster.
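As a toy illustration of the idea only (not Broadcom's actual cell format or scheduling algorithm), the Python sketch below slices a packet into fixed-size cells, sprays the cells round-robin across several fabric links, and reassembles them in order at the destination. The 256-byte cell size and 4-link fabric are arbitrary assumptions for the example.

# Toy model of cell-based fabric switching: slice a packet into cells, spray
# the cells across the fabric links, then reassemble them at the destination.
# Cell size, link count, and round-robin spraying are illustrative assumptions.
CELL_SIZE = 256      # bytes per cell (assumption for the example)
FABRIC_LINKS = 4     # fabric links between the NCP and the NCFs (assumption)

def spray(packet):
    """Split a packet into numbered cells and distribute them across links."""
    cells = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    links = {link: [] for link in range(FABRIC_LINKS)}
    for seq, cell in enumerate(cells):
        links[seq % FABRIC_LINKS].append((seq, cell))   # keep seq for reordering
    return links

def reassemble(links):
    """Collect cells from every link and restore the original packet order."""
    received = sorted(
        (item for cells in links.values() for item in cells), key=lambda x: x[0]
    )
    return b"".join(cell for _, cell in received)

packet = bytes(range(256)) * 6                  # a 1536-byte example packet
links = spray(packet)
print({link: len(cells) for link, cells in links.items()})   # near-equal load per link
assert reassemble(links) == packet              # destination recovers the packet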
 
The S9705-48D, also known as an NCF, is a fabric router equipped with 48x400GE fabric ports, and it can forward 8 billion cells per second.

The S9705-48D fabric couples with the S9700-53DX and S9700-23D routers in small, medium, and large clusters, enabling up to 192Tbps of switching capacity and providing the core router with the flexibility to meet industry needs.

 

The S9700-53DX is a carrier grade disaggregated IP/MPLS router that can be deployed in the core or at the edge to transport services over a scalable next-gen service provider network.

With 40x100GE service ports, the S9700-53DX can be positioned as a standalone router or coupled with the fabric router in small, medium, and large clusters, enabling 4Tbps to 192Tbps of maximum switching capacity.

 

The S9700-23D is a carrier grade disaggregated IP/MPLS router that can be deployed in the core or at the edge to transport services over a scalable next-gen service provider network.

The S9700-23D can be positioned as a standalone switch providing 10x400GE service connections for 4Tbps of switching capacity.

The S9700-23D has 13x400GE fabric ports, which are used to create a switching cluster by connecting them to the S9705-48D. A cluster can provide up to 192Tbps of service capacity simply by interconnecting more S9700-23D units with the S9705-48D as needed.

 
No. The S9705-48D is a fabric router equipped with only fabric ports; it can only connect to the fabric ports of the S9700-23D or S9700-53DX to create a switch cluster.
 
Yes. The S9700-53DX has 40x100GE service ports and the S9700-23D has 10x400GE service ports; both can be used as standalone switches or coupled with the S9705-48D to build a switch cluster. When they are used as standalone devices, the 13x400GE fabric ports are not used.
 
In the DDC solution, the S9705-48D acts like the switching fabric of a traditional chassis and the S9700-53DX/23D plays the role of the line cards. Every S9705-48D should connect to every S9700-53DX/23D, forming a CLOS topology. All the traffic from the service ports of the S9700-53DX/23D travels through the S9705-48D to its destination via the fabric interface.

How to build UfiSpace DDC small cluster
The diagram above is a reference topology of a small cluster, consisting of one S9705-48D and four S9700-23D/53DX.

How to build a UfiSpace DDC medium cluster
The diagram above is a reference topology of a medium cluster, consisting of seven S9705-48D (NCF) and twenty-four S9700-23D/53DX (NCP). Each NCP has two fabric links connecting to each of the first six NCFs. The last fabric port connects to the 7th NCF to enable redundancy.
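A minimal sketch of this wiring plan, assuming the port counts given elsewhere in this FAQ (13 fabric ports per NCP and 48 per NCF):

# Medium cluster wiring check: 24 NCPs (13 fabric ports each) and 7 NCFs
# (48 fabric ports each). Each NCP uses 2 links to each of the first 6 NCFs
# and its remaining port goes to the 7th NCF for redundancy.
NCPS, NCFS = 24, 7
NCP_FAB_PORTS, NCF_FAB_PORTS = 13, 48

links_per_ncp = {ncf: (2 if ncf < 6 else 1) for ncf in range(NCFS)}

assert sum(links_per_ncp.values()) == NCP_FAB_PORTS       # 2*6 + 1 = 13, all NCP ports used
for ncf, links in links_per_ncp.items():
    assert links * NCPS <= NCF_FAB_PORTS                  # 24*2 = 48 or 24*1 = 24 ports per NCF

print("NCF port usage:", {ncf: links * NCPS for ncf, links in links_per_ncp.items()})
# The first six NCFs are fully populated (48 ports each); the 7th uses 24 of its 48 ports.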
 
The Broadcom Ramon supports up to 24x400G fabric ports; the S9705-48D is equipped with two Ramon chips to support 48 fabric ports.

The Broadcom Jericho2 used in the S9700-23D and S9700-53DX has 112x53.125G fabric SerDes, but 8 SerDes are used for the external TCAM, so only 13 fabric ports are available [(112-8)*53.125/400 ≈ 13.8, rounded down to 13].
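A quick check of that calculation in Python:

# Jericho2 fabric port budget: 112 SerDes at 53.125 Gbps each, minus 8 SerDes
# reserved for the external TCAM, divided by 400G per fabric port.
TOTAL_SERDES = 112
TCAM_SERDES = 8
SERDES_GBPS = 53.125
FABRIC_PORT_GBPS = 400

available_gbps = (TOTAL_SERDES - TCAM_SERDES) * SERDES_GBPS   # 104 * 53.125 = 5525 Gbps
fabric_ports = int(available_gbps // FABRIC_PORT_GBPS)        # 5525 / 400 ≈ 13.8 -> 13

print(available_gbps, fabric_ports)   # 5525.0 Gbps of fabric SerDes -> 13 fabric ports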



With this port configuration, the maximum size of a large cluster is limited to 12+1 S9705-48D and 48 S9700-53DX/23D.
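The same port budget yields the large cluster limit; a minimal sketch, assuming one 400GE fabric link from each NCP to each NCF and 4Tbps of service capacity per NCP:

# Large cluster sizing: with one fabric link from each NCP to each NCF,
# the 13 fabric ports per NCP cap the NCF count at 13 (12 + 1 for redundancy),
# and the 48 fabric ports per NCF cap the NCP count at 48.
NCP_FAB_PORTS = 13     # S9700-53DX / S9700-23D
NCF_FAB_PORTS = 48     # S9705-48D
NCP_SERVICE_TBPS = 4   # 40x100GE or 10x400GE per NCP

max_ncfs = NCP_FAB_PORTS                 # 13 NCFs = 12 + 1 redundant
max_ncps = NCF_FAB_PORTS                 # 48 NCPs, one link to every NCF
max_capacity = max_ncps * NCP_SERVICE_TBPS

print(max_ncfs, max_ncps, max_capacity)  # 13 NCFs, 48 NCPs, 192 Tbps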
 
One S9705-48D is enough for general scenarios, but if you would like to ensure network availability, two S9705-48D units would be a better configuration.

UfiSpace small cluster DDC with redundancy

In a topology with two S9705-48D units, 24 fabric ports are used on one S9705-48D (the left one in the diagram above) and 28 fabric ports on the other (the right one). The difference comes from the 13 fabric ports on each S9700-53DX/23D: to maximize fabric capacity, every fabric port is used, so each of the four NCPs connects 6 links to one NCF and 7 links to the other (4x6 = 24 and 4x7 = 28).
 
For the small cluster with one S9705-48D, there will be 12 fabric links between each S9700-53DX/23D and the S9705-48D. The routing path is calculated dynamically based on the status of all links; if one link fails, the system will ignore the failed link and pick another routing path.

For a small cluster with two S9705-48D units, there will be 6 or 7 fabric links between each S9700-53DX/23D and each S9705-48D, which also provides link redundancy as described above.

If one S9705-48D goes down, the other will maintain network availability with reduced fabric capacity (9.6Tbps or 11.2Tbps).
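These numbers can be reproduced from the port counts above; a minimal sketch, assuming the four NCPs of the small cluster each split their 13 fabric ports as 6 links to one NCF and 7 to the other:

# Small cluster with two S9705-48D units: each of the 4 NCPs splits its 13
# fabric ports as 6 links to one NCF and 7 to the other.
NCPS = 4
LINK_GBPS = 400
links_to_ncf_a = 6 * NCPS      # 24 fabric ports used on one S9705-48D
links_to_ncf_b = 7 * NCPS      # 28 fabric ports used on the other

# If one NCF fails, the surviving fabric capacity is what the other NCF carries.
surviving_if_b_fails = links_to_ncf_a * LINK_GBPS / 1000   # 9.6 Tbps
surviving_if_a_fails = links_to_ncf_b * LINK_GBPS / 1000   # 11.2 Tbps

print(links_to_ncf_a, links_to_ncf_b, surviving_if_b_fails, surviving_if_a_fails)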

For client link redundancy, it is always recommended to connect the server to two different switches so that, if either switch fails, the other can still provide service to the client server.

 
The NIF and FAB ports are compatible with a wide range of industry standard transceivers and cables. Since the S9700 Series breaks apart the traditional chassis, it requires two types of connections: service interfaces (NIF) and fabric interfaces (FAB). Therefore, our compatibility chart covers both service port connections and fabric port connections. Service port connections are for transceivers and/or cables going from the NIF ports of the NCP to a network service such as a server. Fabric port connections are specifically for the connections between the NCP's FAB ports and the NCF's FAB ports.

Below is a list of compatible transceivers and cables for the S9700 Series routing system.


UfiSpace DDC Compatible Cables and Transceivers
The S9700-53DX has 40x100GE network service ports (NIF) that can directly connect both 100GE and 40GE transceivers, but due to the port mapping design, 40GE is only supported on the ports with green labelling. In order to connect 10GE/25GE transceivers, a breakout is needed: typically a 1:4 breakout cable is used to split one 100GE port into 4x10GE or 4x25GE ports. Due to the port mapping design within the S9700-53DX, only the ports with green labelling can use breakout cables. Thus, the total quantity of breakouts available per S9700-53DX is 80x10GE or 80x25GE.

UfiSpace DDC breakout
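As a quick check of the breakout arithmetic, here is a minimal sketch assuming, as the 80-port total implies, that 20 of the 40 NIF ports carry the green labelling that allows breakout:

# S9700-53DX breakout math: only the green-labelled NIF ports support 1:4
# breakout. The 80x10GE/25GE total implies 20 breakout-capable ports
# (a figure derived from this FAQ, not from a port map).
TOTAL_NIF_PORTS = 40
BREAKOUT_CAPABLE_PORTS = 20    # green-labelled ports (derived: 80 / 4)
BREAKOUT_RATIO = 4             # 1x100GE -> 4x10GE or 4x25GE

breakout_ports = BREAKOUT_CAPABLE_PORTS * BREAKOUT_RATIO
print(breakout_ports)          # 80x10GE or 80x25GE per S9700-53DX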

 
The S9700-23D supports breakout mode on ports 0 to 9 with a 400GE QSFP-DD DR4 to 100GE QSFP28 DR transceiver or a 400GE QSFP-DD DR4+ to 100GE QSFP28 FR1 transceiver.
 
The S9705-48D, S9700-53DX, and S9700-23D are all equipped with two console ports (RJ45 serial or micro USB), 2x10G SFP+ management ports, one 1GBase-T OOB port, and one general purpose Type-A USB port.

UfiSpace core router management ports introduction
The S9705-48D, S9700-53DX, and S9700-23D are all equipped with 2x10G SFP+ management ports. To enable the management link, connect either of the management ports to the management switch using a 10G SFP+ transceiver.
UfiSpace Core Router connect to Network Manager

An extra management switch may be introduced to ensure the availability of the management connection. In this case, both 10G SFP+ management ports on each device would be used, one connected to each management switch.


UfiSpace Core Router with Redundant Network Manager
 
The S9705-48D, S9700-53DX, and S9700-23D are all equipped with one 1GBase-T RJ45 OOB port. There are two options for connecting the OOB port.
Option 1: To leverage the same management switch used for the SFP+ management connection, use an SFP-to-RJ45 converter to connect the management switch to the OOB port.
UfiSpace Core Router OOB with SFP+

Option 2: A separate ToR switch with RJ45 ports can be used for the OOB connection. In this case, no extra converter is required.


UfiSpace Core Router OOB RJ45
The typical application of the DDC solution is as a P router within the core network. With its high availability, low latency, and scalable CLOS topology design, the DDC solution can easily handle heavy core network traffic and adapt to rapidly increasing data demand.

The DDC solution can also fit other applications such as network peering, aggregation, or data centers. Building from a single switch or a small cluster and scaling out as traffic grows allows service providers to minimize their initial investment.

 