The Accelerate 5G project addresses the design of innovative FPGA-based solutions for physical-layer (L1) digital processing acceleration. The solution tackles major issues in 5G infrastructure deployment: high network deployment costs (CAPEX), high network operation and management costs (OPEX), and demands for high throughput and high flexibility.
Novel communication services require the evolution of radio access networks (RANs), which were traditionally based on a monolithic architecture. In 5G, the RAN has evolved into the concept of a virtualized RAN (vRAN), which disaggregates the base station (gNB) into separate, often physically distant elements. A single gNB may consist of a central unit (CU) and one or more distributed units (DUs), each communicating with radio units (RUs). RAN disaggregation enabled the concept of Open RAN, in which multiple vendors can provide specific parts of the infrastructure hardware and software.
Most RAN workloads can now run on off-the-shelf processors (CPUs) integrated in server stations. However, high throughput demands pose a challenge for the deployment costs and energy consumption of these servers, since L1 processing requires an extreme amount of computing resources. Accelerator cards with a PCIe connection improve cost and energy efficiency by offloading certain workloads from the main CPU. Still, PCIe accelerators must remain flexible, since standards and computational workloads continue to evolve with new use cases. Such flexibility can be achieved with field-programmable gate arrays (FPGAs), programmable chips that are easily reconfigurable for specific computing tasks.
During this project and our previous collaboration, we have been developing IP cores for L1 processing blocks, written in hardware description language (HDL) code that can be compiled for any FPGA chip. By offloading computationally intensive calculations to an FPGA card, a DU can be deployed with a much smaller number of CPU cores, achieving an essential power reduction. This significantly reduces the total cost of ownership (TCO). Equivalently, the CPUs can finish the remaining tasks much faster, which increases the communication throughput. We have already confirmed these claims with a partial-functionality L1 accelerator prototype developed during our previous two-year collaboration. The key beneficiaries of such a solution are system integrators, who integrate the RAN hardware equipment, software, and IP cores. Our current prototype achieves 65% higher throughput for the same FPGA resources compared to existing market solutions; equivalently, this results in lower hardware deployment costs for the same network capacity. Since our solution significantly reduces the RAN TCO, network operators can obtain high-performance systems at a lower price, which ultimately translates into lower internet traffic costs and higher speeds for end users.
Our group actively participates in algorithm development and hardware architecture design for channel coding and various signal processing blocks in the physical-layer chain.
Assistant Professor
Project lead