The FunKI project, funded with €516,742.63 under grant number 16KIS1185, ran from 15 May 2020 to 14 May 2023. Its aim was to investigate artificial-intelligence-driven techniques for 5G-and-beyond wireless communication, focusing on model-based improvements, data-driven parameter estimation, and the hardware implementation of AI algorithms in transmit and receive chains. RPTU Kaiserslautern-Landau led the effort, concentrating on the tight coupling between algorithm design and efficient ASIC and FPGA realization. Partners from the University of Stuttgart (UST), the University of Bremen (UB), and the industrial company Creonic contributed algorithmic expertise, system-level integration, and demonstrator development, respectively.
Technically, the project delivered two key transceiver components.

First, an LDPC decoder was redesigned to exploit the Information-Bottleneck Method (IBM) for efficient quantization of edge messages in the Tanner graph. The IBM approach replaces elementary node operations with look-up tables (LUTs), but suffers from exponential LUT growth at higher node degrees. A serialization technique was introduced to keep the number of LUTs linear in the node degree, yet the IBM decoder still showed limited gains at low quantization levels and lost efficiency at higher quantization. To overcome these limitations, a new Minimum-Integer-Computation (MIC) decoder was developed in collaboration with UB. The MIC architecture combines conventional mapping rules with LUTs, dramatically reducing node complexity while preserving the same error-rate performance as the IBM decoder. Comparative measurements in 22 nm technology show that the MIC decoder outperforms both the IBM and the conventional Normalized Min-Sum (NMS) decoders across all relevant metrics. At a 3-bit quantization level, the two decoders compare as follows:

  Metric (3-bit, 22 nm)   MIC decoder       IBM decoder
  Coded throughput        218 Gb/s          149 Gb/s
  Area                    3.66 mm²          40.51 mm²
  Area efficiency         141.1 Gb/s/mm²    5.4 Gb/s/mm²
  Latency                 41.1 ns           97.2 ns
  Power consumption       5.61 W            11.85 W
  Energy efficiency       10.9 pJ/bit       54.3 pJ/bit

The MIC decoder's superior area, power, and energy figures are attributed to its reduced node complexity and more efficient LUT usage.
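The exponential-versus-linear LUT growth can be made concrete with a toy entry count. This is an illustrative model only: the decomposition into a chain of two-input LUTs and the entry formulas below are assumptions chosen to show the scaling, not figures from the project.

```python
def direct_lut_entries(degree: int, bits: int) -> int:
    """Entries in one monolithic LUT for a node of the given degree that
    maps its (degree - 1) incoming q-bit messages to one output message.
    The table size grows exponentially with the node degree."""
    return 2 ** (bits * (degree - 1))

def serialized_lut_entries(degree: int, bits: int) -> int:
    """Entries when the node is serialized into a chain of (degree - 2)
    two-input LUTs, each fed by two q-bit values: growth is now linear
    in the node degree."""
    return (degree - 2) * 2 ** (2 * bits)

# Example: a degree-10 node with 4-bit messages.
print(direct_lut_entries(10, 4))      # 2**36 entries -> infeasible as one table
print(serialized_lut_entries(10, 4))  # 8 * 256 = 2048 entries
```

The comparison shows why serialization is necessary for LDPC codes, whose check nodes commonly have degrees well above five.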
Second, an autoencoder-based demapper was implemented. By replacing the traditional demapping stage with a neural network, the system can be optimized end to end, independently of an explicit channel model, yielding improved bit-error-rate performance compared to conventional demappers. The autoencoder was integrated into a full transceiver prototype and evaluated on both virtual silicon and FPGA platforms, confirming the feasibility of AI-based demapping in real-time systems.
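For intuition, the sketch below computes the exact per-bit log-likelihood ratios (LLRs) that a learned demapper approximates, assuming Gray-mapped QPSK over an AWGN channel. The constellation, noise model, and bit convention are illustrative assumptions, not the project's actual setup.

```python
import math
import random

A = 1 / math.sqrt(2)  # per-dimension amplitude of unit-energy QPSK

def qpsk_llrs(y_i: float, y_q: float, noise_var: float):
    """Exact per-bit LLRs for Gray-mapped QPSK over real AWGN with
    per-dimension variance noise_var. With Gray mapping the two bits
    ride on the I and Q signs, so the LLRs decouple per dimension."""
    return 2 * A * y_i / noise_var, 2 * A * y_q / noise_var

# Sanity check: at high SNR the LLR signs recover the transmitted bits.
random.seed(0)
sigma = 0.05
ok = True
for _ in range(100):
    b0, b1 = random.randint(0, 1), random.randint(0, 1)
    s_i = A if b0 == 0 else -A  # convention: bit 0 -> positive amplitude
    s_q = A if b1 == 0 else -A
    y_i = s_i + random.gauss(0, sigma)
    y_q = s_q + random.gauss(0, sigma)
    l0, l1 = qpsk_llrs(y_i, y_q, sigma ** 2)
    ok = ok and (l0 > 0) == (b0 == 0) and (l1 > 0) == (b1 == 0)
print("all hard decisions correct:", ok)
```

For QPSK this closed form is trivial; the appeal of a neural demapper is that it learns an equivalent soft-output mapping for higher-order constellations and channels where no such closed form is available.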
The project’s design‑space exploration phase produced parametric hardware‑architecture templates that allowed systematic trade‑off analysis between quantization, memory usage, and throughput. Prototyping on virtual silicon and FPGAs enabled early validation of the architectures and informed the final ASIC synthesis. In the final phase, a demonstrator was built in cooperation with Creonic to showcase the hardware‑accelerated AI components to a broader audience.
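Such a trade-off analysis can be sketched as a Pareto filter over template instances. The cost model below is entirely hypothetical (parallelism and message bit-width as template parameters, area proportional to their product); it only illustrates the exploration mechanic, not the project's actual templates.

```python
from itertools import product

def pareto_front(points, area_key, thr_key):
    """Keep configurations not dominated in (area: lower is better,
    throughput: higher is better)."""
    front = []
    for a in points:
        dominated = any(
            b[thr_key] >= a[thr_key] and b[area_key] <= a[area_key]
            and (b[thr_key] > a[thr_key] or b[area_key] < a[area_key])
            for b in points)
        if not dominated:
            front.append(a)
    return front

# Hypothetical template parameters: datapath parallelism and bit-width.
candidates = [
    {"parallelism": p, "bits": q, "area": p * q, "throughput": p}
    for p, q in product((1, 2, 4), (3, 4, 5))
]
front = pareto_front(candidates, "area", "throughput")
print(front)
```

Under this toy model every Pareto-optimal point uses the smallest bit-width, because wider messages cost area without buying throughput; a realistic model would also penalize low bit-widths with an error-rate degradation term.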
Overall, FunKI achieved its objectives by delivering high-performance, energy-efficient LDPC decoders and AI-based demappers, while demonstrating a scalable methodology for integrating neural-network accelerators into next-generation wireless transceivers. The collaboration between academia and industry, supported by German federal funding, enabled the translation of cutting-edge AI research into practical, hardware-ready solutions for 5G and beyond.
