Daniel Becking
(he/him)

I am a research associate and Ph.D. student in the Efficient Deep Learning Group at the Fraunhofer Heinrich-Hertz-Institute (HHI) and the Technical University of Berlin, supervised by Wojciech Samek.

My current research interests lie in the domain of neural codecs, such as language-model-based general-purpose compressors. I also work on methods specifically tailored to the compression of neural networks and to the efficient transmission of incremental neural data, e.g., in distributed learning scenarios such as federated or split learning. My methods leverage explainable AI (XAI) techniques and concepts from information theory.
As a regular attendee of Moving Picture Experts Group (MPEG) meetings since 2020, I have contributed several compression tools and syntax elements to the first and second editions of the ISO/IEC 15938-17 standard for Neural Network Coding (NNC).
I completed my M.Sc. in Biomedical Engineering in 2020, under the guidance of Klaus-Robert Müller at the Technical University of Berlin.
Before joining Fraunhofer HHI, during my bachelor's studies in Microsystems Engineering, I was part of the Sensor Nodes & Embedded Microsystems group at Fraunhofer IZM. That experience proved invaluable when working on our FantastIC4 low-power neural network accelerator.


Latest News


Selected Publications and Projects

TMM

Neural Network Coding of Difference Updates for Efficient Distributed Learning Communication
IEEE Transactions on Multimedia Vol. 26, 2024

Co-authors: Karsten Müller, Paul Haase, Heiner Kirchhoffer, Gerhard Tech, Wojciech Samek, Heiko Schwarz, Detlev Marpe, Thomas Wiegand
To improve the efficiency of frequent neural data communication in distributed learning, we present a set of new compression tools specifically tailored to coding incremental neural updates. These include Federated BatchNorm folding (FedBNF), structured and unstructured sparsification, tensor row skipping, quantization optimization, and temporal adaptations for improved context-adaptive binary arithmetic coding (CABAC). Furthermore, we introduce the parameter update tree (PUT) syntax, which identifies different neural network parameter subsets and their relationships directly from the bitstream, in both synchronous and asynchronous communication paradigms. We benchmark these tools in multiple federated and SplitFed learning scenarios using a variety of models, including Vision Transformers (ViTs).
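
The core idea of coding a difference update can be sketched in a few lines of Python. This is a hypothetical, heavily simplified illustration (helper names like sparsify_and_quantize_update are mine), not the actual NNC tool chain, which additionally entropy-codes the quantized symbols with CABAC and wraps them in the PUT syntax:

import numpy as np

def sparsify_and_quantize_update(w_local, w_global, keep_ratio=0.1, step=0.01):
    # Incremental update: only the difference to the global model is coded.
    delta = w_local - w_global
    # Unstructured sparsification: keep only the largest-magnitude entries.
    k = max(1, int(keep_ratio * delta.size))
    thresh = np.sort(np.abs(delta), axis=None)[-k]
    sparse_delta = np.where(np.abs(delta) >= thresh, delta, 0.0)
    # Uniform quantization to integer symbols, ready for entropy coding.
    return np.round(sparse_delta / step).astype(np.int32)

def apply_update(w_global, symbols, step=0.01):
    # Server-side reconstruction: dequantize the symbols and add the update.
    return w_global + symbols.astype(np.float32) * step

# Toy round trip between a client model and the global model
rng = np.random.default_rng(0)
w_global = rng.normal(size=(64, 64)).astype(np.float32)
w_local = w_global + 0.05 * rng.normal(size=(64, 64)).astype(np.float32)
w_updated = apply_update(w_global, sparsify_and_quantize_update(w_local, w_global))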


NNCodec

[Spotlight] Paper at the Neural Compression Workshop
ICML 2023

Co-authors: Paul Haase, Heiner Kirchhoffer, Karsten Müller, Wojciech Samek, Detlev Marpe
NNCodec is the first open-source, standard-compliant implementation of the Neural Network Coding (NNC) standard (ISO/IEC 15938-17). The paper outlines the software architecture and the main coding tools. We analyzed the underlying neural network weight distributions and their information content to explain the higher compression gains compared to other entropy codes such as Huffman coding. Notably, the average codeword length of NNCodec often falls below the first-order Shannon entropy bound, which is possible because its context-adaptive arithmetic coder exploits dependencies between symbols that a memoryless, per-symbol entropy estimate does not capture. Furthermore, by introducing specially trained local scaling parameters, NNCodec can compensate for quantization errors to a certain extent.
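
For intuition about the entropy analysis mentioned above, here is a small, self-contained sketch (toy Gaussian weights and uniform quantization, not NNCodec's actual pipeline) that estimates the first-order entropy of quantized weights, i.e. the per-symbol bound that a memoryless code such as Huffman coding can at best approach:

import numpy as np

def first_order_entropy(symbols):
    # Empirical per-symbol Shannon entropy in bits, assuming a memoryless source.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy Gaussian-like weight tensor, uniformly quantized to integer symbols.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=100_000).astype(np.float32)
symbols = np.round(weights / 0.01).astype(np.int32)

h = first_order_entropy(symbols)
print(f"first-order entropy: {h:.2f} bits per weight (vs. 32 bits uncompressed)")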


BerDiBa

BerDiBa: Berlin Digital Rail Operations
funded by the Investitionsbank Berlin and the EU

Co-authors: Nico Harder, Maximilian Dreyer
In the BerDiBa project, I developed algorithms to generate highly efficient models for semantic segmentation of image data from the driver's cab perspective of autonomous trains. I enhanced my explainability-driven, entropy-constrained quantization method (ECQx), reducing DeepLabV3 weight precision to 2-4 bits and introducing high sparsity without compromising performance. Additionally, we applied a specialized semantic knowledge distillation technique to extract compact student networks. Using Concept Relevance Propagation (CRP), we identified network subgraphs that are highly relevant for specific classes, or for concepts within classes, and that can be jointly activated or deactivated through a gating mechanism we call Mixture-of-Relevant-Experts.
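
As a rough illustration of the gating idea (hypothetical helper names and toy data, not the actual Mixture-of-Relevant-Experts implementation), per-channel relevance scores for a concept can be turned into a binary gate that switches the corresponding feature channels on or off:

import torch

def concept_gate(channel_relevance, keep_ratio=0.25):
    # Keep only the channels most relevant to the current concept.
    k = max(1, int(keep_ratio * channel_relevance.numel()))
    gate = torch.zeros_like(channel_relevance)
    gate[torch.topk(channel_relevance, k).indices] = 1.0
    return gate.view(1, -1, 1, 1)  # broadcastable over (N, C, H, W) feature maps

# Toy usage: deactivate feature channels that are irrelevant for one concept.
features = torch.randn(1, 8, 16, 16)
relevance = torch.rand(8)          # e.g. aggregated CRP relevances per channel
gated_features = features * concept_gate(relevance)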


BerDiBa

ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
xxAI - Beyond Explainable AI, Lecture Notes in Computer Science Vol. 13200

Co-authors: Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin
The ECQx method combines concepts from explainable AI (XAI) and information theory within the quantization-aware training paradigm: instead of assigning weight values solely based on their proximity to the quantization clusters, it also considers weight relevances obtained from Layer-wise Relevance Propagation (LRP) and the information content of the clusters (entropy optimization). A key insight is that weight magnitude is not necessarily correlated with weight relevance. This allows us to prevent relevant weights from being quantized to zero while simultaneously deactivating irrelevant high-magnitude weights or neurons.
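
The assignment criterion can be sketched as follows; this is a simplified, hypothetical formulation (the toy centroids, probabilities, and trade-off parameter lam are mine), not the exact ECQx training procedure:

import numpy as np

def assign_weights(weights, relevances, centroids, probs, lam=0.1):
    # Relevance-weighted distortion: errors on relevant weights cost more.
    distortion = relevances[:, None] * (weights[:, None] - centroids[None, :]) ** 2
    # Rate term: low-probability centroids cost more bits (-log2 p).
    rate = -np.log2(probs)[None, :]
    return np.argmin(distortion + lam * rate, axis=1)  # centroid index per weight

# Two weights of identical magnitude, but very different relevance:
weights = np.array([0.15, 0.15], dtype=np.float32)
relevances = np.array([50.0, 0.1], dtype=np.float32)   # e.g. from LRP
centroids = np.array([0.0, 0.25, -0.25], dtype=np.float32)
probs = np.array([0.8, 0.1, 0.1])
print(assign_weights(weights, relevances, centroids, probs))  # -> [1 0]

In this toy example the relevant weight is kept on a non-zero centroid, while the equally sized but irrelevant one is pushed to zero, matching the behaviour described above.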


Find a complete list of publications on Google Scholar.