Tutorials are open to participants holding either a full or a student registration, upon payment of the corresponding fee of 30 euros per tutorial.

- T-AM1–T-AM5: Morning tutorials (9:00–12:30)
- T-PM1–T-PM5: Afternoon tutorials (14:00–17:30)

## Morning tutorials

### T-AM1 - Learning under Requirements: Supervised and Reinforcement Learning with Constraints

**Organizers:** Miguel Calvo-Fullana (Universitat Pompeu Fabra), Luiz F.O. Chamon (University of Stuttgart), Santiago Paternain (Rensselaer Polytechnic Institute) and Alejandro Ribeiro (University of Pennsylvania)

**Abstract:** Requirements are integral to engineering tasks in which data plays an increasingly central role. They are therefore also of growing interest in signal processing and machine learning (ML), as evidenced by the push towards designing fair, robust, and/or safe data-driven systems. Such statistical, data-driven requirements are often imposed by combining the learning objective and requirement violation metrics into a single training loss. To guarantee that the solution satisfies the requirements, however, this approach requires careful tuning of hyperparameters (penalty coefficients) using cross-validation, a computationally intensive and time-consuming process. Constrained learning instead incorporates requirements as statistical constraints rather than by modifying the training objective. Though less typical in ML practice, constrained formulations are not uncommon and are in fact equivalent to penalty methods in convex optimization. Modern ML problems, however, are typically non-convex. What is more, classical learning theory only provides generalization guarantees for unconstrained learning, i.e., for the aggregated training loss rather than its components (learning objective and requirement). Despite these challenges, constrained learning can effectively impose requirements on ML systems, both during training and at test time.

This tutorial is geared towards researchers and practitioners of signal processing and ML interested in imposing requirements on data-driven systems. It provides an overview of theoretical and algorithmic advances from the past 5 years that show when and how it is possible to learn under constraints. More specifically, it explores the role of different types of requirements in supervised learning, robust learning, and reinforcement learning (RL). To illustrate the effectiveness and flexibility of constrained learning, it showcases its use in several relevant applications from adversarial robustness, federated learning, invariant learning, and safe RL. Ultimately, this tutorial provides a general tool that can be used to tackle a variety of problems in ML and sequential decision-making.
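The contrast between penalty tuning and constrained learning can be illustrated with a toy problem. The sketch below is not the organizers' algorithm, just a minimal dual-ascent example on an invented one-dimensional problem, showing how the multiplier of a statistical constraint is adapted automatically instead of being hand-tuned by cross-validation.

```python
import numpy as np

# Toy constrained learning problem:
#   minimize   f(x) = (x - 2)^2          (learning objective)
#   subject to g(x) = x - 1 <= 0         (requirement)
# Dual ascent: minimize the Lagrangian exactly in x, then take a
# projected gradient-ascent step on the multiplier.

def primal_min(lam):
    # argmin_x (x - 2)^2 + lam * (x - 1)  =>  2(x - 2) + lam = 0
    return 2.0 - lam / 2.0

lam = 0.0
eta = 0.5  # dual step size
for _ in range(200):
    x = primal_min(lam)
    lam = max(0.0, lam + eta * (x - 1.0))  # ascend on constraint violation

# Converges to the constrained optimum x* = 1 with multiplier lam* = 2:
# the dual variable acts as an automatically tuned penalty coefficient.
```

With a fixed penalty coefficient one would have to sweep its value until the constraint happens to hold; here the multiplier grows exactly until the requirement is satisfied.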

***

### T-AM2 - On the Advances in Message-Passing Algorithms and Practical Iterative Receiver Design for Digital Communications

**Organizers:** Serdar Sahin (Thales), Antonio Maria Cipriano (Thales) and Charly Pouillat (INP-ENSEEIHT Toulouse)

**Abstract:** In recent years, the wireless communications and signal processing research communities have shown renewed interest in iterative algorithms based on approximate Bayesian inference and message passing.

Such techniques previously shone through the success of probabilistic channel decoding algorithms in the 1990s, which led to widespread investigation of the “turbo principle” and of the belief propagation (BP) algorithm in the early 2000s, across many digital receiver design problems involving estimation, equalization, detection and decoding.

A decade later, the increased popularity of expectation propagation (EP) for handling intractable variational inference problems, together with other emerging practical approximate message passing (AMP) techniques, led to novel iterative signal estimation algorithms with attractive complexity-performance trade-offs. In particular, these algorithms produce signal estimates that are uncorrelated with the observations and the priors, a property that considerably reduces error propagation in the iterative estimation process, and their performance in the asymptotic regime can be accurately predicted.

Reinforced by other practical variational inference techniques such as expectation maximization (EM) and mean-field (MF) approximations, and by emerging deep-learning-aided design approaches, advanced iterative algorithm design remains a hot topic in various signal processing and machine learning communities. Indeed, researchers have been proposing competing strategies such as sparse Bayesian learning (SBL), the expectation-consistent optimization framework, memory AMP (MAMP), orthogonal AMP (OAMP), vector AMP (VAMP) and many others to seek near-optimal estimation performance, with significant variations both in algorithmic complexity and in the underlying theoretical background. Consequently, it has become significantly more difficult to get a grasp on this rapidly developing literature, which contains many closely related contributions as well as some important overlooked developments.

Our tutorial aims to provide a synthetic view of the major developments in variational Bayesian inference techniques, mainly from a message-passing perspective, and to assess the capabilities of this arsenal based on EP and related AMP techniques for addressing stochastic signal processing problems. In the context of digital communication receiver design in particular, we emphasize the importance of factor graph assumptions and scheduling strategies for achieving a reasonable performance-complexity trade-off. Indeed, recent years have shown that these techniques lead to practical implementations of iterative receivers for non-orthogonal multiple access (NOMA), multiple-input multiple-output (MIMO) and single-carrier (SC) systems, among others. With additional optimization through deep unfolding, and with optimized channel code design, such techniques can be expected to play an important role in next-generation communication transceivers.
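As a concrete, deliberately simplified illustration of the AMP family discussed above, the following sketch runs a textbook soft-threshold AMP recursion for sparse recovery. The problem sizes and threshold policy are illustrative choices, not taken from the tutorial; the point is the Onsager correction term, which is what keeps the denoiser input approximately Gaussian and makes the asymptotic performance predictable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse recovery y = A x via AMP with a soft-threshold denoiser.
n, m, k = 400, 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x_true

def soft(v, t):
    # Component-wise soft threshold
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
z = y.copy()
for _ in range(50):
    theta = 2.0 * np.linalg.norm(z) / np.sqrt(m)  # threshold ~ residual level
    x = soft(x + A.T @ z, theta)
    # Onsager correction: (1/m) * sum of denoiser derivatives (= sparsity)
    onsager = (z / m) * np.count_nonzero(x)
    z = y - A @ x + onsager

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Dropping the `onsager` term turns this into plain iterative thresholding, which converges noticeably more slowly and no longer matches state-evolution predictions.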

***

### T-AM3 - An introduction to bivariate signal processing: polarization, quaternions and geometric representations

**Organizers:** Nicolas Le Bihan (Gipsa Lab) and Julien Flamant (Université de Lorraine)

**Abstract:** An important task of data science is to represent and evidence the interrelation between coupled observables. The simple case of two observables that vary in time or space leads to bivariate signals. These appear in virtually all fields of physical sciences, whenever two quantities of interest are related and jointly measured, such as in seismology (e.g. horizontal vs. vertical ground motion), optics (transverse coordinates of the electric field), oceanography (components of current velocities), or underwater acoustics (horizontal vs. vertical particle displacements), to name a few.

Bivariate signals describe trajectories in a 2D plane whose geometric properties (e.g. directionality) have a natural interpretation in terms of the physical notion of polarization usually used for waves. As an example, according to Einstein’s theory of general relativity, the recently detected gravitational waves (GWs) are characterized by two degrees of freedom, that are connected to the bivariate space-time strain signal measured by the detectors. The polarization state of the observed signal is directly connected to that of the wave, which in turn provides key insights into the underlying physics of the source. Polarization is thus a central concept for the analysis of bivariate signals.

Two usual representations exist for analyzing and processing bivariate signals. The first is based on 2D real-valued vectors, where a bivariate signal is seen as a special case of a more general multivariate signal. The second relies on a complex representation, where the real and imaginary parts of a univariate complex signal correspond to the two components of the above 2D real-valued vector signal. Both representations have their merits; however, neither provides a straightforward description of bivariate signals or filtering operations in terms of polarization properties. To this end, the tutorial speakers (and their collaborators) have introduced a new and generic approach for the analysis and filtering of bivariate signals. It relies, on the one hand, on the natural embedding of bivariate signals – viewed as complex-valued signals – into the set of quaternions and, on the other hand, on the definition of a dedicated quaternion Fourier transform that enables a meaningful spectral representation and analysis of bivariate signals. This quaternion representation lays down the first building blocks of a signal processing methodology that simultaneously provides: (i) straightforward geometric and physical interpretations; (ii) mathematically grounded tools; and (iii) efficient numerical implementations.

This tutorial aims to provide insights into the specificities of bivariate signal processing and its deep connections with the physics of polarization. Emphasis will be placed on the geometric interpretation of bivariate signal processing, illustrated with tutorial examples based on a dedicated Python toolbox.
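For a first flavour of the polarization viewpoint, here is a minimal sketch using the classical complex embedding (which the quaternion framework of the tutorial generalizes): the balance between positive- and negative-frequency power of c = u + iv distinguishes the rotation sense of circular polarization. All signal parameters are invented for illustration.

```python
import numpy as np

# A bivariate signal (u(t), v(t)) embedded as the complex signal
# c(t) = u(t) + i v(t).  For a circularly polarized pair, the energy of
# c concentrates on one side of the spectrum, so the split between
# positive- and negative-frequency power reveals the rotation sense.
N = 256
t = np.arange(N)
u = np.cos(2 * np.pi * 10 * t / N)        # horizontal component
v = np.sin(2 * np.pi * 10 * t / N)        # vertical component (90° lag)
c = u + 1j * v                            # counter-clockwise circular motion

power = np.abs(np.fft.fft(c)) ** 2
pos = power[1:N // 2].sum()               # positive-frequency (CCW) power
neg = power[N // 2 + 1:].sum()            # negative-frequency (CW) power
circularity = (pos - neg) / (pos + neg)   # +1: CCW circular, -1: CW, 0: linear
```

Swapping u and v flips the rotation sense and drives `circularity` to -1, while setting v = 0 (linear polarization) splits the power evenly and gives 0.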

***

### T-AM4 - Graph learning in Signal Processing & Machine Learning

**Organizers:** Florent Bouchard (CentraleSupélec), Arnaud Breloy (CNAM), Ammar Mian (Université Savoie Mont-Blanc) and Alexandre Hippert-Ferrer (Université Gustave Eiffel)

**Abstract:** Graphs are ubiquitous and fundamental structures that offer a meaningful way to represent links between entities. They have been successfully leveraged in many emerging fields such as graph signal processing (GSP), optimal transport, and graph neural networks (GNN). Most works, however, deal with a known graph, i.e., they develop algorithms that assume prior knowledge of the structure of relationships between the entities. Upstream of these, the problem of graph learning (or graph structure learning) aims at unveiling the unknown graph topology underlying the data, so that the aforementioned techniques can be applied. This problem has also motivated numerous works, both unsupervised (learning a graph that reflects the data structure) and supervised (learning a graph that optimizes a task such as classification). This tutorial will present a general introduction to the topic, as well as a panorama of recent advances. The talk is divided into four parts. We will first motivate graph learning problems through application setups, e.g., clustering without graph knowledge, learning a graph to perform GSP, or setting up the structure of a GNN. Second, we will present some fundamental notions that provide the necessary tools for the rest of the talk: statistical models (dealing with uncertainties and missing data), problem formulations, optimization over graph structures (notably the basics of Riemannian optimization), etc. Third, a focus will be put on unsupervised graph learning, where we will cover statistical formulations based on conditional correlation structures, as well as more recent Laplacian learning methods. The last part will concern the supervised case and present some recent works addressing the problem of refining the graph of a GNN according to a task.
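As a minimal illustration of the conditional-correlation formulation of unsupervised graph learning (a textbook Gaussian graphical model example, not a method from the tutorial), the following sketch recovers a chain graph by estimating and thresholding the precision matrix of Gaussian samples; the sizes and threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# For a Gaussian graphical model, zeros of the precision matrix
# (inverse covariance) correspond to absent edges.  The true graph
# here is a 5-node chain encoded in a tridiagonal precision matrix.
p = 5
Theta = 2.0 * np.eye(p)
for i in range(p - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.9   # chain edges

Sigma = np.linalg.inv(Theta)
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((20000, p)) @ L.T      # samples ~ N(0, Sigma)

# Estimate the precision matrix and threshold to read off the graph
Theta_hat = np.linalg.inv(np.cov(X, rowvar=False))
A_hat = (np.abs(Theta_hat) > 0.3).astype(int)
np.fill_diagonal(A_hat, 0)                     # recovered adjacency matrix
```

In practice one would use a sparsity-promoting estimator (e.g. graphical lasso) rather than naive inversion plus thresholding, especially with few samples; the naive version is kept here for transparency.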

***

### T-AM5 - Signal Detection: Model-Based, Data-Driven, and Hybrid Statistical Approaches

**Organizer:** Angelo Coluccia (Università del Salento)

**Abstract:** There is a recent trend of adopting data-driven techniques, namely machine learning (including shallow and deep neural networks), to design novel solutions for signal processing problems traditionally addressed by model-based approaches. The latter, on the other hand, have been adopted with effective results for decades, providing at the same time control and interpretability. Within the quest to find a good balance between these two worlds, an interesting approach is to combine both by means of hybrid techniques. The tutorial will give an overview of recent work in this respect for the problem of signal detection, with a focus on multi-dimensional signal processing. We will discuss how data-driven tools can be coupled with traditional detection statistics for both unstructured and structured signal models, in white and correlated noise. Classical hypothesis testing approaches, namely Neyman-Pearson and the GLRT, will be revisited under the lens of machine learning classification. Adaptive detection of (radar) signals in noise is selected as the application field. The topic is very timely for the signal processing community at large and for people working more specifically in signal detection, not limited to the radar domain.
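To make the model-based baseline concrete, here is a minimal Neyman-Pearson energy detector sketch (a classical textbook detector, not one of the hybrid methods of the tutorial); the sample size, false-alarm level, and signal are illustrative. It shows the key model-based advantage mentioned above: the threshold is set analytically from the noise model, with no training data.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Energy detector: under H0 (unit-variance white noise only) the test
# statistic T = sum(x^2) is chi-square with N degrees of freedom, so
# the threshold follows analytically from the desired false-alarm rate.
N, pfa, trials = 64, 0.05, 4000
thr = chi2.ppf(1.0 - pfa, df=N)

s = np.ones(N)                                   # deterministic signal (H1)
h0 = rng.standard_normal((trials, N))            # noise-only trials
h1 = h0 + s                                      # signal-plus-noise trials

pfa_hat = np.mean((h0 ** 2).sum(axis=1) > thr)   # empirical false-alarm rate
pd_hat = np.mean((h1 ** 2).sum(axis=1) > thr)    # empirical detection rate
```

A purely data-driven classifier would have to learn this decision boundary from labeled samples, and controlling its false-alarm rate is exactly the kind of issue the hybrid approaches discussed in the tutorial address.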

## Afternoon tutorials

### T-PM1 - Self-Supervised Learning Methods for Imaging

**Organizers:** Julián Tachella (ENS de Lyon) and Mike Davies (University of Edinburgh)

**Abstract:** This tutorial will cover core concepts and recent advances in the emerging field of self-supervised learning methods for solving imaging inverse problems with deep neural networks. Self-supervised learning is a fundamental tool for deploying deep learning solutions in scientific and medical imaging applications where obtaining a large dataset of ground-truth images is very expensive or impossible. The tutorial will provide a comprehensive summary of different self-supervised methods, discuss their theoretical underpinnings and present practical self-supervised imaging applications.

***

### T-PM2 - Theory and Applications of Phase Retrieval in Synthetic Aperture Imaging and Sensing

**Organizers:** Kumar Vijay Mishra (United States Army Research Laboratory) and Samuel Pinilla (Science and Technology Facilities Council)

**Abstract:** Synthetic aperture (SA) systems generate a larger aperture with greater spatial/temporal resolution than is inherently possible from the physical dimensions of a single sensor alone. These apertures are found in various signal processing applications such as optics, radar, remote sensing, microscopy, acoustics, and tomography. SA processing often involves phase retrieval (PR), wherein a complex signal is to be recovered from phaseless data. In general, both convex and nonconvex approaches have been suggested to solve the generic PR problem. However, these techniques are not readily applicable to various SA problems. In this tutorial, we provide a deep dive into the recent advances in PR for contemporary synthetic aperture imaging and sensing applications. Depending on the linear propagation model, diverse and scattered applications can be grouped into four main categories: Fourier PR, coded illumination, coded detection, and random measurements. We cover the respective theories, algorithms, and use cases, including applications of machine learning in this area. This tutorial also aims to foster interaction between the various SA disciplines, thereby leading to a better understanding of PR problems.
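A minimal example of the Fourier PR category is the classical error-reduction (alternating projection) algorithm, sketched below with invented problem sizes; it is a textbook baseline rather than one of the recent advances the tutorial covers. The iteration alternates between enforcing the measured Fourier magnitudes and the support/positivity constraint.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fourier phase retrieval by error reduction (alternating projections):
# the data-fit residual is non-increasing across iterations.
n, s = 64, 16
x_true = np.zeros(n)
x_true[:s] = rng.random(s)                # non-negative, compactly supported
mag = np.abs(np.fft.fft(x_true))          # phaseless measurements |F x|

def residual(x):
    return np.linalg.norm(np.abs(np.fft.fft(x)) - mag)

x = rng.random(n)                         # random initialization
err0 = residual(x)
for _ in range(500):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))    # project onto measured magnitudes
    x = np.fft.ifft(X).real               # back to a real-valued signal
    x[s:] = 0.0                           # project onto the support
    x = np.maximum(x, 0.0)                # and onto non-negativity
err_final = residual(x)
```

Error reduction can stagnate in local minima, which is one reason the convex, nonconvex, and learning-based alternatives surveyed in the tutorial were developed.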

***

### T-PM3 - Computational MRI in the Deep Learning Era: The Two Facets of Acquisition and Image Reconstruction

**Organizer:** Philippe Ciuciu (CEA)

**Abstract:** This tutorial aims to summarize recent learning-based advances in MRI, concerning both accelerated data acquisition and image reconstruction strategies. It is specifically tailored to graduate students, researchers and industry professionals working in the medical imaging field who want to know more about the radical shift machine learning has introduced in MRI over the last few years. As MRI is the most widely used medical imaging technique for non-invasively probing soft tissues in the human body (brain, heart, breast, liver, etc.), training PhD students, postdocs and researchers in electrical and biomedical engineering is strategic for cross-fertilizing the fields and for understanding the ML-related needs and expectations on the MRI side.

***

### T-PM4 - Privacy-Preserving Distributed Optimization: Theory, Methods and Applications

**Organizers:** Richard Heusdens (TU Delft) and Qiongxiu Li (Fudan University)

**Abstract:** In recent decades, distributed optimization has drawn increasing attention due to the demand for distributed signal processing and massive data processing over (large-scale) peer-to-peer networks of ubiquitous devices. Motivated by the increase in computational power of low-cost microprocessors, the range of applications for these networks has grown rapidly. Distributed optimization forms the core of numerous modern applications, including training machine learning models, target localization and tracking, healthcare monitoring, power grid management, and environmental sensing. Typically, due to the lack of infrastructure, the paradigm of distributed optimization is to solve the problem in a decentralized fashion by exchanging data only between neighbouring sensors/agents over wireless channels. This data exchange is a major privacy concern, because the exchanged data usually contain sensitive information, and traditional distributed optimization schemes do not address this issue. In addition, since most algorithms are implemented in an iterative fashion, communication costs should be limited. The primal-dual method of multipliers (PDMM) emerges as a critical framework in the landscape of distributed optimization, offering a means to tackle complex problems while safeguarding the privacy and security of the underlying data. Our objective is to provide a comprehensive introduction to PDMM, shedding light on its synergies with well-established optimization methods such as the alternating direction method of multipliers (ADMM). We aim to illustrate the adaptability, privacy-preservation capabilities, and communication efficiency of PDMM across a variety of conditions. Furthermore, this tutorial will underscore the significance of PDMM by delving into its practical applications in a range of contexts, from semidefinite programming to federated learning. We will unfold the multifaceted nature of PDMM through various lenses, demonstrating its broad applicability and inherent flexibility.
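To make the consensus-optimization setting concrete, here is a minimal global-consensus ADMM sketch for distributed averaging (a close relative of PDMM, not PDMM itself); the node values and penalty parameter are invented. Each node holds a private observation and the network agrees on their average without any node revealing its value directly.

```python
import numpy as np

# Distributed averaging as consensus optimization:
#   minimize sum_i (x_i - a_i)^2 / 2   subject to  x_i = z for all i.
a = np.array([1.0, 4.0, 2.0, 7.0, 6.0])   # private node observations
n, rho = len(a), 1.0
x = np.zeros(n)                             # local primal variables
u = np.zeros(n)                             # scaled dual variables
z = 0.0                                     # consensus variable
for _ in range(200):
    x = (a + rho * (z - u)) / (1.0 + rho)   # local primal updates
    z = np.mean(x + u)                      # aggregation step
    u = u + x - z                           # local dual updates
# z converges to the network average of the a_i
```

Note that only x + u is ever aggregated; this is the style of information exchange whose privacy properties (what the exchanged iterates leak about the private a_i) the tutorial analyses in depth for PDMM.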

### T-PM5 - Computational Sensing: From Shannon Sampling to Hardware-Software Co-Design

**Organizers:** Ayush Bhandari (Imperial College) and Ruiming Guo (Imperial College)

**Abstract:** Digital data capture is the backbone of all modern-day systems, and the “Digital Revolution” has aptly been termed the Third Industrial Revolution. Underpinning digital representation is the Shannon-Nyquist sampling theorem, followed by more recent developments such as compressive sensing. Almost all such approaches follow pointwise sampling strategies, which have transformed how we capture and process signals through sophisticated mathematical algorithms and have laid the groundwork for the next wave of machine learning (ML) and artificial intelligence (AI) applications.

Yet, as hardware is pushed to its limits and computational resources become more accessible and efficient, it remains questionable whether pointwise sampling is the best one can do. Across various scientific and engineering domains, researchers have explored alternative signal encoding methods, yielding unprecedented benefits. For instance, event-based or neuromorphic sampling in computational imaging and computer vision, one-bit ADCs in digital communications and massive MIMO systems, and the Unlimited Sensing Framework (USF) in signal processing have all demonstrated significant advantages of unconventional encoding over the traditional Shannon-Nyquist method.
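A minimal modulo-sampling sketch (inspired by, but not reproducing, the USF recovery algorithms) conveys the idea: if consecutive samples change by less than the folding threshold, the folded differences equal the true increments, and a cumulative sum recovers a signal far exceeding the ADC range. All signal parameters below are invented for illustration.

```python
import numpy as np

# A signal exceeding the ADC range [-lam, lam) is folded by a modulo
# non-linearity; an unwrapping argument recovers it from the folds.
lam = 1.0
t = np.linspace(0, 1, 400)
x = 3.0 * np.sin(2 * np.pi * 2 * t)          # peak is 3x the ADC range

y = np.mod(x + lam, 2 * lam) - lam           # folded (modulo) samples
d = np.diff(y)
d_wrapped = np.mod(d + lam, 2 * lam) - lam   # wrap increments into [-lam, lam)
x_hat = np.concatenate(([y[0]], y[0] + np.cumsum(d_wrapped)))
# x_hat == x provided |x[n+1] - x[n]| < lam and |x[0]| < lam
```

The stated oversampling condition is the crux: sampling densely enough relative to the folding threshold is what makes the unconventional encoding invertible, echoing the hardware-software co-design theme of the tutorial.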

Despite the proliferation of these novel encoding techniques—neuromorphic, one-bit, and modulo encoding—a comprehensive resource is missing that (i) provides a thorough introduction to these breakthroughs, (ii) unifies these concepts within the framework of sampling theory and SP in general, and (iii) fosters an end-to-end perspective by bridging theoretical concepts with their practical applications. Our tutorial aims to fill this gap for the signal processing (SP) community, offering an insightful introduction to the emerging field of Computational Sensing, which lies at the convergence of theory, algorithms, hardware, and experiments. Starting from foundational principles, we aim to guide both beginners and experts through the theoretical, algorithmic, hardware, and experimental aspects of Computational Sensing. Leveraging our decade-long experience with computational sensing and imaging, including the development of the USF, the tutorial will feature case studies complete with Matlab/Python code and experimental data from recent research, enabling participants to gain practical insights and stay abreast of cutting-edge developments.

In the spirit of reproducible research and a rich learning experience, we will provide Matlab code and hardware data.