Program > Tutorials

Tutorials are open to participants holding either a full or a student registration, upon payment of the corresponding fee of 30 Euros per tutorial.

  • T-AM1–T-AM5: Morning tutorials (9:00–12:30)
  • T-PM1–T-PM5: Afternoon tutorials (14:00–17:30)

 

Morning tutorials

T-AM1 - Learning under Requirements: Supervised and Reinforcement Learning with Constraints

Organizers: Miguel Calvo-Fullana (Universitat Pompeu Fabra), Luiz F.O. Chamon (University of Stuttgart), Santiago Paternain (Rensselaer Polytechnic Institute) and Alejandro Ribeiro (University of Pennsylvania)

Abstract: Requirements are integral to systems engineering tasks which are always defined as compromises between competing specifications such as accuracy, robustness, safety, and efficiency. As data plays an increasingly central role in systems design, requirements have also become of growing interest in machine learning (ML). Learning to satisfy requirements is, however, antithetical to the standard ML practice of minimizing individual losses. Constrained learning overcomes this challenge by incorporating requirements as statistical constraints rather than modifying the training objective. This tutorial provides an overview of theoretical and algorithmic developments that establish when and how it is possible to learn with constraints. We describe how theoretical guarantees and viable learning algorithms are hindered by lack of convexity of typical ML optimization problems and derive new non-convex duality results to circumvent these hurdles. Throughout this tutorial, we explore the impact of these results on supervised learning, robust learning, and reinforcement learning with constraints. We emphasize the breadth of potential applications of these tools by showcasing examples in fairness, federated learning, robust classification, learning under invariance, safe navigation, and wireless resource allocation. Ultimately, this tutorial provides a general tool that can be used to tackle a variety of problems in ML and sequential decision-making and prepare attendees to start conducting research in this emerging frontier.
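To make the primal-dual idea behind constrained learning concrete, here is a minimal sketch (our own illustration, not the organizers' material) on a one-dimensional toy problem: the constraint enters through a Lagrange multiplier, and training alternates a primal descent step on the Lagrangian with a projected dual ascent step on the constraint slack, rather than folding the requirement into a single modified loss.

```python
# Toy constrained problem: minimize (x - 2)^2 subject to x <= 1.
# Constrained learning replaces "minimize one loss" by a saddle-point
# problem on the Lagrangian L(x, lam) = (x - 2)^2 + lam * (x - 1).

def primal_dual(steps=5000, eta_x=0.01, eta_l=0.01):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        # primal descent step on the Lagrangian
        grad_x = 2.0 * (x - 2.0) + lam
        x -= eta_x * grad_x
        # dual ascent step on the constraint slack, projected to lam >= 0
        lam = max(0.0, lam + eta_l * (x - 1.0))
    return x, lam

x_star, lam_star = primal_dual()
# at the optimum the constraint is active: x* = 1 with multiplier lam* = 2
```

The same alternation underlies constrained supervised and reinforcement learning; there the losses are statistical expectations and the duality analysis of the tutorial explains when the non-convex saddle point still yields feasible, near-optimal solutions.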

Organizers' Bios:

Miguel Calvo-Fullana received the B.Sc. degree in electrical engineering from the Universitat de les Illes Balears in 2010 and the M.Sc. and Ph.D. degrees in electrical engineering from the Universitat Politecnica de Catalunya in 2013 and 2017, respectively. He joined Universitat Pompeu Fabra (UPF) in 2023, where he is a Ramon y Cajal fellow. Prior to joining UPF, he held postdoctoral appointments at the University of Pennsylvania and the Massachusetts Institute of Technology and was a research assistant at the Centre Tecnologic de Telecomunicacions de Catalunya (CTTC). His research interests include learning, optimization, multi-agent systems, and wireless communication. He is the recipient of best paper awards at ICC 2015, IEEE GlobalSIP 2015, and IEEE ICASSP 2020.

Luiz F. O. Chamon received the B.Sc. and M.Sc. degrees in electrical engineering from the University of São Paulo in 2011 and 2015 and the Ph.D. degree in electrical and systems engineering from the University of Pennsylvania in 2020. Until 2022, he was a postdoctoral fellow at the Simons Institute of the University of California, Berkeley. He is currently an independent research group leader at the University of Stuttgart. In 2009, he was an undergraduate exchange student of the Masters in Acoustics of the École Centrale de Lyon, France. He received both the best student paper and the best paper awards at IEEE ICASSP 2020 and was recognized by the IEEE Signal Processing Society for his distinguished work for the editorial board of the IEEE Transactions on Signal Processing in 2018. His research interests include optimization, signal processing, machine learning, statistics, and control.

Santiago Paternain received the B.Sc. degree in electrical engineering from Universidad de la Republica Oriental del Uruguay in 2012, the M.Sc. in Statistics from the Wharton School in 2018, and the Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2018. He is currently an Assistant Professor at the Rensselaer Polytechnic Institute (RPI). Prior to joining RPI, he was a postdoctoral researcher at the University of Pennsylvania. He was the recipient of the 2017 CDC Best Student Paper Award and the 2019 Joseph and Rosaline Wolfe Best Doctoral Dissertation Award from the Electrical and Systems Engineering Department at the University of Pennsylvania. His research interests lie at the intersection of machine learning and control of dynamical systems.

Alejandro Ribeiro received the B.Sc. degree in electrical engineering from the Universidad de la Republica Oriental del Uruguay in 1998 and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Minnesota in 2005 and 2007. He joined the University of Pennsylvania in 2008 where he is currently Professor of Electrical and Systems Engineering. Papers co-authored by Dr. Ribeiro received the 2022 IEEE Signal Processing Society Best Paper Award, the 2022 IEEE Brain Initiative Student Paper Award, the 2021 Cambridge Ring Publication of the Year Award, the 2020 IEEE Signal Processing Society Young Author Best Paper Award, the 2014 O. Hugo Schuck best paper award, and awards at EUSIPCO, IEEE ICASSP, IEEE CDC, IEEE SSP, IEEE SAM, Asilomar SSC Conference, and ACC. His teaching has been recognized with the 2017 Lindback award for distinguished teaching and the 2012 S. Reid Warren, Jr. Award presented by Penn’s undergraduate student body for outstanding teaching. He received an Outstanding Researcher Award from Intel University Research Programs in 2019. He is a Penn Fellow class of 2015 and a Fulbright scholar class of 2003. His research is in wireless autonomous networks, machine learning on network data and distributed collaborative learning.

 

T-AM2 - On the Advances in Message-Passing Algorithms and Practical Iterative Receiver Design for Digital Communications

Organizers: Serdar Sahin (Thales), Antonio Maria Cipriano (Thales) and Charly Pouillat (INP-ENSEEIHT Toulouse)

Abstract: In recent years, we have witnessed renewed interest from the wireless communications and signal processing research communities in iterative algorithms based on approximate Bayesian inference and message passing.
Such techniques previously shone through the success of probabilistic channel decoding algorithms in the 1990s, leading to widespread investigation of the “turbo principle” and the belief propagation (BP) algorithm in the early 2000s, in many digital receiver design problems involving estimation, equalization, detection and decoding.
A decade later, the increased popularity of expectation propagation (EP) for handling intractable variational inference problems, along with other emerging practical approximate message passing (AMP) techniques, has led to novel iterative signal estimation algorithms with attractive complexity-performance trade-offs. In particular, these algorithms are able to produce signal estimates uncorrelated with the observations and the priors, a property which considerably reduces error propagation in the iterative estimation process and whose performance in the asymptotic regime can be accurately predicted.
Reinforced by other practical variational inference techniques such as expectation maximization (EM) or mean-field (MF) approximations, and by emerging deep-learning-aided design approaches, advanced iterative algorithm design remains a hot topic in various signal processing and machine learning communities. Indeed, researchers have been proposing competing strategies such as sparse Bayesian learning (SBL), the expectation consistent optimization framework, memory AMP (MAMP), orthogonal AMP (OAMP), vector AMP (VAMP) and many others to seek near-optimal estimation performance, with significant variations both in algorithmic complexity and in the underlying theoretical background. Consequently, it has become significantly more difficult to get a grasp on this rapidly developing literature, which incorporates many closely related contributions as well as some important overlooked developments.
Our tutorial aims to provide a synthetic view of the major developments in variational Bayesian inference techniques, mainly from a message-passing algorithm perspective, and to assess the capabilities of this arsenal based on EP and other related AMP techniques for addressing stochastic signal processing problems. In particular, in the context of digital communication receiver design, we emphasize the importance of factor graph assumptions and scheduling strategies for achieving a reasonable performance-complexity trade-off. Indeed, recent years have shown that these techniques lead to practical implementations of iterative receivers for non-orthogonal multiple access (NOMA), multiple-input multiple-output (MIMO) and single-carrier (SC) systems, among others. With the help of additional optimization through deep unfolding, and with optimized channel code design, such techniques can be expected to play an important role in next-generation communication transceivers.
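As a hedged, self-contained sketch of the AMP family discussed above (an illustration of ours, with illustrative rather than tuned parameters), the following iteration recovers a sparse vector from random linear measurements. The distinguishing ingredient is the Onsager correction added to the residual, which is what decorrelates the effective noise from the current estimate; dropping it degrades the scheme to plain iterative thresholding.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse recovery via AMP with a soft-threshold denoiser (toy sizes).
n, m, k = 256, 128, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = A @ x0                                        # noiseless measurements

x, z = np.zeros(n), y.copy()
for _ in range(50):
    theta = 2.0 * np.linalg.norm(z) / np.sqrt(m)  # threshold ~ 2x noise level
    r = x + A.T @ z                               # pseudo-data ("extrinsic" input)
    x = np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)   # soft threshold
    # Onsager-corrected residual: the extra term decorrelates errors
    z = y - A @ x + z * (np.count_nonzero(x) / m)

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

In this well-conditioned regime the iteration drives the relative error close to zero; the asymptotic behaviour of exactly this kind of recursion is what state evolution predicts.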

 

T-AM3 - An Introduction to Bivariate Signal Processing: Polarization, Quaternions and Geometric Representations

Organizers: Nicolas Le Bihan (Gipsa Lab) and Julien Flamant (Université de Lorraine)

Abstract: An important task of data science is to represent and reveal the interrelations between coupled observables. The simple case of two observables varying in time or space leads to bivariate signals. These appear in virtually all fields of the physical sciences, whenever two quantities of interest are related and jointly measured, such as in seismology (e.g. horizontal vs. vertical ground motion), optics (transverse coordinates of the electric field), oceanography (components of current velocities), or underwater acoustics (horizontal vs. vertical particle displacements), to name a few.
Bivariate signals describe trajectories in a 2D plane whose geometric properties (e.g. directionality) have a natural interpretation in terms of the physical notion of polarization usually used for waves. As an example, according to Einstein’s theory of general relativity, the recently detected gravitational waves (GWs) are characterized by two degrees of freedom, that are connected to the bivariate space-time strain signal measured by the detectors. The polarization state of the observed signal is directly connected to that of the wave, which in turn provides key insights into the underlying physics of the source. Polarization is thus a central concept for the analysis of bivariate signals.
Two representations are commonly used for analyzing and processing bivariate signals. The first is based on 2D real-valued vectors, where a bivariate signal is seen as a special case of a more general multivariate signal. The second relies on a complex representation, where the real and imaginary parts of a univariate complex signal correspond to the two components of the above 2D real-valued vector signal. Both representations have their merits; however, neither provides a straightforward description of bivariate signals or of filtering operations in terms of polarization properties. To this end, the tutorial speakers (and their collaborators) have introduced a new and generic approach for the analysis and filtering of bivariate signals. It relies, on the one hand, on the natural embedding of bivariate signals – viewed as complex-valued signals – into the set of quaternions and, on the other hand, on the definition of a dedicated quaternion Fourier transform that enables a meaningful spectral representation and analysis of bivariate signals. This quaternion representation lays down the first building blocks of a signal processing methodology that simultaneously provides: (i) straightforward geometric and physical interpretations, (ii) mathematically grounded tools, and (iii) efficient numerical implementations.
This tutorial aims to provide insights into the specificities of bivariate signal processing and its deep connections with the physics of polarization. Emphasis will be placed on the geometric interpretation of bivariate signal processing, illustrated with tutorial examples based on a dedicated Python toolbox.
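The link between geometry and polarization can be previewed with plain NumPy (this is our own minimal sketch, not the organizers' quaternion toolbox): embedding a bivariate signal (u, v) as the complex signal z = u + iv, the balance of power between positive and negative frequencies of z measures the degree of circular polarization, a Stokes-like S3/S0 parameter.

```python
import numpy as np

def circular_polarization(u, v):
    """Degree of circular polarization of the bivariate signal (u, v)."""
    z = u + 1j * v                    # complex embedding of the 2D trajectory
    Z = np.fft.fft(z)
    n = len(Z)
    p_pos = np.sum(np.abs(Z[1:n // 2]) ** 2)     # counter-clockwise rotation
    p_neg = np.sum(np.abs(Z[n // 2 + 1:]) ** 2)  # clockwise rotation
    return (p_pos - p_neg) / (p_pos + p_neg)

t = np.linspace(0, 1, 512, endpoint=False)
u = np.cos(2 * np.pi * 8 * t)
v = np.sin(2 * np.pi * 8 * t)
# (u, v) traces a circle -> degree close to +1 (circular polarization)
# (u, 0) traces a line   -> degree close to 0 (linear polarization)
```

The quaternion Fourier transform developed by the speakers generalizes this decomposition so that all polarization parameters, not just the circular degree, acquire a direct spectral reading.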

 

T-AM4 - Graph Learning in Signal Processing & Machine Learning

Organizers: Florent Bouchard (CentraleSupélec), Arnaud Breloy  (CNAM), Ammar Mian (Université Savoie Mont-Blanc) and Alexandre Hippert-Ferrer (Université Gustave Eiffel)

Abstract: Graphs are ubiquitous and fundamental structures that offer a meaningful way to represent links between entities. These structures have been successfully leveraged in many emerging fields such as graph signal processing (GSP), optimal transport, and graph neural networks (GNN). Most works, however, deal with a known graph, i.e., they develop algorithms that assume prior knowledge of the structure of relationships between the entities. Upstream of these techniques, graph learning (or graph structure learning) aims at unveiling the unknown graph topology underlying the data, so that the aforementioned techniques can then be applied. This problem has motivated numerous works, both unsupervised (learning a graph that reflects the data structure) and supervised (learning a graph that optimizes a task such as classification). This tutorial will present a general introduction to the topic, as well as a panorama of recent advances. The talk is divided into four parts. We will first motivate graph learning problems through application setups, e.g., clustering without graph knowledge, learning a graph to perform GSP, or setting up the structure of a GNN. Second, we will present fundamental notions that provide the necessary tools for the rest of the talk: statistical models (dealing with uncertainties and missing data), problem formulations, optimization related to graph structures (notably the basics of Riemannian optimization), etc. Third, a focus will be put on unsupervised graph learning, where we will cover statistical formulations of graph learning based on conditional correlation structures, as well as more recent Laplacian learning methods. The last part will concern the supervised case and present recent works addressing the problem of refining the graph of a GNN according to a task.
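The statistical route via conditional correlation structures can be illustrated in a few lines (a toy sketch of ours, under the assumption of Gaussian data): when signals follow a Gaussian Markov random field whose precision matrix is a regularized graph Laplacian, the support of the inverse sample covariance reveals the edges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth graph: a 5-node path 0-1-2-3-4.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian
Theta = L + 0.5 * np.eye(n)           # regularized precision matrix

# Draw signals from the corresponding Gaussian Markov random field.
cov = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(n), cov, size=20000)

# Graph learning step: the off-diagonal support of the inverse sample
# covariance (the conditional correlation structure) gives the edges.
Theta_hat = np.linalg.inv(np.cov(X, rowvar=False))
off_diag = np.abs(Theta_hat - np.diag(np.diag(Theta_hat)))
A_hat = (off_diag > 0.3).astype(float)
# A_hat recovers the ground-truth adjacency A
```

Real graph learning methods covered in the tutorial add sparsity-promoting penalties, Laplacian constraints and robustness to missing data on top of this basic principle.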

 

T-AM5 - Signal Detection: Model-Based, Data-Driven, and Hybrid Statistical Approaches

Organizer: Angelo Coluccia (Università del Salento)

Abstract: There is a recent trend of adopting data-driven techniques, namely machine learning (including shallow and deep neural networks), to design novel solutions for signal processing problems traditionally addressed by model-based approaches. The latter, on the other hand, have been adopted with effective results for decades, while providing control and interpretability. Within the quest to find a good balance between these two worlds, an interesting approach is to combine them by means of hybrid techniques. This tutorial will give an overview of recent work in this respect for the problem of signal detection, with a focus on multi-dimensional signal processing. It will discuss how data-driven tools can be coupled with traditional detection statistics for both unstructured and structured signal models, in white and correlated noise. Classical hypothesis testing approaches, namely Neyman-Pearson and the GLRT, will be revisited through the lens of machine learning classification. Adaptive detection of (radar) signals in noise is chosen as the application field. The topic is very timely, both for the signal processing community at large and for people working more specifically in signal detection, beyond the radar domain.
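As a baseline for the model-based side of this comparison, the following minimal sketch (our own illustration, not the tutorial's material) implements a Neyman-Pearson matched-filter detector for a known signature in white Gaussian noise, setting the threshold by Monte Carlo from the null distribution rather than the closed-form Q-function to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

n, trials, pfa_target = 32, 100_000, 0.05
s = np.ones(n) / np.sqrt(n)               # unit-norm known signature

# H0 (noise only): calibrate the threshold at the target false-alarm rate.
noise = rng.standard_normal((trials, n))
t_h0 = noise @ s                          # matched-filter statistic under H0
tau = np.quantile(t_h0, 1 - pfa_target)   # Neyman-Pearson threshold

# H1 (signal present at amplitude 2): estimate the detection probability.
x_h1 = 2.0 * s + rng.standard_normal((trials, n))
pd = np.mean(x_h1 @ s > tau)
pfa = np.mean(t_h0 > tau)
```

Viewed as a classifier, this detector draws a linear decision boundary whose offset is fixed by the false-alarm constraint; the hybrid approaches in the tutorial keep that constraint while learning richer statistics from data.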

 

Afternoon tutorials

T-PM1 - Self-Supervised Learning Methods for Imaging

Organizers: Julián Tachella (ENS de Lyon) and Mike Davies (University of Edinburgh)

Abstract: This tutorial will cover core concepts and recent advances in the emerging field of self-supervised learning methods for solving imaging inverse problems with deep neural networks. Self-supervised learning is a fundamental tool for deploying deep learning solutions in scientific and medical imaging applications where obtaining a large dataset of ground-truth images is very expensive or impossible. The tutorial will provide a comprehensive summary of different self-supervised methods, discuss their theoretical underpinnings, and present practical self-supervised imaging applications.

Tutorial outline:

  • Introduction to imaging inverse problems with deep learning
  • Learning from noisy measurement data: a tour of unbiased risk estimators, including SURE, Noise2Noise and more.
  • Learning from incomplete measurements: leveraging operator diversity or invariance to groups of transformations.
  • Model identification theory: when can we expect to learn from measurement data alone?
  • Perspectives: open problems in the field.
  • Coding tutorial: self-supervised learning with the deepinverse library (https://deepinv.github.io/).
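The unbiased risk estimators in the outline can be previewed with a minimal sketch (ours, independent of the deepinverse library): Stein's Unbiased Risk Estimator (SURE) scores a denoiser using only the noisy data. For the linear shrinkage denoiser f(y) = a*y the divergence term is a*n, so SURE has a closed form that we can compare against the true, normally unobservable, MSE.

```python
import numpy as np

rng = np.random.default_rng(2)

n, sigma = 10_000, 1.0
x = rng.uniform(-3, 3, n)                  # clean signal (used only for checking)
y = x + sigma * rng.standard_normal(n)     # noisy observation

a = 0.7                                    # illustrative shrinkage factor
f_y = a * y                                # linear shrinkage denoiser

# SURE = ||f(y) - y||^2 / n  -  sigma^2  +  (2 sigma^2 / n) * div f(y),
# with div f(y) = a * n for linear shrinkage.
sure = np.mean((f_y - y) ** 2) - sigma ** 2 + 2 * sigma ** 2 * a

true_mse = np.mean((f_y - x) ** 2)         # requires ground truth x
# sure approximates true_mse without ever touching x
```

Because SURE needs no ground truth, it can serve directly as a self-supervised training loss, which is the role it plays in the methods surveyed in this tutorial.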

Organizers' Bios:

Julián Tachella received the electronic engineering degree from Instituto Tecnológico de Buenos Aires, Argentina, in 2016, and the Ph.D. degree from Heriot-Watt University, U.K., in 2020. He currently holds a Centre National de Recherche Scientifique (CNRS) research scientist position at the École Normale Supérieure de Lyon, France. His research interests include computational imaging, inverse problems and deep learning. He is also interested in various imaging problems, such as single-photon lidar and non-line-of-sight imaging.

Mike Davies received his M.A. degree from the Queens' College Cambridge and his Ph.D. from University College London. He currently holds the Jeffrey Collins Chair in Signal and Image Processing at the University of Edinburgh. He is a recipient of a Royal Society University Research Fellowship, and a European Research Council (ERC) advanced grant on Computational Sensing. He is a fellow of the IEEE, the European Association for Signal Processing, and the Royal Academy of Engineering. His research focuses on low-dimensional signal models, compressed sensing, computational imaging and machine learning.

 

T-PM2 - Theory and Applications of Phase Retrieval in Synthetic Aperture Imaging and Sensing

Organizers: Kumar Vijay Mishra (United States Army Research Laboratory) and Samuel Pinilla (Science and Technology Facilities Council)

Abstract: Synthetic aperture (SA) systems generate a larger aperture with greater spatial/temporal resolution than is inherently possible from the physical dimensions of a single sensor alone. These apertures are found in various signal processing applications such as optics, radar, remote sensing, microscopy, acoustics, and tomography. SA processing often involves phase retrieval (PR), wherein a complex signal is to be recovered from phaseless data. In general, both convex and nonconvex approaches have been suggested to solve the generic PR problem. However, these techniques are not readily applicable to various SA problems. In this tutorial, we provide a deep dive into the recent advances in PR for contemporary synthetic aperture imaging and sensing applications. Depending on the linear propagation model, diverse and scattered applications can be grouped into four main categories: Fourier PR, coded illumination, coded detection, and random. We cover the respective theories, algorithms, and use cases, including applications of machine learning in this area. This tutorial also aims to foster interaction between the various SA disciplines, thereby leading to a better understanding of PR problems.
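The Fourier PR category can be previewed with the classic Gerchberg-Saxton iteration (a toy sketch of ours, not one of the SA-specific algorithms of the tutorial): alternate between imposing the known Fourier magnitude and the known signal-domain magnitude, each time keeping the current phase estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 64
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
mag_time = np.abs(x_true)                  # known signal-domain magnitude
mag_freq = np.abs(np.fft.fft(x_true))      # known (phaseless) Fourier data

# Start from the known signal magnitude with a random phase guess.
x = mag_time * np.exp(2j * np.pi * rng.random(n))
residuals = []
for _ in range(200):
    X = np.fft.fft(x)
    residuals.append(np.linalg.norm(np.abs(X) - mag_freq))
    X = mag_freq * np.exp(1j * np.angle(X))   # impose Fourier magnitude
    x = np.fft.ifft(X)
    x = mag_time * np.exp(1j * np.angle(x))   # impose signal magnitude
# the magnitude-mismatch residual decreases as the iteration proceeds
```

This alternating-projection view is the historical starting point; the tutorial's coded illumination and coded detection models exist precisely because extra structure is needed for reliable, unique recovery in practical SA systems.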

 

T-PM3 - Computational MRI in the Deep Learning Era: The Two Facets of Acquisition and Image Reconstruction

Organizer: Philippe Ciuciu (CEA)

Abstract: This tutorial aims to summarize recent learning-based advances in MRI, concerning both accelerated data acquisition and image reconstruction strategies. It is specifically tailored to graduate students, researchers and industry professionals working in the medical imaging field who want to know more about the radical shift machine learning has introduced to MRI over the last few years. As MRI is the most widely used medical imaging technique for non-invasively probing soft tissues in the human body (brain, heart, breast, liver, etc.), training PhD students, postdocs and researchers in electrical and biomedical engineering is strategic both for cross-fertilizing the fields and for understanding the ML-related needs and expectations from the MRI side.

 

T-PM4 - Privacy-Preserving Distributed Optimization: Theory, Methods and Applications

Organizers: Richard Heusdens (TU Delft) and Qiongxiu Li (Fudan University)

Abstract: In recent decades, distributed optimization has drawn increasing attention due to the demand for distributed signal processing and massive data processing over (large-scale) peer-to-peer networks of ubiquitous devices. Motivated by the increase in computational power of low-cost microprocessors, the range of applications for these networks has grown rapidly. Distributed optimization forms the core of numerous modern applications, including training machine learning models, target localization and tracking, healthcare monitoring, power grid management, and environmental sensing. Typically, due to the lack of infrastructure, the paradigm of distributed optimization is to solve the problem in a decentralized fashion by exchanging data only between neighbouring sensors/agents over wireless channels. This data exchange raises a major privacy concern, because the exchanged data usually contain sensitive information, and traditional distributed optimization schemes do not address this issue. In addition, since most algorithms are implemented in an iterative fashion, communication costs should be limited. The primal-dual method of multipliers (PDMM) emerges as a critical framework in the landscape of distributed optimization, offering a means to tackle complex problems while safeguarding the privacy and security of the underlying data. Our objective is to provide a comprehensive introduction to PDMM, shedding light on its synergies with well-established optimization methods such as the alternating direction method of multipliers (ADMM). We aim to illustrate the adaptability, privacy-preservation capabilities, and communication efficiency of PDMM across a variety of conditions. Furthermore, this tutorial will underscore the significance of PDMM by delving into its practical applications in a range of contexts, from semidefinite programming to federated learning. We will unfold the multifaceted nature of PDMM through various lenses, demonstrating its broad applicability and inherent flexibility.
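In place of a full PDMM or ADMM implementation, the following minimal sketch (our own, deliberately simplified) shows the communication pattern these methods share: every node updates its estimate using only values received from direct neighbours, yet the whole network converges to the global average, the simplest instance of a decentralized consensus problem.

```python
import numpy as np

def decentralized_average(values, edges, alpha=0.2, iters=500):
    """Average consensus by neighbour-to-neighbour exchange only."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x_new = x.copy()
        for i, j in edges:                      # one message per edge
            x_new[i] += alpha * (x[j] - x[i])
            x_new[j] += alpha * (x[i] - x[j])
        x = x_new                               # sum is preserved each round
    return x

# A 5-node ring network: every node ends up near the mean of all values.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
x = decentralized_average([1.0, 5.0, 3.0, 7.0, 4.0], edges)
```

PDMM solves far more general problems over the same edge-wise exchange pattern, and the tutorial shows how the transmitted variables can additionally be protected so that the sensitive node data never leave the device in the clear.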

 

T-PM5 - Computational Sensing: From Shannon Sampling to Hardware-Software Co-Design

Organizers: Ayush Bhandari (Imperial College) and Ruiming Guo (Imperial College)

Abstract: Digital data capture is the backbone of all modern-day systems, and the “Digital Revolution” has aptly been termed the Third Industrial Revolution. Underpinning digital representation is the Shannon-Nyquist sampling theorem, followed by more recent developments such as compressive sensing. Almost all such approaches follow pointwise sampling strategies, which have transformed how we capture and process signals through sophisticated mathematical signal processing algorithms, also laying the groundwork for the next wave of machine learning (ML) and artificial intelligence (AI) applications.
Yet, as hardware is pushed to its limits and computational resources become more accessible and efficient, it remains questionable whether pointwise sampling is the best one can do. Across various scientific and engineering domains, researchers have explored alternative signal encoding methods, yielding unprecedented benefits. For instance, event-based or neuromorphic sampling in computational imaging and computer vision, one-bit ADCs in digital communications and massive MIMO systems, and the Unlimited Sensing Framework (USF) in signal processing have all demonstrated significant advantages of unconventional encoding over the traditional Shannon-Nyquist method.
Despite the proliferation of these novel encoding techniques—neuromorphic, one-bit, and modulo encoding—a comprehensive resource is missing that (i) provides a thorough introduction to these breakthroughs, (ii) unifies these concepts within the framework of sampling theory and SP in general, and (iii) fosters an end-to-end perspective by bridging theoretical concepts with their practical applications. Our tutorial aims to fill this gap for the signal processing (SP) community, offering an insightful introduction into the emerging field of Computational Sensing that lies at the convergence of theory, algorithms, hardware, and experiments. Starting from foundational principles, we aim to guide both beginners and experts through the theoretical, algorithmic, hardware, and experimental aspects of Computational Sensing. Leveraging our decade-long experience with computational sensing and imaging including the development of the USF, the tutorial will feature case studies complete with Matlab/Python code and experimental data from recent research, enabling participants to gain practical insights and stay abreast of cutting-edge developments.
In the spirit of reproducible research and a rich learning experience, we will provide Matlab code and hardware data.
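The modulo-encoding idea behind unlimited sensing can be previewed in a few lines (a noiseless toy sketch of ours, not the USF recovery algorithms with their guarantees): a signal exceeding the ADC range is folded into [-lam, lam) before sampling, and when the signal is oversampled enough that true sample-to-sample increments stay below lam, the folding can be undone by unwrapping the jumps.

```python
import numpy as np

lam = 1.0                                        # ADC range: [-1, 1)
t = np.linspace(0, 1, 400)
x = 3.0 * np.sin(2 * np.pi * 2 * t)              # amplitude 3 >> ADC range

# Modulo encoding: fold the signal into the ADC range before sampling.
y = np.mod(x + lam, 2 * lam) - lam

# Decoding: any sample-to-sample jump larger than lam must come from a
# fold, so subtract the nearest multiple of 2*lam from each increment
# and re-integrate (x[0] is within range, so y[0] anchors the sum).
d = np.diff(y)
d -= 2 * lam * np.round(d / (2 * lam))
x_hat = y[0] + np.concatenate(([0.0], np.cumsum(d)))
# x_hat reconstructs x exactly in this noiseless, oversampled setting
```

The hardware-software co-design question of the tutorial is precisely how far this kind of encoder/decoder pair can be pushed once quantization, noise and real ADC circuits enter the picture.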

