Talks and presentations

Injecting Hamiltonian Architectural Bias into Deep Graph Networks for Long-Range Propagation

August 19, 2024

Invited Talk, AI4Science Talks, Stuttgart, Germany

Link to Slides
I was very happy to be invited by the Machine Learning for Simulation Science lab of the University of Stuttgart and NEC Labs Europe GmbH.

Abstract: The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning, especially when considering long-range propagation. This calls for principled approaches that control and regulate the degree of propagation and dissipation of information throughout the neural flow. Motivated by this, we introduce (port-)Hamiltonian Deep Graph Networks, a novel framework that models neural information flow in graphs by building on the laws of conservation of Hamiltonian dynamical systems. We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors, introducing tools from mechanical systems to gauge the equilibrium between the two components. Our approach can be applied to general message-passing architectures, and it provides theoretical guarantees on information conservation in time. Empirical results prove the effectiveness of our port-Hamiltonian scheme in pushing simple graph convolutional architectures to state-of-the-art performance in long-range benchmarks.
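For readers unfamiliar with the terminology, the standard (port-)Hamiltonian form sketches the idea; the exact parametrization used in the talk may differ. A Hamiltonian system evolves as

$$\dot{x}(t) = J\,\nabla H(x(t)), \qquad J = -J^\top,$$

so the energy $H$ is conserved along trajectories ($\tfrac{d}{dt}H = \nabla H^\top J\,\nabla H = 0$), which is what makes the propagation non-dissipative. The port-Hamiltonian extension

$$\dot{x}(t) = \left(J - R\right)\nabla H(x(t)) + B\,u(t), \qquad R = R^\top \succeq 0,$$

adds a dissipation term $R$ and an external port $Bu$, which gives a handle for balancing the conservative and non-conservative behaviors mentioned in the abstract.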

Rediscovering Chaos? Analysis of GPU Computing Effects in Graph-coupled NeuralODEs

July 11, 2024

Poster, WSOM+MiWoCI'24, UAS Mittweida, Mittweida, Germany

Download Poster.pdf
One of the fundamental guidelines in research is to publish reproducible results. Without this core assumption, conclusions and decisions cannot be trusted. At the same time, GPU computing has become the de facto default computing platform in recent years. This paper studies the confounding effects of GPU acceleration, finite-precision arithmetic, and the modeling of chaotic systems. Especially for end-to-end learned NeuralODE models, the dynamics of both the forward and the backward pass need to be stable and reliably reproducible. In this work, we investigate the phenomenon of non-reproducible computations despite fixed random seeds and data batches, and discuss how to retain the sweet spot of computing highly efficiently, modeling complex dynamics, and remaining deterministically reproducible.
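As a minimal PyTorch sketch of the effect (my illustration, not code from the paper): scatter-style reductions, which are ubiquitous in graph-coupled models, use atomic floating-point adds on the GPU, so the summation order, and hence the rounding, can vary between runs even with fixed seeds.

```python
import torch

torch.manual_seed(0)  # fixed seed: inputs are identical across runs

# Scatter-style aggregation as used in message passing / graph-coupled ODEs.
x = torch.randn(1_000_000, device="cuda")
idx = torch.randint(0, 10, (1_000_000,), device="cuda")

# index_add_ on CUDA uses atomic adds: the accumulation order is not fixed,
# so finite-precision rounding can differ between the two calls below.
out1 = torch.zeros(10, device="cuda").index_add_(0, idx, x)
out2 = torch.zeros(10, device="cuda").index_add_(0, idx, x)
print(torch.equal(out1, out2))  # may be False despite identical inputs

# In chaotic dynamics, such last-bit differences get amplified exponentially.
# PyTorch can enforce deterministic kernels at a performance cost
# (ops without a deterministic implementation will raise an error):
torch.use_deterministic_algorithms(True)
```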

A Primer on Over-Squashing and Over-Smoothing Phenomena in Graph Neural Networks

August 23, 2023

Talk, MiWoCI'23, UAS Mittweida, Mittweida, Germany

In recent years, graph neural networks (GNNs) have gained a lot of attention in research. Thanks to their ability to embed graph structures, numerous downstream tasks can now be successfully addressed at the level of nodes, edges, and graphs. Despite this success, increasing the number of layers to capture long-range dependencies leads to over-squashing and over-smoothing of feature representations. After building intuition about why and where these problems occur, this talk focuses on strategies to overcome over-squashing. Inspired by the formulation of deep neural architectures as stable and non-dissipative ODEs, long-range dependencies in the information flow can be preserved. Finally, an outlook on a different formulation in terms of Hamiltonian dynamics is given.
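To make the over-smoothing part concrete, here is a small toy sketch (my illustration, not material from the talk): repeated mean aggregation over a connected graph drives all node representations toward a common value, so after many layers the nodes become indistinguishable.

```python
import torch

# Toy graph: a triangle (0-1-2) with a pendant node 3 attached to node 2.
A = torch.tensor([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=torch.float)
deg = A.sum(dim=1)
A_hat = A / deg.unsqueeze(1)          # row-normalized adjacency (mean aggregation)

X = torch.randn(4, 8)                 # random node features
for _ in range(50):
    X = A_hat @ X                     # one (linear) message-passing step

# Per-feature standard deviation across nodes is now ~0:
# all node representations have collapsed onto the same vector.
print(X.std(dim=0))
```

Nonlinearities and weight matrices complicate the picture, but this averaging effect is the intuition behind over-smoothing; over-squashing is the complementary failure mode, where many long-range messages are compressed through bottleneck edges into fixed-size vectors.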