Seminars
AI + High Energy Physics research presentations and discussions
Our online joint seminars sit at the intersection of AI and fundamental physics and are open to all interested participants across East Asia and beyond. For Zoom links, please check our Slack channel or contact the organizers.
Upcoming Seminars
- Apr. 28 (Tue)
-
Energy-information trade-off optimizes the cortical critical power law coding
- Speaker: Jun-nosuke Teramae (Kyoto University)
- Time: 1:30 PM JST/KST, 12:30 PM Beijing
- link to the seminar page: UTokyo
-
How neurons in the brain represent sensory information is one of the central questions in neuroscience. Recent experiments addressing this problem have revealed that the stimulus responses of cortical neurons exhibit a critical power law. This criticality is hypothesized to balance expressivity and robustness in neural encoding by avoiding the so-called fractal regime, where neural responses become overly sensitive to input perturbations. However, contrary to this assumption, we mathematically prove that neural coding is more robust than previously believed. We develop a theory that provides an analytical expression for the Fisher information in population coding and show that, due to its intrinsic high dimensionality, population coding does not degrade even in the fractal regime. Furthermore, we show that the trade-off between energy consumption and the efficiency of information coding results in the critical power law being the optimal population coding for sensory information.
Reference: Tatsukawa & Teramae, The cortical critical power law balances energy and information in an optimal fashion, PNAS 122 (21), e2418218122 (2025)
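For readers outside neuroscience, the key quantity in this abstract is the Fisher information of the population response, which via the Cramér–Rao bound sets the best achievable decoding precision. In generic notation (not the paper's):

```latex
I(s) \;=\; \mathbb{E}_{r \sim p(r \mid s)}\!\left[\left(\frac{\partial}{\partial s}\,\log p(r \mid s)\right)^{2}\right],
\qquad
\operatorname{Var}\!\left(\hat{s}\right) \;\ge\; \frac{1}{I(s)}
```

Here $r$ is the population response to a stimulus $s$, and $\hat{s}$ is any unbiased estimator of the stimulus decoded from $r$; larger $I(s)$ means more robust coding.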
- May 20 (Wed)
-
TBA
- Speaker: Marie Hein (RWTH Aachen)
- Time: 3:00 PM JST/KST, 2:00 PM Beijing
Past Seminars 2026
- Apr. 16 (Thu)
-
Searching For Anomalies with Foundation Models
- Speaker: Vinicius Mikuni (Nagoya University)
- Time: 2:00 PM JST/KST, 1:00 PM Beijing
- link to the seminar page: UTokyo
- link to the seminar page: RIKEN
-
Anomaly detection relaxes the assumptions of how new physics should look and extends the reach of what we can discover. However, interpreting the data and estimating backgrounds remain challenging. In this new work, we investigate anomalous events selected by the OmniLearned foundation model across different model sizes, performing a full analysis using CMS Open Data. Surprisingly, models of different sizes, trained on the same data with the same loss functions, select entirely different collisions. In particular, the large OmniLearned model (500M parameters) selects events that are not well described by our background model.
- Apr. 15 (Wed)
-
Toward AI for Physics: From Physical Law Discovery to Scientific Research Agents
- Speaker: Xiang Li (Peking University)
- Time: 2:00 PM Beijing, 3:00 PM JST/KST
-
Artificial intelligence in physics should not be viewed merely as a tool for narrow tasks such as formula fitting or benchmark problem solving. More fundamentally, it offers the possibility of enabling AI systems to participate in the formation of physical knowledge and in the actual practice of scientific research. In this talk, I will present our recent efforts toward this broader goal. Using AI-Newton as an example, I will argue that the discovery of universal physical laws requires structured conceptual representations and domain-specific language formulations that go beyond standard symbolic regression. I will then discuss LOCA, which shows how explicit logical frameworks can improve the ability of large language models to perform precise scientific reasoning. Finally, I will introduce Aether, our ongoing effort to develop a more general scientific research agent system equipped with domain expertise, tool-use capabilities, and flexible human-AI interaction. Taken together, these works suggest a promising emerging paradigm for scientific discovery, centered on the integration of knowledge representation, logical reasoning, and research workflows.
-
Xiang Li is a Boya Postdoctoral Fellow at Peking University, working with Prof. Yan-Qing Ma. His research includes AI for science and perturbative quantum field theory. He is particularly interested in how AI can be used to support scientific discovery, with current work spanning AI-driven law discovery, research-oriented AI agent systems, and finding new methods in perturbative QFT. He is actively involved in the development of Aether, an open-source research agent system designed to support scientists in real research workflows. His broader goal is to explore how AI systems can assist human researchers and accelerate progress on frontiers of science.
- Apr. 1 (Wed)
-
Neural Networks from the Perspective of Physics
- Speaker: Jaeok Yi (KAIST)
- Time: 3:00 PM JST/KST, 2:00 PM Beijing
- link to the seminar page
-
Despite the remarkable empirical success of deep learning, a comprehensive theoretical understanding of why and how neural networks learn remains a mystery. In this talk, we discuss physics-inspired approaches to understanding neural networks. We present synaptic field theory, a framework that reformulates the gradient descent dynamics of synaptic weights as classical field dynamics in de Sitter spacetime, constructing an action whose metric naturally matches that of a universe with a positive cosmological constant. This framework faces a challenge related to the non-locality of the cost function. To address this issue, we explore the idea of promoting neurons to dynamical degrees of freedom. Leveraging properties of stochastic gradient descent, the Lagrangian can be decomposed into a data-independent bulk part and a data-dependent boundary part. This decomposition is expected to separate the architectural structure from the stochastic properties of neural networks, enabling independent analysis of each. Through this line of research, we aim to provide physicists with a familiar language to investigate the theoretical foundations of machine learning.
- Mar. 24 (Tue)
-
Connecting Simulations and Observations with Differentiable Simulations and Field Level Inference
- Speaker: Benjamin Horowitz (IPMU)
- Time: 5:00 PM JST/KST, 4:00 PM Beijing
- link to the seminar page
-
The rapid growth of both astrophysical data and simulation capabilities is creating a new opportunity. Instead of being tied to summary statistics (like correlation functions and power spectra), we can begin to connect simulations and observations directly at the field level. In this talk, I will present a framework for field-level, multi-probe inference built around differentiable simulations, where gradients can be propagated through the forward model itself. I will focus on diffhydro, a differentiable hydrodynamics framework written in JAX that combines modern multiphysics solvers with end-to-end automatic differentiation. Starting from simple dark-matter models, we can incrementally add more realistic physics, including turbulence, radiative heating and cooling, and self-gravity, while retaining the ability to optimize directly through the simulation. This makes it possible to connect observations to initial conditions (i.e. latent fields), physical parameters, and unresolved processes in a unified way. I will show how these ideas open the door to reconstructing the history of the Universe and how the same framework can be a platform for embedded machine learning models for additional acceleration and new physics discovery.
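The core idea of propagating gradients through a forward model can be illustrated without JAX by pushing a minimal forward-mode autodiff (dual numbers) through a toy time-stepped simulation. Everything below (the `Dual` class, the harmonic-oscillator model) is an illustrative sketch, not part of diffhydro:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number: carries a value and its derivative w.r.t. a chosen input."""
    val: float
    dot: float

    def __add__(self, o): return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o): return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.val * o.dot + self.dot * o.val)

def scale(c: float, x: Dual) -> Dual:
    """Scalar constant times a Dual."""
    return Dual(c * x.val, c * x.dot)

def simulate(x0: Dual, n_steps: int = 1000, dt: float = 0.001, k: float = 4.0) -> Dual:
    """Symplectic-Euler integration of x'' = -k x; returns x(T), T = n_steps * dt.

    Because every arithmetic op is overloaded on Dual, the derivative of the
    final state with respect to the initial condition comes out for free.
    """
    x, v = x0, Dual(0.0, 0.0)
    for _ in range(n_steps):
        v = v - scale(k * dt, x)
        x = x + scale(dt, v)
    return x

# Seed the derivative: d x0 / d x0 = 1
out = simulate(Dual(1.0, 1.0))
# Analytic solution: x(T) = x0 * cos(sqrt(k) * T), so x(1) = cos(2) for x0 = 1,
# and d x(T)/d x0 = cos(2) as well.
```

A differentiable hydrodynamics code applies the same principle at scale: the time stepper is built from differentiable operations, so gradients with respect to initial conditions and physical parameters flow through the whole simulation.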
- Mar. 18 (Wed)
-
How LLMs can help particle physicists
- Speaker: Mihoko Nojiri (KEK)
- Time: 5:00 PM JST/KST, 4:00 PM Beijing
- link to the seminar page
-
In this talk, I will discuss the evolving field of applying LLMs to scientific coding. HEP analyses often require lengthy, highly reliable code. We introduce CoLLM, which quickly generates analysis code from LLM prompts. The package includes automatic bug fixing and is rapidly evolving toward code review and refinement. I will also comment on possible applications to other fields and on implications from brain function.
- Jan. 29 (Thu)
-
Status on Generative Unfolding
- Speaker: Sofia Palacios (Rutgers University)
- Time: 5:00 PM JST/KST, 4:00 PM Beijing
- link to the seminar page
-
Generative machine learning has become a powerful tool for unbinned, high-dimensional unfolding at the LHC. This talk highlights recent progress on key open challenges: scaling to hundreds of dimensions, prior-independent parameter estimation, and the path toward fully analysis-ready unfolding.
- Jan. 29 (Thu)
-
Storage Capacity of Perceptron with Variable Selection
- Speaker: Yingying Xu (University of Helsinki)
- Time: 4:00 PM JST/KST, 3:00 PM Beijing
- link to the seminar page
-
A central challenge in machine learning is to distinguish genuine structure from chance correlations in high-dimensional data. In this work, we address this issue for the perceptron, a foundational model of neural computation. Specifically, we investigate the relationship between the pattern load α and the variable selection ratio ρ for which a simple perceptron can perfectly classify P = αN random patterns by optimally selecting M = ρN variables out of N variables. While the Cover–Gardner theory establishes that a random subset of ρN dimensions can separate αN random patterns if and only if α < 2ρ, we demonstrate that optimal variable selection can surpass this bound by developing a method, based on the replica method from statistical mechanics, for enumerating the combinations of variables that enable perfect pattern classification. This not only provides a quantitative criterion for distinguishing true structure in the data from spurious regularities, but also yields the storage capacity of associative memory models with sparse asymmetric couplings.
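The Cover bound quoted above comes from a classical counting argument: of the 2^P dichotomies of P points in general position in N dimensions, exactly C(P, N) = 2 Σ_{k=0}^{N-1} binom(P-1, k) are linearly separable. A short numerical check of that formula (illustrative only; the replica calculation discussed in the talk is a different, far more involved computation):

```python
from math import comb

def cover_fraction(P: int, N: int) -> float:
    """Fraction of the 2**P dichotomies of P points in general position in R^N
    that a perceptron through the origin can realize.
    Cover's function-counting theorem: C(P, N) = 2 * sum_{k<N} binom(P-1, k)."""
    count = 2 * sum(comb(P - 1, k) for k in range(N))
    return count / 2 ** P

# Below capacity (P <= N): every dichotomy is separable.
assert cover_fraction(10, 10) == 1.0
# At P = 2N exactly half the dichotomies are separable,
# which is the alpha = P/N = 2 threshold quoted in the abstract.
assert cover_fraction(20, 10) == 0.5
```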
- Jan. 29 (Thu)
-
Physics of Machine Learning
- Speaker: Gert Aarts (Swansea University)
- Time: 2:30 PM JST/KST, 1:30 PM Beijing
- link to the seminar page
-
In recent years machine learning (ML) has started to make an impact in lattice field theory (LFT), e.g. for the generation of ensembles of configurations. In this talk I will explore potential impact in the opposite direction, i.e. using theoretical physics to understand ML approaches. I will relate stochastic gradient descent to random matrix theory and then make the connection between neural networks and disordered systems, leading to a neural network phase diagram in the plane spanned by hyperparameters. I will conclude with the possible impact of our findings for practical ML applications.
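As a taste of the random-matrix-theory connection: the eigenvalue spectrum of a sample covariance matrix built from a random Gaussian matrix concentrates on the Marchenko–Pastur support [(1 − √q)², (1 + √q)²] with aspect ratio q = N/M. A hedged numpy sketch (the talk's actual analysis of SGD and weight matrices is more subtle than this textbook check):

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 500, 1000                      # matrix shape; q = N/M
q = N / M
W = rng.standard_normal((N, M))
eigvals = np.linalg.eigvalsh(W @ W.T / M)   # sample covariance spectrum

# Marchenko-Pastur predicts support [(1 - sqrt(q))^2, (1 + sqrt(q))^2]
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
print(f"empirical range: [{eigvals.min():.3f}, {eigvals.max():.3f}], "
      f"MP prediction: [{lo:.3f}, {hi:.3f}]")
```

At this size the empirical edges already agree with the prediction to a few percent; deviations of trained-network spectra from this random baseline are one diagnostic of learned structure.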
- Jan. 28 (Wed)
-
Understanding Galactic Dark Matter with Generative Models
- Speaker: Sung Hak Lim (IBS)
- Time: 2:30 PM JST/KST, 1:30 PM Beijing
- link to the seminar page
-
Mapping the Milky Way’s dark matter requires moving beyond traditional, rigid dynamical models. In this talk, generative models — specifically normalizing flows — are used to learn the stellar phase-space distribution directly from Gaia data. This approach enables a flexible, model-independent reconstruction of the Galactic gravitational potential and the local dark matter density. These data-driven techniques provide a promising avenue for handling complex observational biases and for probing the dark sector’s influence on our Galaxy.
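The mechanism behind a normalizing flow is the change-of-variables formula: if z = f(x) with f invertible, then log p_X(x) = log p_Z(f(x)) + log |det ∂f/∂x|. A minimal one-dimensional sketch with a hand-written affine "flow" (real flows, like those fit to Gaia data, stack many learned invertible layers; the fixed affine map here is purely illustrative):

```python
import math

# Model the density of X = mu + sigma * Z, Z ~ N(0, 1),
# using the inverse map f(x) = (x - mu) / sigma as a one-layer "flow".
MU, SIGMA = 2.0, 0.5

def log_prob_base(z: float) -> float:
    """Standard normal log-density."""
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def flow_log_prob(x: float) -> float:
    """Change of variables: log p_X(x) = log p_Z(f(x)) + log |f'(x)|."""
    z = (x - MU) / SIGMA
    log_det_jacobian = -math.log(SIGMA)   # f'(x) = 1 / sigma
    return log_prob_base(z) + log_det_jacobian

def normal_log_prob(x: float, mu: float, sigma: float) -> float:
    """Closed-form N(mu, sigma^2) log-density, for comparison."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma) - 0.5 * math.log(2 * math.pi))
```

Training a flow amounts to parameterizing f with a neural network and maximizing this log-likelihood over data; the phase-space density of stars is then read off from the learned model.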
- Jan. 13 (Tue)
-
Generative AI in Cosmology
- Speaker: Leander Thiele (IPMU)
- Time: 3:00 PM JST/KST, 2:00 PM Beijing
- link to the seminar page
-
Increasing data volumes, pushing to non-linear scales, create opportunities for machine learning in cosmology. A primary challenge is the inverse problem implicitly defined through simulations; neural simulation-based inference is increasingly recognized as a tool for solving it. I will review this technique and present work both on observational data and on methodological development, specifically multi-fidelity inference. In the second part of the talk, I will present recent work on the probabilistic identification of cosmic voids.
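Simulation-based inference in its simplest, non-neural form is rejection ABC: draw parameters from the prior, run the simulator, and keep parameters whose simulated data fall close to the observation. The sketch below is this classical baseline, not the neural methods the talk covers; the toy simulator and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta: float, n: int = 100) -> np.ndarray:
    """Toy forward model: n Gaussian draws with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

def summary(data: np.ndarray) -> float:
    """Summary statistic of a dataset (here just the sample mean)."""
    return float(data.mean())

# "Observed" data generated at a known ground truth
theta_true = 0.7
x_obs = summary(simulator(theta_true))

# Rejection ABC: sample the prior, simulate, accept if summaries are close
prior_draws = rng.uniform(-3.0, 3.0, size=20000)
accepted = [t for t in prior_draws
            if abs(summary(simulator(t)) - x_obs) < 0.05]
posterior_mean = float(np.mean(accepted))
```

Neural SBI replaces the rejection step with a learned density (or density-ratio) estimator, which is what makes the approach scale to the high-dimensional summaries used in cosmology.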
Past Seminars 2025
- Dec. 19 (Fri)
-
Physics-Driven Learning for Solving Inverse Problems in QCD Physics
- Speaker: Lingxiao Wang (RIKEN)
- Time: 5:00 PM JST/KST, 4:00 PM Beijing
- link to the seminar page
-
Discovery in the physical sciences relies on inverse modeling of observations. The combination of deep learning and physics-driven designs is reshaping how we solve inverse problems for extracting physical properties from data. This is particularly relevant for quantum chromodynamics (QCD), where non-trivial symmetries make both data interpretation and computation challenging. In this talk, I will present physics-driven learning from a probabilistic perspective, with a focus on applications in QCD physics. Examples include learning spectral functions and hadron forces from lattice QCD data, reconstructing hadron emission sources from femtoscopy, and extracting the equation of state from neutron-star observations. If time permits, I will also introduce the physics of diffusion models and discuss physics-driven designs that enable expandable and reliable sampling for accelerating simulations.
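The spectral-function problem mentioned here is an ill-posed linear inverse problem: a correlator G(τ) = ∫ K(τ, ω) ρ(ω) dω must be inverted for ρ. A minimal Tikhonov (ridge) regularized inversion in numpy, as a toy stand-in for the physics-driven learning approaches of the talk; the kernel, grids, and mock data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretize G(tau) = integral of K(tau, omega) * rho(omega) d omega
taus = np.linspace(0.05, 1.0, 20)
omegas = np.linspace(0.1, 10.0, 200)
domega = omegas[1] - omegas[0]
K = np.exp(-np.outer(taus, omegas)) * domega     # Laplace-type kernel

# Mock spectral function: a single broad peak at omega = 3
rho_true = np.exp(-0.5 * ((omegas - 3.0) / 0.6) ** 2)
G = K @ rho_true + 1e-8 * rng.standard_normal(len(taus))   # noisy "data"

# Tikhonov-regularized least squares:
#   minimize ||K rho - G||^2 + lam * ||rho||^2
lam = 1e-10
rho_hat = np.linalg.solve(K.T @ K + lam * np.eye(len(omegas)), K.T @ G)

peak_location = omegas[np.argmax(rho_hat)]   # recovered peak position
```

The regularizer is what tames the exponentially ill-conditioned kernel; physics-driven learning methods can be read as replacing this generic penalty with structure learned from, or dictated by, the underlying theory.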