Journal Clubs
📅 Meeting Details
- Frequency: Bi-weekly sessions
- Format: Virtual
- Duration: 60 minutes with Q&A
- Language: English
- Time: Monday 3 PM Beijing, 4 PM JST/KST
- Access Link: See the journal club channel on our Slack.
Past Sessions - 2026
- 2026-04-27
  - Cosmological Model Inference with CMB and SHAP-based Interpretation
  - Parametrized Classifiers for Model Inference
    - Parametrized classifiers for optimal EFT sensitivity (arXiv:2007.10356)
  - Shapley Values for Post-hoc Interpretation
    - Shapley explainability on the data manifold (arXiv:2006.01272)
    - The many Shapley values for model explanation (arXiv:1908.08474)
    - Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features (arXiv:2111.13507)
    - Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models (arXiv:2011.01625)
    - Explaining individual predictions when features are dependent: More accurate approximations to Shapley values (arXiv:1903.10464)
- 2026-04-06
  - Field-Level Inference in Cosmology
    - Focused paper: Field-Level Inference of Primordial Non-Gaussianity with the Quijote Simulation Suite (arXiv:2603.20855)
    - Bayesian physical reconstruction of initial conditions from large scale structure surveys (arXiv:1203.3639)
    - Cosmological inference from Bayesian forward modelling of deep galaxy redshift surveys (arXiv:1808.07496)
    - Bayesian field-level inference of primordial non-Gaussianity using next-generation galaxy surveys (arXiv:2203.08838)
- 2026-03-30
  - Causal chats
- 2026-03-23
  - Agent models for collider physics
    - MadAgents (arXiv:2601.21015)
    - An End-to-end Architecture for Collider Physics and Beyond (arXiv:2603.14553)
    - CoLLM: AI engineering toolbox for end-to-end deep learning in collider analyses (arXiv:2602.06496)
    - LangGraph
    - Agent Skills
  - LLM experience with galaxy studies
- 2026-03-16
  - Discussion on LLMs and Simulation-based Inference in Astrophysics
    - A simulation-based inference of the Milky Way merger history (arXiv:2603.12317)
    - IllustrisTNG
    - Auriga Project
    - Why do we do astrophysics (arXiv:2602.10181)
  - Latent Space and Non-Euclidean Geometry
    - Universal New Physics Latent Space (arXiv:2407.20315)
    - t-SNE
- 2026-03-09
  - Agent models
    - Agentic AI – Physicist Collaboration in Experimental Particle Physics: A Proof-of-Concept Measurement with LEP Open Data (arXiv:2603.05735)
    - The Denario project: Deep knowledge AI agents for scientific discovery (arXiv:2510.26887)
    - MadAgents (arXiv:2601.21015)
    - CoLLM: AI engineering toolbox for end-to-end deep learning in collider analyses (arXiv:2602.06496)
  - Fine-tuning and Low-Rank Adaptation (LoRA)
    - LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685)
    - QLoRA: Efficient Finetuning of Quantized LLMs (arXiv:2305.14314)
    - Hands-On Large Language Models
- 2026-02-23
  - Causal chats
    - Bayesian Cosmic Void Finding with Graph Flows (arXiv:2602.14630)
    - Revealing the Dark Threads of the Cosmic Web (arXiv:2003.04393)
    - AstroAI Asian Network
- 2026-01-05
  - Q&A on AI for Physics and Physics for AI
Past Sessions - 2025
- 2025-12-08
  - Physics, Language Models, and Data
- 2025-11-24
  - Lorentz-equivariant Neural Networks
    - Focused paper: Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant (arXiv:2505.20280)
  - Fundamental Limit of Jet Tagging Performance
    - SURFing to the Fundamental Limit of Jet Tagging (arXiv:2511.15779)
  - Flow-matching with non-uniform dimension data
- 2025-10-27
  - Quantum-Informed Neural Networks
    - QINNs: Quantum-Informed Neural Networks (arXiv:2510.17984)
  - Neural Networks and Renormalization Groups
    - Focused paper: Dynamic neuron approach to deep neural networks: Decoupling neurons for renormalization group analysis (arXiv:2410.00396)
    - The Principles of Deep Learning Theory (arXiv:2106.10165)
  - Unfolding Detector Effects in Collider Physics
    - Generative Unfolding of Jets and Their Substructure (arXiv:2510.19906)
- 2025-09-29
  - Classification Performance vs. Robustness in Jet Tagging
    - Focused paper: The Pareto Frontier of Resilient Jet Tagging (arXiv:2509.19431)
  - Foundation Models for High-Energy Physics
    - Foundation models for high-energy physics (arXiv:2509.21434)
- 2025-09-15
  - Double Descent and Overparameterization in Particle Physics Data
    - Focused paper: Double Descent and Overparameterization in Particle Physics Data (arXiv:2509.01397)
    - Double Descent
    - Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets (arXiv:2201.02177)
  - Other Papers
- 2025-09-01
  - Inductive Bias in Foundation Models and World Models (Continued)
    - Focused paper: What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models (arXiv:2507.06952)
  - Optical Generative Models
    - Optical generative models (arXiv:2410.17970)
  - Foundation Models for Collider Physics - OmniLearn
    - Solving Key Challenges in Collider Physics with Foundation Models (arXiv:2404.16091)
- 2025-08-27 (Wed)
  - Inductive Bias in Foundation Models and World Models
    - Focused paper: What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models (arXiv:2507.06952)
- 2025-08-11
  - Symbolic Regression
    - Focused paper: Angular Coefficients from Interpretable Machine Learning with Symbolic Regression (arXiv:2508.00989)
    - Deep symbolic regression for physics guided by units constraints: toward the automated discovery of physical laws (arXiv:2303.03192)
    - Class Symbolic Regression: Gotta Fit ‘Em All (arXiv:2312.01816)
- 2025-07-14
  - Learning at Criticality
    - Focused paper: Learning-at-Criticality in Large Language Models for Quantum Field Theory and Beyond (arXiv:2506.03703)
- 2025-06-30
  - New Architecture Designs for More Human-like Neural Networks & Dynamic Sparsity
    - Focused paper: Continuous Thought Machines (arXiv:2505.05522)
    - IAFormer: Interaction-Aware Transformer network for collider data analysis (arXiv:2505.03258)
- 2025-06-16
  - Neural Network Compositionality & Mechanistic Interpretability
    - Focused paper: Break It Down: Evidence for Structural Compositionality in Neural Networks (arXiv:2301.10884)
  - Other Papers
    - Interpreting the structure of multi-object representations in vision encoders (arXiv:2406.09067)
    - Tracing Thoughts in Language Models (Anthropic)
    - Mapping the Mind of Language Models (Anthropic)
    - Domain Separation Networks (arXiv:1608.06019)
- 2025-06-02
  - Neural Thermodynamic Laws for Large Language Model Training
    - Exploring how thermodynamic principles can be applied to understand and optimize large language model training.
    - Focused paper: Neural Thermodynamic Laws for LLM Training (arXiv:2505.10559)
  - Spectral Bias of Neural Networks
    - On the Spectral Bias of Neural Networks (arXiv:1806.08734)
    - Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains (arXiv:2006.10739)