Journal Clubs
📅 Meeting Details
- Frequency: Bi-weekly sessions
- Format: Virtual
- Duration: 60 minutes with Q&A
- Language: English
- Time: Mondays, 3 PM Beijing time (4 PM JST/KST)
- Access link: See the journal club channel on our Slack.
Past Sessions - 2026
- 2026-03-16
  - TBA
- 2026-03-09
  - Agent models
    - Agentic AI – Physicist Collaboration in Experimental Particle Physics: A Proof-of-Concept Measurement with LEP Open Data (arXiv:2603.05735)
    - The Denario project: Deep knowledge AI agents for scientific discovery (arXiv:2510.26887)
    - MadAgents (arXiv:2601.21015)
    - CoLLM: AI engineering toolbox for end-to-end deep learning in collider analyses (arXiv:2602.06496)
  - Fine-tuning and Low-Rank Adaptation (LoRA)
    - LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685)
    - QLoRA: Efficient Finetuning of Quantized LLMs (arXiv:2305.14314)
    - Hands-on Large Language Models
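For reference, the core idea of LoRA (arXiv:2106.09685) — freezing the pretrained weight and learning a low-rank additive update — can be sketched in a few lines of NumPy. This is a toy illustration under assumed dimensions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4             # layer dimensions and LoRA rank (assumed for illustration)
alpha = 8.0                            # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))     # pretrained weight, kept frozen
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, initialized to zero

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B receive gradient updates
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted layer is initially identical to the base layer.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of `B` is what makes fine-tuning start exactly from the pretrained model, with only the small matrices `A` and `B` (rank `r` ≪ `d_in`) being updated.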
- 2026-02-23
  - Causal chats
    - Bayesian Cosmic Void Finding with Graph Flows (arXiv:2602.14630)
    - Revealing the Dark Threads of the Cosmic Web (arXiv:2003.04393)
    - AstroAI Asian Network
- 2026-01-05
  - Q&A on AI for Physics and Physics for AI
Past Sessions - 2025
- 2025-12-08
  - Physics, Language Models, and Data
- 2025-11-24
  - Lorentz-equivariant Neural Networks
    - (focused) Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant (arXiv:2505.20280)
  - Fundamental Limit of Jet Tagging Performance
    - SURFing to the Fundamental Limit of Jet Tagging (arXiv:2511.15779)
  - Flow-matching with non-uniform dimension data
- 2025-10-27
  - Quantum-Informed Neural Networks
    - QINNs: Quantum-Informed Neural Networks (arXiv:2510.17984)
  - Neural Networks and Renormalization Groups
    - (focused) Dynamic neuron approach to deep neural networks: Decoupling neurons for renormalization group analysis (arXiv:2410.00396)
    - The Principles of Deep Learning Theory (arXiv:2106.10165)
  - Unfolding Detector Effects in Collider Physics
    - Generative Unfolding of Jets and Their Substructure (arXiv:2510.19906)
- 2025-09-29
  - Classification Performance vs. Robustness in Jet Tagging
    - (focused) The Pareto Frontier of Resilient Jet Tagging (arXiv:2509.19431)
  - Foundation models for high-energy physics
    - Foundation models for high-energy physics (arXiv:2509.21434)
- 2025-09-15
  - Double Descent and Overparameterization in Particle Physics Data
    - (focused) Double Descent and Overparameterization in Particle Physics Data (arXiv:2509.01397)
    - Double Descent
    - Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets (arXiv:2201.02177)
  - Other Papers
- 2025-09-01
  - Inductive Bias in Foundation Models and World Models (Continued)
    - (focused) What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models (arXiv:2507.06952)
  - Optical Generative Models
    - Optical generative models (arXiv:2410.17970)
  - Foundation models for Collider Physics - OmniLearn
    - Solving Key Challenges in Collider Physics with Foundation Models (arXiv:2404.16091)
- 2025-08-27 (Wed)
  - Inductive Bias in Foundation Models and World Models
    - (focused) What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models (arXiv:2507.06952)
- 2025-08-11
  - Symbolic Regression
    - (focused) Angular Coefficients from Interpretable Machine Learning with Symbolic Regression (arXiv:2508.00989)
    - Deep symbolic regression for physics guided by units constraints: toward the automated discovery of physical laws (arXiv:2303.03192)
    - Class Symbolic Regression: Gotta Fit ’Em All (arXiv:2312.01816)
- 2025-07-14
  - Learning at Criticality
    - (focused) Learning-at-Criticality in Large Language Models for Quantum Field Theory and Beyond (arXiv:2506.03703)
- 2025-06-30
  - New Architecture Design for More Human-like NN & Dynamic Sparsity
    - (focused) Continuous Thought Machines (arXiv:2505.05522)
    - IAFormer: Interaction-Aware Transformer network for collider data analysis (arXiv:2505.03258)
- 2025-06-16
  - Neural Network Compositionality & Mechanistic Interpretability
    - (focused) Break It Down: Evidence for Structural Compositionality in Neural Networks (arXiv:2301.10884)
  - Other Papers
    - Interpreting the structure of multi-object representations in vision encoders (arXiv:2406.09067)
    - Tracing Thoughts in Language Models (Anthropic)
    - Mapping the Mind of Language Models (Anthropic)
    - Domain Separation Networks (arXiv:1608.06019)
- 2025-06-02
  - Neural Thermodynamic Laws for Large Language Model Training
    - Exploring how thermodynamic principles can be applied to understand and optimize the training of large language models.
    - (focused) Neural Thermodynamic Laws for LLM Training (arXiv:2505.10559)
  - Spectral Bias of Neural Networks
    - On the Spectral Bias of Neural Networks (arXiv:1806.08734)
    - Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains (arXiv:2006.10739)
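As background for the spectral-bias discussion, the random Fourier feature mapping from arXiv:2006.10739 — which lets coordinate networks fit high-frequency functions — is small enough to sketch in NumPy. The dimensions and frequency scale below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def fourier_features(v, B):
    # gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)]: lift low-dimensional
    # coordinates v into a higher-dimensional embedding before the MLP
    proj = 2 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

B = rng.normal(scale=10.0, size=(128, 2))  # random frequencies; scale sets the bandwidth prior
v = rng.uniform(size=(5, 2))               # e.g. a batch of 2-D pixel coordinates in [0, 1)
phi = fourier_features(v, B)
print(phi.shape)  # (5, 256)
```

The `scale` of the random frequency matrix `B` is the key knob: larger scales counteract the spectral bias toward low frequencies discussed in arXiv:1806.08734, at the cost of noisier fits.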