Conference on Parsimony and Learning (CPAL)
March 2025, Stanford

Oral Sessions at CPAL 2025

A select set of papers from the CPAL 2025 Proceedings Track will be presented as oral talks at the conference. They are listed below by oral session.

Highlight Talks 1

Time: Day 2 (Mar 25) – Tuesday – 10:00 AM to 10:30 AM

Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers

Abhimanyu Rajeshkumar Bambhaniya, Amir Yazdanbakhsh, Suvinay Subramanian, Sheng-Chun Kao, Shivani Agrawal, Utku Evci, Tushar Krishna

Keywords: N:M structured sparsity, sparsity, model compression, attention-based models, sparse training recipe

Improving Neuron-level Interpretability with White-box Language Models

Hao Bai, Yi Ma

Keywords: White-box models, deep learning architectures, neuron-level interpretation

A unified framework for Sparse plus Low-Rank Matrix Decomposition for LLMs

Mehdi Makni, Kayhan Behdin, Zheng Xu, Natalia Ponomareva, Rahul Mazumder

Keywords: model compression, sparse plus low-rank, optimization, inference acceleration, 2:4 sparsity, hardware and system co-design

Approximate Nullspace Augmented Finetuning for Robust Vision Transformers

Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang

Keywords: Robustness, Vision Transformer, Invariance

Highlight Talks 2

Time: Day 2 (Mar 25) – Tuesday – 12:00 PM to 12:30 PM

Closure Discovery for Coarse-Grained Partial Differential Equations Using Grid-based Reinforcement Learning

Jan-Philipp von Bassewitz, Sebastian Kaltenbach, Petros Koumoutsakos

Keywords: Closure Discovery, Inductive Bias, Multi-Agent Reinforcement Learning

The Computational Limits of State-Space Models and Mamba via the Lens of Circuit Complexity

Yifang Chen, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song

Keywords: State-Space Models, Mamba, Circuit Complexity, Computational Limits

Fast John Ellipsoid Computation with Differential Privacy Optimization

Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, Junwei Yu

Keywords: Fast Optimization, Differential Privacy, John Ellipsoid Computation

Sufficient and Necessary Explanations (and What Lies in Between)

Beepul Bharti, Paul Yi, Jeremias Sulam

Keywords: interpretability, explainability

Highlight Talks 3

Time: Day 4 (Mar 27) – Thursday – 10:00 AM to 11:00 AM

Vanishing Feature: Diagnosing Model Merging and Beyond

Xingyu Qu, Samuel Horváth

Keywords: Model Merging, Efficiency, Deep Learning, Efficient Deep Learning

A Case Study of Low Ranked Self-Expressive Structures in Neural Network Representations

Uday Singh Saini, William Shiao, Yahya Sattar, Yogesh Dahiya, Samet Oymak, Evangelos E. Papalexakis

Keywords: Subspace Clustering, Centered Kernel Alignment, Representation Similarity Measures

Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets

Arthur Jacot, Alexandre Kaiser

Keywords: Low-rank bias, NeuralODE, Hamiltonian, Bottleneck structure

You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at Inference Time

Xiaotian Han, Tianlong Chen, Kaixiong Zhou, Zhimeng Jiang, Zhangyang Wang, Xia Hu

Keywords: fairness, weight space, neural network subspace