Conference on Parsimony and Learning (CPAL)
March 2026, Tübingen

Oral Sessions at CPAL 2026

A selection of papers from the CPAL 2026 Proceedings Track will be presented as oral talks at the conference. These talks are listed below under their corresponding oral sessions.

  1. Highlight Talks I – Day 2 (Mar 24) – Tuesday – 3:30 PM to 4:00 PM
  2. Highlight Talks II – Day 3 (Mar 25) – Wednesday – 1:30 PM to 2:00 PM
  3. Highlight Talks III – Day 4 (Mar 26) – Thursday – 1:30 PM to 2:00 PM

Highlight Talks I

Time: Day 2 (Mar 24) – Tuesday – 3:30 PM to 4:00 PM

Matrix Sensing with Kernel Optimal Loss: Robustness and Optimization Landscape

Xinyuan Song, Ziye Ma

Keywords: Matrix sensing, kernel loss function, optimization

Teaching LLMs According to Their Aptitude: Adaptive Switching Between CoT and TIR for Mathematical Problem Solving

Xin Xu, Yan Xu, Tianhao Chen, Yuchen Yan, Chengwu Liu, Zaoyu Chen, Yufei Wang, Yichun Yin, Yasheng Wang, Qun Liu, Lu Yin

Keywords: Large Language Models, math QA, chain-of-thought, tool-integrated reasoning, fine-tuning

Sparse Mixture-of-Experts for Compositional Generalization: Empirical Evidence and Theoretical Foundations of Optimal Sparsity

Jinze Zhao, Peihao Wang, Junjie Yang, Ruisi Cai, Gaowen Liu, Jayanth Srinivasa, Ramana Rao Kompella, Yingbin Liang, Zhangyang Wang

Keywords: Compositional Generalization, Sparsity, Mixture of Experts

Highlight Talks II

Time: Day 3 (Mar 25) – Wednesday – 1:30 PM to 2:00 PM

From Sparse Recovery to Plug-and-Play Priors: Understanding Trade-offs for Stable Recovery with Generalized Projected Gradient Descent

Ali Joundi, Yann Traonmilin, Jean-François Aujol

Keywords: Inverse Problems, Sparse Recovery, Plug-and-Play, Deep Prior, Optimization

Data-Efficient and Robust Trajectory Generation through Pathlet Dictionary Learning

Yuanbo Tang, Yan Tang, Zihui Zhao, Zixuan Zhang, Yang Li

Keywords: trajectory generative model, dictionary learning, sparse representation

Learning in the Null Space: Small Singular Values for Continual Learning

Cuong Anh Pham, Praneeth Vepakomma, Samuel Horváth

Keywords: continual learning, singular value decomposition, small singular values, null space

Highlight Talks III

Time: Day 4 (Mar 26) – Thursday – 1:30 PM to 2:00 PM

Analyzing and Mitigating Model Collapse in Reflow Methods

Huminhao Zhu, Fangyikang Wang, Tianyu Ding, Qing Qu, Zhihui Zhu

Keywords: Model Collapse, Self-training, Synthetic Data, Reflow, Rectified Flow

ROSE: Reordered SparseGPT for More Accurate One-Shot Large Language Models Pruning

Mingluo Su, Huan Wang

Keywords: Large language models, Unstructured pruning, Pruning order

What Scalable Second-Order Information Knows for Pruning at Initialization

Ivo Gollini Navarrete, Nicolas Mauricio Cuadrado, Martin Takáč, Samuel Horváth

Keywords: Pruning, Hessian, One-shot, Initialization, Hutchinson, Fisher