
Proceedings Track: Accepted Papers
Accepted Proceedings Track papers are presented as posters at CPAL 2026. A select number of accepted Proceedings Track papers will be presented as orals; they are labeled below with (Oral). See the full program for the precise time and location of each oral and poster session.
From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent (Oral, Best Paper Award)
Ali Joundi, Yann Traonmilin, Jean-François Aujol
Keywords: Inverse Problems, Sparse Recovery, Plug-and-Play, Deep Prior, Optimization
Efficient Temporal Consistency in Diffusion-Based Video Editing with Adaptor Modules: A Theoretical Framework
Xinyuan Song, Yangfan He, Sida Li, Jianhui Wang, Hongyang He, Xinhang Yuan, Ruoyu Wang, Jiaqi Chen, Keqin Li, Kuan Lu, Menghao Huo, Ziqian Bi, Binxu Li, Pei Liu
Keywords: Adapter-based Methods, Diffusion Models, Video Editing, Temporal Consistency, DDIM Inversion, Prompt Learning, Theoretical Analysis
What Scalable Second-Order Information Knows for Pruning at Initialization (Oral)
Ivo Gollini Navarrete, Nicolas Mauricio Cuadrado, Martin Takáč, Samuel Horváth
Keywords: Pruning, Hessian, One-shot, Initialization, Hutchinson, Fisher
Selective Collaboration for Robust Federated Learning
Nazarii Tupitsa, Samuel Horváth, Martin Takáč, Eduard Gorbunov
Keywords: federated learning, robust aggregation
Generalized Radius and Integrated Codebook Transforms for Differentiable Vector Quantization
Haochen You, Heng Zhang, Hongyang He, Yuqi Li, Baojing Liu
Keywords: Vector Quantization, Discrete Representation Learning, Radius Surrogate, Codebook Transform, Gradient Coupling
Enhancing Long-Context Inference with Context-Position Duo-Mixture
Zhenyu Zhang, Sharath Nittur Sridhar, Zhangyang Wang, Souvik Kundu
Keywords: Long-Context, LLM, Efficiency
FLIPR: FLexible and Interpretable Prediction Regions for time series
Eshant English, Christoph Lippert
Keywords: interpretable regions, time series, conformal prediction
SonoEdit: Null-Space Constrained Knowledge Editing for Pronunciation Correction in LLM-Based TTS
Ayush Pratap Singh, Harshit Singh, Nityanand Mathur, Akshat Mandloi, Sudarshan Kamath
Keywords: Knowledge Editing, Text to Speech, LLMs, Parameter Efficiency
Data-Efficient and Robust Trajectory Generation through Pathlet Dictionary Learning (Oral)
Yuanbo Tang, Yan Tang, Zihui Zhao, Zixuan Zhang, Yang Li
Keywords: trajectory generative model, dictionary learning, sparse representation
Sparse Mixture-of-Experts for Compositional Generalization: Empirical Evidence and Theoretical Foundations of Optimal Sparsity (Oral)
Jinze Zhao, Peihao Wang, Junjie Yang, Ruisi Cai, Gaowen Liu, Jayanth Srinivasa, Ramana Rao Kompella, Yingbin Liang, Zhangyang Wang
Keywords: Compositional Generalization, Sparsity, Mixture of Experts
Teaching LLMs According to Their Aptitude: Adaptive Switching Between CoT and TIR for Mathematical Problem Solving (Oral)
Xin Xu, Yan Xu, Tianhao Chen, Yuchen Yan, Chengwu Liu, Zaoyu Chen, Yufei Wang, Yichun Yin, Yasheng Wang, Qun Liu, Lu Yin
Keywords: Large Language Models, math QA, chain-of-thought, tool-integrated reasoning, fine-tuning
KNIGHT: Knowledge Graph-Driven Multiple-Choice Question Generation with Adaptive Hardness Calibration
Mohammad Amanlou, Erfan Shafiee Moghaddam, Mahdi Nouri, Yasaman Amou Jafary, Farhan Farsi, Behnam Bahrak
Keywords: Multiple-Choice Question Generation, Knowledge Graph, Difficulty Calibration, Question Answering Dataset
Emergence of Auditory Receptive Fields based on Surprise
Yashaswini, Sneha Dash, Sharba Bandyopadhyay
Keywords: Auditory receptive fields, Bayesian surprise, sparse coding, oddball paradigm, predictive inference, autoregressive generative modeling, efficient sensory coding, biologically inspired learning
Simplex Deep Linear Discriminant Analysis
Maxat Tezekbayev, Arman Bolatov, Zhenisbek Assylbekov
Keywords: Deep LDA, Maximum likelihood, Simplex-constrained embeddings
Concept based Ambiguity Resolution in LLMs
Zhibo Hu, Chen Wang, Yanfeng Shu, Hye-young Paik, Liming Zhu
Keywords: Language Ambiguity, Large Language Model, Sparse Autoencoder, Path Kernel
A Stein identity for $q$-Gaussians with bounded support
Sophia Sklaviadis, Thomas Möllenhoff, Mario A. T. Figueiredo, Andre Martins, Mohammad Emtiyaz Khan
Keywords: Generalized Stein identities, elliptical families, bounded-support q-Gaussians
Trainable Bitwise Soft Quantization for Input Feature Compression
Karsten Schrödter, Jan Stenkamp, Nina Herrmann, Fabian Gieseke
Keywords: Soft Quantization, Trainable Quantization, Input Compression, Tiny Machine Learning, Split Inference
GRAIL: Post-hoc Compensation by Linear Reconstruction for Compressed Networks
Wenwu Tang, Dong Wang, Lothar Thiele, Olga Saukh
Keywords: Model Compression, Model Pruning, Model Folding, Model Compensation, LLM, Model Efficiency
Learning of Discretized LSTMs
Nikolaus Kopp, Franz Pernkopf
Keywords: probabilistic, QAT, discrete LSTM, Gumbel-Softmax
Effective Learning for Small Reasoning Models: An Empirical Study on 0.5B Reasoning LLMs
Xialie Zhuang, Peixian Ma, Zhikai Jia, Zane Cao, Shiwei Liu
Keywords: Small Reasoning Model, Reasoning, Reinforcement Learning
Byzantine-Robust Optimization under $(L_0,L_1)$-Smoothness
Arman Bolatov, Samuel Horváth, Martin Takáč, Eduard Gorbunov
Keywords: byzantine-robust optimization, federated learning, generalized smoothness, normalized SGD
Dynamic SFT with Structured Measurements: Fast Queries, Fast Updates, Provable Guarantees
Yang Cao, Zhao Song
Keywords: sparse Fourier transform
Superclass-Guided Representation Disentanglement for Spurious Correlation Mitigation
Chenruo Liu, Hongjun Liu, Zeyu Lai, Yiqiu Shen, Chen Zhao, Qi Lei
Keywords: Spurious Correlation, Group Robustness, Domain Generalization
Beyond Greedy Decoding: Model-Specific Strategy Selection via Multi-faceted Uncertainty Decomposition
Kwangje Baeg, Yubin Lim
Keywords: Uncertainty Decomposition, Adaptive Decoding, Model Heterogeneity, Behavioral Clustering, Instruction-Tuned Models
Can Less Be More? Benchmarking Lightweight Models Against State-of-the-Art Deep Learning Architectures for Deployable Seizure Detection
Isaiah Essien, Donna-lee Ginsberg, Jesse Thornburg
Keywords: Parsimonious Learning, Mobile Health, Seizure Detection, TensorFlow Lite, Deep Learning, Resource-Constrained Deployment, Global Health Equity
ERC-SVD: Error-Controlled SVD for Large Language Model Compression
Haolei Bai, Siyong Jian, Tuo Liang, Yu Yin, Huan Wang
Keywords: Model Compression, SVD, Large Language Models
FocusDC: Real-World Scene Infusion for Robust Dataset Condensation
Youbing Hu, Yun Cheng, Olga Saukh, Firat Ozdemir, Anqi Lu, Zhiqiang Cao, Min Zhang, Zhijun Li
Keywords: Dataset Distillation and Condensation, Vision Transformer
Scalable LLM Reasoning Acceleration with Low-rank Distillation
Harry Dong, Bilge Acun, Beidi Chen, Yuejie Chi
Keywords: large language model, efficiency, distillation, reasoning, scaling, low-rank, inference
Sparsity-Aware Prompt Tuning: A Simple and Effective Way to Fine-tune High-Sparsity LLMs
Yuxin Zhang, Weizhong Huang, Yuexiao Ma, Yunshan Zhong, Xiawu Zheng, Rongrong Ji
Keywords: Large language models, Network Pruning
(PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork
Tianjin Huang, Yong Tao, Meng Fang, Li Shen, Fan Liu, Yulong Pei, Mykola Pechenizkiy, Tianlong Chen
Keywords: Structure Pruning, Visual Prompt, Recurrent HyperNetwork
Learning in the Null Space: Small Singular Values for Continual Learning (Oral)
Cuong Anh Pham, Praneeth Vepakomma, Samuel Horváth
Keywords: continual learning, singular value decomposition, small singular values, null space
Beyond In-Distribution Success: Scaling Curves of CoT Granularity for Language Model Generalization
Ru Wang, Wei Huang, Selena Song, Haoyu Zhang, Qian Niu, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
Keywords: Chain of Thought, Scaling Curve, Out-of-Distribution Generalization, Sample Efficiency
Deep Neural Regression Collapse
Akshay Rangamani, Altay Unal
Keywords: Neural Collapse, Low Rank, Neural Regression Collapse
Optimal $k$-Discretization Learning
Tong Wang, Zhangyang Wang
Keywords: Clustering
MMA: Benchmarking Multi-Modal Large Language Models in Ambiguity Contexts
Ru Wang, Selena Song, Yuquan Wang, Liang Ding, Mingming Gong, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
Keywords: Multi-Modal Large Language Model, Ambiguity, Benchmark, Dataset
Token-Aware Representation Augmentation for Fine-Grained Semi-Supervised Learning
Hongyang He, Yan Zhong, Xinyuan Song, Daizong Liu, Victor Sanchez
Keywords: Semi-supervised learning, FixMatch, consistency regularization, token-aware masking, token-level augmentation, high-confidence token suppression, feature diversity
Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models
Elif Ceren Gok Yildirim, Murat Onur Yildirim, Joaquin Vanschoren
Keywords: continual learning, parameter efficient, foundation models
Matrix Sensing with Kernel Optimal Loss: Robustness and Optimization Landscape (Oral)
Xinyuan Song, Ziye Ma
Keywords: Matrix sensing, kernel loss function, optimization
Symbiotic Cooperation for Web Agents: Harnessing Complementary Strengths of Large and Small LLMs
Ruichen Zhang, Mufan Qiu, Zhen Tan, Mohan Zhang, Xiaopeng Lu, Jie Peng, Kaidi Xu, Leandro Z. Agudelo, Peter Zhenghao Qian, Tianlong Chen
Keywords: LLM, Agent, Knowledge Distillation, Web Agent, Symbiotic Cooperation, Privacy Preservation, Hybrid Mode
Prompt Stability Matters: Evaluating and Optimizing Auto-Generated Prompt in General-Purpose Systems
Ke Chen, Xucheng Yu, Yufei Zhou, Haohan Wang
Keywords: Prompt Stability, Prompt Evaluation, Multi-Agent System, General-Purpose System, Prompt Auto-Generation, Prompt Optimization
Stochastic Unrolled Neural Networks
Samar Hadou, Navid NaderiAlizadeh, Alejandro Ribeiro
Keywords: unrolled optimization, learning to learn, deep unfolding, interpretable deep architecture, constrained learning
Analyzing and Mitigating Model Collapse in Reflow Methods (Oral)
Huminhao Zhu, Fangyikang Wang, Tianyu Ding, Qing Qu, Zhihui Zhu
Keywords: Model Collapse, Self-training, Synthetic Data, Reflow, Rectified Flow
Parameter-Efficient Distributional RL via Normalizing Flows and a Geometry-Aware Cramér Surrogate
Simo Alami Chehboune, Rim Kaddah, Marie-Paule Cani, Jesse Read
Keywords: Distributional Reinforcement Learning, Generative models, Deep Learning, Optimal Transport
LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs
Erik Schultheis, Dan Alistarh
Keywords: consumer GPU, quantized training
ShapLoRA: Allocation of Low-rank Adaptation on Large Language Models via Shapley Value Inspired Importance Estimation
Colin Zhao, Qinghua Yao, Xinyuan Song, Wei Zhu
Keywords: LLM, LoRA
Lattice-Based Vector Quantization for Low-Bit Quantization-Aware Training
Rishika Kohli, Soma S Dhavala, Shaifu Gupta, Manoj Singh Gaur
Keywords: compression, quantization, pruning, deep learning, vector quantization, quantization aware training, post training quantization, BERT
Cannistraci-Hebb Training with N:M Semi-Structured Sparsity for Pre-Training and Re-Training
Jiaqing Lyu, Ruijie Wang, Kangyou Bao, Yingtao Zhang, Carlo Vittorio Cannistraci
Keywords: Dynamic Sparse Training, Semi-Structured Sparsity, LLM, ViT
SPIKE: Sparse Koopman Regularization for Physics-Informed Neural Networks
Jose Marie Antonio Miñoza
Keywords: Physics-Informed Neural Networks, Koopman Operator, Out-Of-Distribution Generalization, Dynamical Systems
Enhancing Low-Cost Video Editing with Lightweight Adaptors and Temporal-Aware Inversion
Yangfan He, Sida Li, Jianhui Wang, Xinyuan Song, Kun Li, Xinhang Yuan, Kuan Lu, Menghao Huo, Jingqun Tang, Yi Xin, Jiaqi Chen, Keqin Li, Miao Zhang, Xueqian Wang
Keywords: Text-to-Image (T2I) Generation, Diffusion Models, Text-to-Video (T2V) Editing, Temporal Consistency, Spatial Consistency
Panza: Investigating the Feasibility of Fully-Local Personalized Text Generation
Armand Mihai Nicolicioiu, Eugenia Iofinova, Andrej Jovanovic, Eldar Kurtic, Mahdi Nikdan, Andrei Panferov, Ilia Markov, Nir N Shavit, Dan Alistarh
Keywords: LLMs, PEFT, LoRA, personalization, efficient ML
AlphaFormer: End-to-End Symbolic Regression of Alpha Factors with Transformers
Haotong Huang, Jie Peng, Zezhen Ding, Pingzhi Li, Tianlong Chen
Keywords: Symbolic Regression, Alpha Mining, Time Series Generative Modeling
ROSE: Reordered SparseGPT for More Accurate One-Shot Large Language Models Pruning (Oral)
Mingluo Su, Huan Wang
Keywords: Large language models, Unstructured pruning, Pruning order
Improving Medical Visual Reinforcement Fine-Tuning via Perception and Reasoning Augmentation
Guangjing Yang, Zhangyuan Yu, Ziyuan Qin, Xinyuan Song, Huahui Yi, Qingbo Kang, Jun Gao, Yiyue Li, Chenlin Du, Qicheng Lao
Keywords: Reinforcement Fine-Tuning (RFT), Medical Vision-Language Models, Reward Design, Perception-Reasoning Augmentation, Visual Reinforcement Learning, Medical Image Understanding
Semantic Homogeneity As Demonstration: Batch-Structured Semi-Supervised In-Context Learning for Natural Language Understanding
Cheng Chen, Yuangang Pan, Ivor Tsang
Keywords: In-Context Learning, Natural Language Understanding, Prompt Engineering / Prompting, Aggregate Ranking