Conference on Parsimony and Learning (CPAL)
January 2024, HKU

Proceedings Track: Accepted Papers

Presentation Format

Accepted papers will be presented in one of six oral sessions during the conference.

Each presentation is ten minutes long, followed by two minutes for Q&A.

Sessions are numbered in chronological order, and presentations within each session will be given in the order listed below. See the full program for the precise time and location of each oral session.

Oral Session 1

Time: Day 1 (Jan 3) – Wednesday – 2:30 PM to 3:30 PM

1. Emergence of Segmentation with Minimalistic White-Box Transformers

Yaodong Yu, Tianzhe Chu, Shengbang Tong, Ziyang Wu, Druv Pai, Sam Buchanan, Yi Ma

Keywords: white-box transformer, emergence of segmentation properties

2. NeuroMixGDP: A Neural Collapse-Inspired Random Mixup for Private Data Release

Donghao Li, Yang Cao, Yuan Yao

Keywords: Neural Collapse, Differential privacy, Private data publishing, Mixup

3. HRBP: Hardware-friendly Regrouping towards Block-based Pruning for Sparse CNN Training

Haoyu Ma, Chengming Zhang, Lizhi Xiang, Xiaolong Ma, Geng Yuan, Wenkai Zhang, Shiwei Liu, Tianlong Chen, Dingwen Tao, Yanzhi Wang, Zhangyang Wang, Xiaohui Xie

Keywords: efficient training, sparse training, fine-grained structured sparsity, regrouping algorithm

4. Jaxpruner: A Concise Library for Sparsity Research

Joo Hyung Lee, Wonpyo Park, Nicole Elyse Mitchell, Jonathan Pilault, Johan Samir Obando Ceron, Han-Byul Kim, Namhoon Lee, Elias Frantar, Yun Long, Amir Yazdanbakhsh, Woohyun Han, Shivani Agrawal, Suvinay Subramanian, Xin Wang, Sheng-Chun Kao, Xingyao Zhang, Trevor Gale, Aart J.C. Bik, Milen Ferev, Zhonglin Han, Hong-Seok Kim, Yann Dauphin, Gintare Karolina Dziugaite, Pablo Samuel Castro, Utku Evci

Keywords: jax, sparsity, pruning, quantization, sparse training, efficiency, library, software

5. How to Prune Your Language Model: Recovering Accuracy on the "Sparsity May Cry" Benchmark

Eldar Kurtic, Torsten Hoefler, Dan Alistarh

Keywords: pruning, deep learning, benchmarking

Oral Session 2

Time: Day 2 (Jan 4) – Thursday – 11:20 AM to 12:20 PM

1. Efficiently Disentangle Causal Representations

Yuanpeng Li, Joel Hestness, Mohamed Elhoseiny, Liang Zhao, Kenneth Church

Keywords: causal representation learning

2. Unsupervised Learning of Structured Representation via Closed-Loop Transcription

Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, Yi Ma

Keywords: Unsupervised/Self-supervised Learning, Closed-Loop Transcription

3. An Adaptive Tangent Feature Perspective of Neural Networks

Daniel LeJeune, Sina Alemohammad

Keywords: adaptive, kernel learning, tangent kernel, neural networks, low rank

4. Sparse Activations with Correlated Weights in Cortex-Inspired Neural Networks

Chanwoo Chun, Daniel Lee

Keywords: Correlated weights, Biological neural network, Cortex, Neural network gaussian process, Sparse neural network, Bayesian neural network, Generalization theory, Kernel ridge regression, Deep neural network, Random neural network

5. Exploring Minimally Sufficient Representation in Active Learning through Label-Irrelevant Patch Augmentation

Zhiyu Xue, Yinlong Dai, Qi Lei

Keywords: Active Learning, Data Augmentation, Minimally Sufficient Representation

Oral Session 3

Time: Day 2 (Jan 4) – Thursday – 2:30 PM to 3:30 PM

1. Investigating the Catastrophic Forgetting in Multimodal Large Language Model Fine-Tuning

Yuexiang Zhai, Shengbang Tong, Xiao Li, Mu Cai, Qing Qu, Yong Jae Lee, Yi Ma

Keywords: Multimodal LLM, Supervised Fine-Tuning, Catastrophic Forgetting

2. WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting

Xinyu Gong, Li Yin, Juan-Manuel Perez-Rua, Zhangyang Wang, Zhicheng Yan

Keywords: few-shot object detection

3. Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates

Murat Onur Yildirim, Elif Ceren Gok, Ghada Sokar, Decebal Constantin Mocanu, Joaquin Vanschoren

Keywords: continual learning, sparse neural networks, dynamic sparse training

4. FIXED: Frustratingly Easy Domain Generalization with Mixup

Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie

Keywords: Domain generalization, Data Augmentation, Out-of-distribution generalization

5. Domain Generalization via Nuclear Norm Regularization

Zhenmei Shi, Yifei Ming, Ying Fan, Frederic Sala, Yingyu Liang

Keywords: Domain Generalization, Nuclear Norm, Deep Learning

Oral Session 4

Time: Day 3 (Jan 5) – Friday – 11:20 AM to 12:20 PM

1. Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction

Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick

Keywords: Sparse Training, Space-time Co-efficiency, Acceleration, Stability, Gradient Correction

2. Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds

Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung

Keywords: Computational Neuroscience, Neural Manifolds, Neural Geometry, Representational Geometry, Biologically inspired vision models, Neuro-AI

3. Decoding Micromotion in Low-dimensional Latent Spaces from StyleGAN

Qiucheng Wu, Yifan Jiang, Junru Wu, Kai Wang, Eric Zhang, Humphrey Shi, Zhangyang Wang, Shiyu Chang

Keywords: generative model, low-rank decomposition

4. Sparse Fréchet sufficient dimension reduction via nonconvex optimization

Jiaying Weng, Chenlu Ke, Pei Wang

Keywords: Fréchet regression, minimax concave penalty, multitask regression, sufficient dimension reduction, sufficient variable selection

5. Less is More – Towards parsimonious multi-task models using structured sparsity

Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki

Keywords: Multi-task learning, structured sparsity, group sparsity, parameter pruning, semantic segmentation, depth estimation, surface normal estimation

Oral Session 5

Time: Day 3 (Jan 5) – Friday – 2:30 PM to 3:30 PM

1. Deep Self-expressive Learning

Chen Zhao, Chun-Guang Li, Wei He, Chong You

Keywords: Self-Expressive Model, Subspace Clustering, Manifold Clustering

2. PC-X: Profound Clustering via Slow Exemplars

Yuangang Pan, Yinghua Yao, Ivor Tsang

Keywords: Deep clustering, interpretable machine learning, Optimization

3. Piecewise-Linear Manifolds for Deep Metric Learning

Shubhang Bhatnagar, Narendra Ahuja

Keywords: Deep metric learning, Unsupervised representation learning

4. Algorithm Design for Online Meta-Learning with Task Boundary Detection

Daouda Sow, Sen Lin, Yingbin Liang, Junshan Zhang

Keywords: online meta-learning, task boundary detection, domain shift, dynamic regret, out of distribution detection

5. HARD: Hyperplane ARrangement Descent

Tianjiao Ding, Liangzu Peng, Rene Vidal

Keywords: hyperplane clustering, subspace clustering, generalized principal component analysis

Oral Session 6

Time: Day 3 (Jan 5) – Friday – 4:00 PM to 5:00 PM

1. Closed-Loop Transcription via Convolutional Sparse Coding

Xili Dai, Ke Chen, Shengbang Tong, Jingyuan Zhang, Xingjian Gao, Mingyang Li, Druv Pai, Yuexiang Zhai, Xiaojun Yuan, Heung-Yeung Shum, Lionel Ni, Yi Ma

Keywords: Convolutional Sparse Coding, Inverse Problem, Closed-Loop Transcription

2. Leveraging Sparse Input and Sparse Models: Efficient Distributed Learning in Resource-Constrained Environments

Emmanouil Kariotakis, Grigorios Tsagkatakis, Panagiotis Tsakalides, Anastasios Kyrillidis

Keywords: sparse neural network training, efficient training

3. Cross-Quality Few-Shot Transfer for Alloy Yield Strength Prediction: A New Materials Science Benchmark and A Sparsity-Oriented Optimization Framework

Xuxi Chen, Tianlong Chen, Everardo Yeriel Olivares, Kate Elder, Scott McCall, Aurelien Perron, Joseph McKeown, Bhavya Kailkhura, Zhangyang Wang, Brian Gallagher

Keywords: AI4Science, sparsity, bi-level optimization

4. Deep Leakage from Model in Federated Learning

Zihao Zhao, Mengen Luo, Wenbo Ding

Keywords: Federated learning, distributed learning, privacy leakage

5. Image Quality Assessment: Integrating Model-centric and Data-centric Approaches

Peibei Cao, Dingquan Li, Kede Ma

Keywords: Learning-based IQA, model-centric IQA, data-centric IQA, sampling-worthiness