Xiaohan Chen
DAMO Academy, Alibaba Group
Verified email at utexas.edu
Title · Cited by · Year
Plug-and-play methods provably converge with properly trained denoisers
E Ryu, J Liu, S Wang, X Chen, Z Wang, W Yin
International Conference on Machine Learning, 5546-5557, 2019
Cited by 392 · 2019
Can we gain more from orthogonality regularizations in training deep networks?
N Bansal, X Chen, Z Wang
Advances in Neural Information Processing Systems 31, 2018
Cited by 363 · 2018
Drawing early-bird tickets: Towards more efficient training of deep networks
H You, C Li, P Xu, Y Fu, Y Wang, X Chen, RG Baraniuk, Z Wang, Y Lin
arXiv preprint arXiv:1909.11957, 2019
Cited by 271 · 2019
Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds
X Chen, J Liu, Z Wang, W Yin
Advances in Neural Information Processing Systems 31, 2018
Cited by 263 · 2018
Learning to optimize: A primer and a benchmark
T Chen, X Chen, W Chen, H Heaton, J Liu, Z Wang, W Yin
Journal of Machine Learning Research 23 (189), 1-59, 2022
Cited by 214 · 2022
ALISTA: Analytic weights are as good as learned weights in LISTA
J Liu, X Chen, Z Wang, W Yin
International Conference on Learning Representations (ICLR), 2019
Cited by 207 · 2019
More ConvNets in the 2020s: Scaling up kernels beyond 51x51 using sparsity
S Liu, T Chen, X Chen, X Chen, Q Xiao, B Wu, T Kärkkäinen, ...
arXiv preprint arXiv:2207.03620, 2022
Cited by 154 · 2022
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
Neural Information Processing Systems, 2021
Cited by 117 · 2021
The unreasonable effectiveness of random pruning: Return of the most naive baseline for sparse training
S Liu, T Chen, X Chen, L Shen, DC Mocanu, Z Wang, M Pechenizkiy
arXiv preprint arXiv:2202.02643, 2022
Cited by 104 · 2022
E2-Train: Training state-of-the-art CNNs with over 80% energy savings
Y Wang, Z Jiang, X Chen, P Xu, Y Zhao, Y Lin, Z Wang
Advances in Neural Information Processing Systems 32, 2019
Cited by 98 · 2019
EarlyBERT: Efficient BERT training via early-bird lottery tickets
X Chen, Y Cheng, S Wang, Z Gan, Z Wang, J Liu
arXiv preprint arXiv:2101.00063, 2020
Cited by 88 · 2020
ShiftAddNet: A hardware-inspired deep network
H You, X Chen, Y Zhang, C Li, S Li, Z Liu, Z Wang, Y Lin
Advances in Neural Information Processing Systems 33, 2771-2783, 2020
Cited by 87 · 2020
Federated dynamic sparse training: Computing less, communicating less, yet learning better
S Bibikar, H Vikalo, Z Wang, X Chen
Proceedings of the AAAI Conference on Artificial Intelligence 36 (6), 6080-6088, 2022
Cited by 86 · 2022
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
X Ma, G Yuan, X Shen, T Chen, X Chen, X Chen, N Liu, M Qin, S Liu, ...
Neural Information Processing Systems, 2021
Cited by 60 · 2021
Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
arXiv preprint arXiv:2106.14568, 2021
Cited by 54 · 2021
SmartExchange: Trading higher-cost memory storage/access for lower-cost computation
Y Zhao, X Chen, Y Wang, C Li, H You, Y Fu, Y Xie, Z Wang, Y Lin
2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture …, 2020
Cited by 51 · 2020
Learning a minimax optimizer: A pilot study
J Shen, X Chen, H Heaton, T Chen, J Liu, W Yin, Z Wang
International Conference on Learning Representations, 2020
Cited by 30 · 2020
The Elastic Lottery Ticket Hypothesis
X Chen, Y Cheng, S Wang, Z Gan, J Liu, Z Wang
Neural Information Processing Systems, 2021
Cited by 27 · 2021
Safeguarded learned convex optimization
H Heaton, X Chen, Z Wang, W Yin
Proceedings of the AAAI Conference on Artificial Intelligence 37 (6), 7848-7855, 2023
Cited by 26 · 2023
Hyperparameter Tuning is All You Need for LISTA
X Chen, J Liu, Z Wang, W Yin
Neural Information Processing Systems, 2021
Cited by 24 · 2021
Articles 1–20