Tianlong Chen

Incoming Assistant Professor
Computer Science, The University of North Carolina at Chapel Hill
Postdoctoral Researcher
CSAIL@MIT - BMI@Harvard - Broad Institute of MIT and Harvard

tianlong@mit.edu

Bio

Tianlong Chen received his Ph.D. in Electrical and Computer Engineering from the University of Texas at Austin, TX, USA, in 2023. He will start as an Assistant Professor of Computer Science at The University of North Carolina at Chapel Hill in Fall 2024. Until then, he is a Postdoctoral Researcher at the Massachusetts Institute of Technology (CSAIL@MIT), Harvard University (BMI@Harvard), and the Broad Institute of MIT & Harvard (2023-2024).

His research focuses on building accurate, trustworthy, and efficient machine learning systems. His most recent work addresses (A) important machine learning problems, including sparsity, robustness, learning to optimize, graph learning, and diffusion models; and (B) interdisciplinary scientific challenges in bioengineering and quantum computing. He received the IBM Ph.D. Fellowship, the Adobe Ph.D. Fellowship, the Graduate Dean's Prestigious Fellowship, and the Best Paper Award at the inaugural Learning on Graphs (LoG) Conference 2022.

I am looking for highly motivated students for RA/TA positions, externships, internships, and visiting-student opportunities. Interested candidates are strongly encouraged to contact me by email with a resume and transcripts.

News

Sep., 2023    Two NeurIPS'23 papers accepted - Essential Sparsity in LLM + LLM Heavy-Hitter Oracle.
Jul., 2023    I started my postdoc at CSAIL@MIT and BMI@Harvard.
Jul., 2023    Three ICCV'23 papers accepted - Adaptive Multi-Task Vision MoE + Robust MoE + Generalizable NeRF w. MoE.
Jul., 2023    One QCE'23 paper accepted - Sparse Circuit Design for Quantum Computing.
Jun., 2023    I received the 2023 AdvML Rising Star Award. Many thanks for the recognition.
May, 2023    One ACL'23 paper accepted - Sparse LLM Tuning.
May, 2023    Three ICML'23 papers accepted - Instant Soup (Oral) + Graph Ladling + L2O Game.
May, 2023    I received my Ph.D. from ECE@UT Austin. I deeply appreciate all the support and help from my family, my advisor (Prof. Atlas Wang), collaborators, and friends!

UNITES Lab

The University of North Carolina AI Trustworthiness, Efficiency, and Science (UNITES) Group will be an active research lab at UNC Chapel Hill. Our research interests span artificial intelligence (AI), machine learning (ML), optimization, computer vision, natural language processing, and data science, with two major focuses: (A) establishing robust and efficient AI systems, and (B) bridging the gap between AI and societal & scientific challenges. Current students are listed below.

Kaixin Zheng (Summer 2023) [Remote]
Silin Cai (Summer 2023) [Remote]

Publications

Full publications on Google Scholar.
* indicates authors with equal contribution. † indicates my students or interns.

AdaMV-MoE: Adaptive Multi-Task Vision Mixture-of-Experts

Tianlong Chen, Xuxi Chen, Xianzhi Du, Abdullah Rashwan, Fan Yang, Huizhong Chen, Zhangyang Wang, Yeqing Li

ICCV'23: International Conference on Computer Vision

Robust Mixture-of-Expert Training for Convolutional Neural Networks

Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu

ICCV'23: International Conference on Computer Vision

GNT-MOVE: Generalizable NeRF Transformer with Mixture-of-View-Experts

Wenyan Cong, Hanxue Liang, Peihao Wang, Zhiwen Fan, Tianlong Chen, Mukund Varma, Yi Wang, Zhangyang Wang

ICCV'23: International Conference on Computer Vision

QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits

Tianlong Chen, Zhenyu Zhang, Hanrui Wang, Jiaqi Gu, Zirui Li, David Z. Pan, Frederic Chong, Song Han, Zhangyang Wang

QCE'23: International Conference on Quantum Computing and Engineering

DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models

Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng

ACL'23: Annual Meeting of the Association for Computational Linguistics

Instant Soup: Cheap Pruning Ensemble in A Single Pass Can Draw Lottery Tickets from Large Models

Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang

ICML'23: International Conference on Machine Learning

Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication

Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang

ICML'23: International Conference on Machine Learning

Learning to Optimize Differential Games

Xuxi Chen, Nelson Vadori, Tianlong Chen, Zhangyang Wang

ICML'23: International Conference on Machine Learning

Fundamental Sparse Models and Algorithms

The Lottery Ticket Hypothesis for Pre-trained BERT Networks

Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin

NeurIPS'20: Conference on Neural Information Processing Systems

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!

Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang

ICLR'23: International Conference on Learning Representations

Data-driven Discovery of Optimization Algorithms

Learning to Optimize: A Primer and A Benchmark

(α-β) Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin

JMLR'22: Journal of Machine Learning Research

Scalable Learning to Optimize: A Learned Optimizer Can Train Big Models

Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Ahmed Awadallah, Zhangyang Wang

ECCV'22: European Conference on Computer Vision

Responsible and Reliable Machine Learning

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang

CVPR'20: Conference on Computer Vision and Pattern Recognition

Linearity Grafting: How Neuron Pruning Helps Certifiable Robustness

Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang

ICML'22: International Conference on Machine Learning

Scalable Graph Neural Networks

Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study

Tianlong Chen, Kaixiong Zhou, Keyu Duan, Wenqing Zheng, Peihao Wang, Xia Hu, Zhangyang Wang

TPAMI'22: IEEE Transactions on Pattern Analysis and Machine Intelligence

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu

(Best Paper Award) LOG'22: Learning on Graphs Conference

Graph Contrastive Learning with Augmentations

Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen

NeurIPS'20: Conference on Neural Information Processing Systems

AI4Science

HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing

Tianlong Chen, Chengyue Gong, Daniel Jesus Diaz, Xuxi Chen, Jordan Tyler Wells, Qiang Liu, Zhangyang Wang, Andrew Ellington, Alex Dimakis, Adam Klivans

ICLR'23: International Conference on Learning Representations

QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits

Tianlong Chen, Zhenyu Zhang, Hanrui Wang, Jiaqi Gu, Zirui Li, David Z. Pan, Frederic Chong, Song Han, Zhangyang Wang

QCE'23: International Conference on Quantum Computing and Engineering

Vitæ

Full Resume in PDF.

More About Me

• I am a big fan of Pokémon. Playing Pokémon Go is one of my daily activities. Join me, catch the Pokémon in North Carolina, and be a good Pokémon trainer!
• I also enjoy Hip-Hop and Country music. Air (艾热) is one of my favorite Chinese Hip-Hop stars.

This website was built with Jekyll, based on a template by Martin Saveski.