Hong Huang /ˈhɒŋ ˈhwaŋ/ 黄弘

I am currently a Ph.D. candidate at City University of Hong Kong (CityU), supervised by Prof. Dapeng Oliver Wu. I obtained my M.S. degree from the University of Florida (UF), advised by Prof. Dapeng Oliver Wu and Prof. Ruogu Fang, and my B.E. degree from Shanghai Jiao Tong University (SJTU).

Research Interests

My research focuses on model acceleration/compression, with the overarching goal of AI democratization: making powerful AI accessible to everyone.

I lead the FedPruning Research Group, which focuses on cutting-edge research in edge computing and model compression. The group is dedicated to helping beginners launch their research careers and currently comprises 15+ junior Ph.D. and M.S. students. We are looking for self-motivated students to join us (minimum requirement: familiarity with deep learning and PyTorch).

News

  • 2025-11: I was selected as a DAAD AINeT fellow for the Postdoc-NeT-AI 11/2025.
  • 2025-10: Our paper “Tequila: Trapping-free Ternary Quantization for Large Language Models” was released on arXiv and submitted to ICLR 2026 (score: 8 6 6 6).
  • 2025-10: I received the NeurIPS 2025 Travel Award.
  • 2025-09: Our paper “FedRTS: Federated Robust Pruning via Combinatorial Thompson Sampling” was accepted by NeurIPS 2025.
  • 2025-08: I received the Research Tuition Scholarship from CityU.
  • 2025-05: Our paper “Quaff: Quantized Parameter-Efficient Fine-Tuning under Outlier Spatial Stability Hypothesis” was accepted by ACL 2025.

Selected Publications

  • Hong Huang, Decheng Wu, Rui Cen, Guanghua Yu, Zonghang Li, Kai Liu, Jianchen Zhu, Peng Chen, Xue Liu, and Dapeng Wu. “Tequila: Trapping-free Ternary Quantization for Large Language Models.” Submitted to ICLR 2026 (score: 8 6 6 6).
  • Hong Huang and Dapeng Wu. “Quaff: Quantized Parameter-Efficient Fine-Tuning under Outlier Spatial Stability Hypothesis.” ACL 2025.
  • Hong Huang, Hai Yang, Yuan Chen, Jiaxun Ye, and Dapeng Wu. “FedRTS: Federated Robust Pruning via Combinatorial Thompson Sampling.” NeurIPS 2025.
  • Hong Huang, Weiming Zhuang, Chen Chen, and Lingjuan Lyu. “FedMef: Towards Memory-efficient Federated Dynamic Pruning.” CVPR 2024.
  • Hong Huang, Lan Zhang, Chaoyue Sun, Ruogu Fang, Xiaoyong Yuan, and Dapeng Wu. “Distributed Pruning Towards Tiny Neural Networks in Federated Learning.” ICDCS 2023.