[ICDCS 2023] Distributed Pruning Towards Tiny Neural Networks in Federated Learning.

This paper proposes FedTiny, a distributed pruning framework for federated learning that produces specialized tiny models for memory- and compute-constrained devices. FedTiny achieves 85.23% top-1 accuracy using only 0.014× the FLOPs and 0.03× the memory footprint of ResNet18, outperforming the best baseline, which reaches 82.62% accuracy at 0.34× FLOPs and 0.51× memory footprint.
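To give a feel for the kind of compression involved, below is a minimal sketch of unstructured magnitude pruning, which zeroes out all but a small fraction of the largest-magnitude weights. This is a generic illustration, not FedTiny's actual algorithm; the function name and the 3% keep ratio are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, keep_ratio):
    """Zero out all but the largest-magnitude `keep_ratio` fraction of weights.

    Illustrative sketch only; not the FedTiny pruning procedure.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(round(keep_ratio * flat.size)))
    # Threshold = magnitude of the k-th largest weight.
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # stand-in for one layer's weights
pruned, mask = magnitude_prune(w, keep_ratio=0.03)
print(f"kept {int(mask.sum())} of {mask.size} weights")
```

Keeping roughly 3% of weights mirrors the order of memory reduction reported above, though the paper's framework performs pruning in a distributed, federated setting rather than on a single dense tensor.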

Paper | Code