Publications
Towards sparsified federated neuroimaging models via weight pruning
Abstract
Federated training of large deep neural networks is often restrictive because the cost of communicating model updates grows with model size. Various model pruning techniques have been designed in centralized settings to reduce inference times. Combining centralized pruning with federated training is an intuitive way to reduce communication costs: prune the model parameters right before the communication step. Moreover, such progressive pruning during training can also reduce training times and costs. To this end, we propose FedSparsify, which performs model pruning during federated training. In our experiments in centralized and federated settings on the brain age prediction task (estimating a person’s age from their brain MRI), we demonstrate that models can be pruned up to 95% sparsity without affecting performance, even in challenging federated …
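The core idea described above (pruning model parameters before the communication step of federated averaging) can be sketched minimally as follows. This is not the paper's actual FedSparsify algorithm, just an illustrative magnitude-pruning-then-average loop; the function names and the NumPy representation of model weights are assumptions for the sketch.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def server_aggregate(client_models: list[np.ndarray]) -> np.ndarray:
    """FedAvg-style aggregation: average the (sparse) client models."""
    return np.mean(client_models, axis=0)

# Each client prunes locally before sending, so only the surviving
# weights need to be communicated to the server.
clients = [np.random.randn(100) for _ in range(4)]
pruned = [magnitude_prune(w, sparsity=0.95) for w in clients]
global_model = server_aggregate(pruned)
```

At 95% sparsity each client transmits only ~5% of its parameters, which is the communication saving the abstract refers to; a real implementation would also encode the sparse updates compactly (e.g. as index/value pairs) rather than sending dense zero-filled arrays.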
Metadata
- publication: International Workshop on Distributed, Collaborative, and Federated Learning …, 2022
- year: 2022
- publication date: 2022/9/18
- authors: Dimitris Stripelis, Umang Gupta, Nikhil Dhinagar, Greg Ver Steeg, Paul M Thompson, José Luis Ambite
- link: https://link.springer.com/chapter/10.1007/978-3-031-18523-6_14
- resource_link: https://arxiv.org/pdf/2208.11669
- book: International Workshop on Distributed, Collaborative, and Federated Learning
- pages: 141-151
- publisher: Springer Nature Switzerland