Delight

Funded PhD position: Exploring the tradeoffs between energy and performance of federated learning algorithms

Context

There is growing interest in a new distributed ML paradigm called Federated Learning (FL) [La17], in which nodes compute their local gradients and communicate them to a central server. This server orchestrates rounds of training over the large volumes of data created and stored locally at a large number of nodes, and the procedure repeats until some stopping criterion is met. FL thus enables the participating nodes (e.g., IoT devices or mobile phones) to keep their raw data local, addressing the data security and privacy requirements imposed by law.
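
To make the round structure concrete, the sketch below illustrates the procedure described above: local gradient computation at the nodes, aggregation at the central server, and repetition until a stopping criterion is met. It is a minimal Python/NumPy illustration; the linear least-squares model, the learning rate, and the function names are assumptions made for the example, not part of the position description.

```python
import numpy as np

def local_gradient(w, X, y):
    # Client-side step: gradient of a local loss on private data.
    # A linear least-squares model is used here purely for illustration.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def server_round(w, clients, lr=0.1):
    # Server-side step: collect each node's local gradient and apply
    # their dataset-size-weighted average to the global model.
    grads = [local_gradient(w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    return w - lr * np.average(grads, axis=0, weights=sizes)

# Synthetic shards: each node keeps its (X, y) local; only gradients travel.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(200):                       # training rounds
    w_next = server_round(w, clients)
    if np.linalg.norm(w_next - w) < 1e-8:  # a simple stopping criterion
        break
    w = w_next
print(w)  # approaches true_w without raw data ever leaving the nodes
```

In a real deployment the nodes would be devices communicating over a network and the stopping criterion might be a target accuracy or a fixed round budget; the point of the sketch is only that raw data stays on the nodes while model updates move to the server.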