We investigate and compare the fundamental performance of several recently proposed distributed learning methods. We do this in the context of a distributed version of the classical signal-in-Gaussian-white-noise model, which serves as a benchmark model for studying performance in this setting. The results show that the design and tuning of a distributed method can have a great impact on convergence rates and on the validity of uncertainty quantification. Moreover, we highlight the difficulty of designing nonparametric distributed procedures that automatically adapt to smoothness.
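For concreteness, below is a minimal sketch of one standard sequence-space formulation of a distributed signal-in-Gaussian-white-noise model; the notation (n for the total signal-to-noise budget, m for the number of machines, θ_i for the signal coefficients, Z_i^(j) for the noise variables) is our own shorthand for illustration and is not quoted from the paper.

```latex
% A distributed signal-in-Gaussian-white-noise model, sequence form.
% Machine j = 1, ..., m observes, for coefficients i = 1, 2, ...,
% an independent noisy copy of the signal; splitting a total budget
% of n observations over m machines inflates the local noise level
% by a factor sqrt(m).
\[
  X_i^{(j)} = \theta_i + \sqrt{\frac{m}{n}}\, Z_i^{(j)},
  \qquad Z_i^{(j)} \overset{\text{i.i.d.}}{\sim} N(0,1),
  \qquad j = 1, \dots, m .
\]
```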
Journal: Journal of Machine Learning Research
Publication status: Published - 1 Jun 2019

Keywords:
- Convergence rates
- Distributed learning
- Gaussian processes
- High-dimensional models
- Nonparametric models