Please use this identifier to cite or link to this item: https://open.uns.ac.rs/handle/123456789/585
Title: Convergence Rates for Distributed Stochastic Optimization over Random Networks
Authors: Jakovetić, Dušan 
Bajović, Dragana 
Sahu, A.
Kar, S.
Issue Date: 18-Jan-2019
Journal: Proceedings of the IEEE Conference on Decision and Control
Abstract: © 2018 IEEE. We establish the O(1/k) convergence rate for distributed stochastic gradient methods that operate over strongly convex costs and random networks. The considered class of methods is standard: each node performs a weighted average of its own and its neighbors' solution estimates (consensus), and takes a negative step with respect to a noisy version of its local function's gradient (innovation). The underlying communication network is modeled through a sequence of temporally independent identically distributed (i.i.d.) Laplacian matrices such that the underlying graphs are connected on average; the local gradient noises are also i.i.d. in time, have finite second moment, and possibly unbounded support. We show that, after a careful setting of the consensus and innovations potentials (weights), the distributed stochastic gradient method achieves an order-optimal O(1/k) convergence rate in the mean square distance from the solution. To the best of our knowledge, this is the first order-optimal convergence rate result on distributed strongly convex stochastic optimization when the network is random and the gradient noises have unbounded support. Simulation examples confirm the theoretical findings.
URI: https://open.uns.ac.rs/handle/123456789/585
ISBN: 9781538613955
ISSN: 0743-1546
DOI: 10.1109/CDC.2018.8619228
Appears in Collections:FTN Publikacije/Publications
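The method described in the abstract combines a consensus step with an innovation step: roughly, node i updates x_i(k+1) = x_i(k) - beta_k * sum_{j in N_i(k)} (x_i(k) - x_j(k)) - alpha_k * (grad f_i(x_i(k)) + noise), where N_i(k) are node i's neighbors in the random graph drawn at step k, beta_k is the consensus weight, and alpha_k is the innovation (step-size) weight. The sketch below is a minimal illustration of that update, not the paper's setup: the scalar quadratic local costs, the Erdos-Renyi random-network model, the Gaussian gradient noise, and the weight schedules alpha_k = 1/k and beta_k proportional to 1/k^0.51 are assumptions made here for illustration only; the paper treats general strongly convex costs and tunes the weight sequences to obtain the O(1/k) mean-square rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): n nodes, each holding a strongly
# convex quadratic local cost f_i(x) = 0.5 * (x - b_i)^2, so the network-wide
# optimum of sum_i f_i is the average of the b_i.
n_nodes = 10
b = rng.normal(size=n_nodes)      # local minimizers; global solution is b.mean()
x = np.zeros(n_nodes)             # each node's current estimate

def random_laplacian(p_edge=0.5):
    """Laplacian of an Erdos-Renyi graph drawn i.i.d. at every iteration,
    standing in for the paper's i.i.d. random-network model."""
    A = (rng.random((n_nodes, n_nodes)) < p_edge).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                   # symmetric adjacency, no self-loops
    return np.diag(A.sum(axis=1)) - A

noise_std = 0.1                   # gradient noise with finite second moment

for k in range(1, 5001):
    L = random_laplacian()
    alpha_k = 1.0 / k             # innovation (step-size) weight, ~ 1/k
    beta_k = 0.1 / k**0.51        # consensus weight, decaying more slowly
    noisy_grad = (x - b) + noise_std * rng.normal(size=n_nodes)
    # Consensus (mix with neighbors via the graph Laplacian) plus innovation
    # (negative step along the noisy local gradient):
    x = x - beta_k * (L @ x) - alpha_k * noisy_grad

print("node estimates:", np.round(x, 3))
print("network-wide optimum:", round(b.mean(), 3))
```

Running this, the node estimates cluster around b.mean(), with the mean-square error shrinking as the iterations proceed; this is the behavior that the O(1/k) rate quantifies.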
