Ph.D. Dissertation Defense: Van Sy Mai
Friday, March 31, 2017, 2:00 p.m., CSS 2115
For More Information: Maria Hoo, 301-405-3681, firstname.lastname@example.org
ANNOUNCEMENT: Ph.D. Dissertation Defense
Name: Van Sy Mai
Committee:
Professor Eyad H. Abed, Chair/Advisor
Professor P. S. Krishnaprasad
Professor Richard J. La
Professor Andre L. Tits
Professor Nikhil Chopra, Dean's Representative
Date/Time: 2:00 p.m., Friday, March 31, 2017
Place: CSS 2115
Title: Consensus, Prediction and Optimization in Directed Networks
Abstract: This dissertation develops theory and algorithms for distributed consensus in multi-agent networks. The models considered are opinion dynamics based on the well-known DeGroot model. We study three related topics: consensus in networks with leaders, consensus prediction, and distributed optimization.
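For readers unfamiliar with the DeGroot model, the sketch below simulates its basic update, in which each agent repeatedly replaces its opinion with a weighted average of its neighbors' opinions; the network and weights here are illustrative, not taken from the dissertation.

```python
import numpy as np

# DeGroot opinion dynamics: x(t+1) = W x(t), with W row stochastic
# (each row sums to 1). For a strongly connected, aperiodic directed
# network, all opinions converge to a common consensus value.
W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])   # illustrative weights; rows sum to 1
x = np.array([0.0, 0.5, 1.0])    # initial opinions
for _ in range(200):
    x = W @ x                    # repeated local averaging
print(np.round(x, 4))            # all three entries agree
```

The limit is a convex combination of the initial opinions, weighted by the left Perron eigenvector of W, so it always lies between the smallest and largest initial opinion.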
First, we revisit the problem of agreement seeking in a weighted directed network in the presence of leaders. We develop new sufficient conditions for consensus, weaker than existing conditions, for both fixed and switching network topologies; these conditions emphasize the importance not only of persistent connectivity between the leader and the followers but also of the strength of those connections. We then study the problem of a leader aiming to maximize its influence on the opinions of the network agents through targeted connections with a limited number of agents, possibly in the presence of another leader holding a competing opinion. We reveal fundamental properties of leader influence, defined in terms of either the transient behavior or the achieved steady-state opinions of the network agents. In particular, not only is this influence a supermodular set function, but its continuous relaxation is also convex for any strongly connected directed network. These results pave the way for efficient approximation algorithms admitting quality certifications, which, when combined, provide effective tools and sharper analysis for optimal influence spreading in large networks.
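To see how leaders shape steady-state opinions, the sketch below uses the standard DeGroot setting with stubborn leaders (not the dissertation's influence measure): two leaders hold fixed competing opinions, followers keep averaging, and the steady state is obtained from a linear solve. All weights are illustrative.

```python
import numpy as np

# Three followers, two stubborn leaders with competing opinions s1 = 1, s2 = 0.
# Follower update: x(t+1) = W_FF x(t) + w1 * s1 + w2 * s2, where each
# follower's total weight (followers + leaders) sums to 1. Since every
# follower can reach a leader, the spectral radius of W_FF is < 1, and the
# steady state solves (I - W_FF) x* = w1 * s1 + w2 * s2.
W_FF = np.array([[0.4, 0.2, 0.0],
                 [0.3, 0.4, 0.3],
                 [0.0, 0.5, 0.3]])          # follower-to-follower weights
w1 = np.array([0.4, 0.0, 0.0])              # weights placed on leader 1
w2 = np.array([0.0, 0.0, 0.2])              # weights placed on leader 2
s1, s2 = 1.0, 0.0                           # the leaders' fixed opinions
x_star = np.linalg.solve(np.eye(3) - W_FF, w1 * s1 + w2 * s2)
print(np.round(x_star, 4))
```

Each follower's steady-state opinion is a blend of the two leaders' opinions; followers connected directly to leader 1 end up closer to s1, which is the kind of targeted-connection effect the influence-maximization problem optimizes over.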
Second, we introduce and investigate problems of network monitoring and consensus prediction. Here, an observer, without exact knowledge of the network, seeks to determine in the shortest possible time the asymptotic agreement value by monitoring a subset of the agents. We uncover a fundamental limit on the monitoring time for the case of a single observed node and analyze the case of multiple observed nodes. We provide conditions for achieving the limit in the former case and develop algorithms toward achieving conjectured bounds in the latter through local observation and local computation.
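One known route to predicting the agreement value from finitely many observations of a single node (a standard finite-time technique, offered here only as an illustration of what consensus prediction means, not as the dissertation's algorithm) exploits the fact that the observed scalar sequence satisfies a low-order linear recurrence: the kernel of a Hankel matrix of successive differences yields the recurrence coefficients, from which the limit follows.

```python
import numpy as np

def predict_consensus(x, tol=1e-8):
    """Predict the limit of a scalar sequence x[0], x[1], ... generated by a
    linear consensus iteration, from finitely many samples. Successive
    differences d[k] = x[k+1] - x[k] satisfy a linear recurrence; a kernel
    vector a of the Hankel matrix H[i, j] = d[i + j] annihilates the
    transient, so the limit is (sum_j a_j x_j) / (sum_j a_j)."""
    d = np.diff(x)
    for m in range(1, (len(d) + 1) // 2):
        H = np.array([[d[i + j] for j in range(m + 1)] for i in range(m + 1)])
        U, s, Vt = np.linalg.svd(H)
        if s[-1] < tol * max(s[0], 1.0):      # H is (numerically) singular
            a = Vt[-1]                        # kernel vector of H
            return float(x[:m + 1] @ a / a.sum())
    raise ValueError("not enough samples to detect the recurrence")

# Example: observe only agent 0 of a 3-agent directed network.
W = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.4, 0.0, 0.6]])               # illustrative row-stochastic W
x0 = np.array([1.0, 0.0, 3.0])
traj = [x0]
for _ in range(12):
    traj.append(W @ traj[-1])
obs = np.array([v[0] for v in traj])          # agent 0's trajectory only
print(predict_consensus(obs))                 # limit found from few samples
```

Here a handful of samples suffice because the minimal recurrence has low degree; how few observations can ever suffice, and from which nodes, is exactly the kind of fundamental limit the second topic addresses.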
Third, we study a distributed optimization problem in which a network of agents seeks to minimize the sum of the agents' individual objective functions, where each agent may also have a separate local constraint. We develop new distributed algorithms for this problem in which consensus prediction is employed to achieve fast convergence rates, possibly in finite time. An innovation of our algorithms is that they work under milder assumptions on the network weight matrix than are common in the literature. Most distributed algorithms require undirected networks; consensus-based algorithms can handle directed networks under the assumption that the weight matrix is doubly stochastic (i.e., both row stochastic and column stochastic), or, in some recent literature, merely column stochastic. Our algorithms work for directed networks and require only row stochasticity, a mild assumption, which matters because doubly stochastic or column stochastic weight matrices can be hard to arrange locally, especially in broadcast-based communication. We achieve this relaxation through a distributed rescaling technique. Finally, we develop a unified convergence analysis of a distributed projected subgradient algorithm and a variation of it that can be applied to both unconstrained and constrained problems without assuming boundedness or commonality of the local constraint sets.
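The sketch below illustrates why row stochasticity alone is problematic and what rescaling buys, using only standard facts about consensus iterations: with a merely row-stochastic W, the iteration x(t+1) = W x(t) converges to a weighted average pi^T x(0) determined by W's left Perron eigenvector pi, not the uniform average that sum-of-objectives minimization needs. Rescaling each initial value by 1/(n pi_i) recovers the true average. Here pi is computed centrally for clarity; a genuinely distributed scheme must estimate it locally, which is the point of a distributed rescaling technique.

```python
import numpy as np

# A row-stochastic (but not doubly stochastic) weight matrix on a
# strongly connected directed 3-node network (illustrative weights).
W = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.4, 0.0, 0.6]])
n = len(W)
x0 = np.array([3.0, 6.0, 9.0])

# Left Perron eigenvector pi: pi^T W = pi^T, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(W.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

def run(x):
    for _ in range(300):
        x = W @ x
    return x

print(run(x0))               # converges to pi @ x0: a skewed average
print(run(x0 / (n * pi)))    # rescaled run converges to mean(x0)
```

The rescaled iteration reaches the uniform average because pi_i * (x0_i / (n pi_i)) telescopes to x0_i / n; the design question a distributed algorithm must answer is how each agent obtains its own pi_i using only local communication.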