ANCL Seminars
The ANCL seminar series for the 2020-2021 academic year.
The tentative speaker schedule for the semester is as follows:
- Wuguang WANG
- Shuocheng KANG
- Yanying YU
- Cunhao WEI
- Jiawei ZHU
- Wenbo HU
- Liwen ZHANG
- Chuanqi HUANG
- Jiaxin WU
- Fanwei MENG
- Lan DUO
- Peng WANG
- Baizheng AN
- Yong DU
- Jin JIN
- Qingfu CUI
- Guoqing WANG
- Peng JING
- Lepeng MA
- Chengwang YANG
- Xiao WANG
- Chunxiang JIA
Seminar information
2020.12
Day: 2020.12.01
- Speaker: Chuanqi HUANG
- Title: Distributed Average Tracking in Weight-Unbalanced Directed Networks
- Abstract: This note studies a distributed average tracking (DAT) problem, in which a collection of agents work collaboratively, subject to local communication, to track the average of a set of reference signals, each of which is available to a single agent. Our primary objective is to seek a design methodology for DAT under possibly weight-unbalanced directed networks, the most general and thus most challenging case from the network topology perspective, for which few results exist in the literature. For this purpose, we propose a distributed algorithm based on a chain of two integrators coupled with a distributed estimator. It is found that the convergence depends not only on the network topology but also on the deviations among the reference signal accelerations. Another primary interest of this note stems from the dynamics perspective, a point perceived as a main source of control design difficulty for multi-agent systems. Indeed, we devise a nonlinear algorithm capable of achieving DAT under weight-unbalanced directed networks for agents subject to high-order integrator dynamics. The results show that convergence to a vicinity of the average of the reference signals is guaranteed as long as the signals' states and control inputs are all bounded. Both algorithms are robust to initialization errors, i.e., DAT is ensured even if the agents are not correctly initialized, enabling potential applications in a wider spectrum of application domains.
- Index Terms: Distributed average tracking, multi-agent system, weight-unbalanced directed graphs.
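As a point of reference for the talk, the following is a minimal sketch of the classical linear dynamic average consensus estimator over an undirected network (an assumption of mine; it is not the note's unbalanced-digraph algorithm, and it needs the correct initialization that the note's algorithms dispense with):

```python
# Each agent holds a constant reference r_i, initializes its estimate at r_i,
# and repeatedly applies x <- x - eps * L x.  Because 1^T L = 0, the sum of the
# estimates is invariant, so all agents converge to the average of r.
r = [1.0, 2.0, 6.0]                 # reference signals; average = 3.0
L = [[1, -1, 0],                    # graph Laplacian of the path 1-2-3
     [-1, 2, -1],
     [0, -1, 1]]
eps = 0.2
x = list(r)                         # correct initialization: x_i(0) = r_i(0)
for _ in range(300):
    Lx = [sum(L[i][j] * x[j] for j in range(3)) for i in range(3)]
    x = [x[i] - eps * Lx[i] for i in range(3)]
print(x)  # -> approximately [3.0, 3.0, 3.0]
```

If an agent were initialized incorrectly, the invariant sum would be wrong and the limit would miss the true average, which is exactly the fragility the note's initialization-robust algorithms address.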
2020.11
Day: 2020.11.24
- Speaker: Liwen ZHANG
- Title: Fragility Limits Performance in Complex Networks
- Abstract: While numerous studies have suggested that large natural, biological, social, and technological networks are fragile, convincing theories are still lacking to explain why natural evolution and human design have failed to optimize networks and avoid fragility. In this paper we provide analytical and numerical evidence that a tradeoff exists in networks with linear dynamics, according to which general measures of robustness and performance are in fact competitive features that cannot be simultaneously optimized. Our findings show that large networks can either be robust to variations of their weights and parameters, or efficient in responding to external stimuli, processing noise, or transmitting information across long distances. As illustrated in our numerical studies, this performance tradeoff seems agnostic to the specific application domain, and in fact it applies to simplified models of ecological, neuronal, and traffic networks.
- Index Terms: none.
Day: 2020.11.17
- Speaker: Wenbo HU
- Title: Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
- Abstract: Exploration in multi-agent reinforcement learning is a challenging problem, especially in environments with sparse rewards. We propose a general method for efficient exploration by sharing experience amongst agents. Our proposed algorithm, called Shared Experience Actor-Critic (SEAC), applies experience sharing in an actor-critic framework. We evaluate SEAC in a collection of sparse-reward multi-agent environments and find that it consistently outperforms two baselines and two state-of-the-art algorithms by learning in fewer steps and converging to higher returns. In some harder environments, experience sharing makes the difference between learning to solve the task and not learning at all.
- Index Terms: none.
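The SEAC actor update can be paraphrased as follows (a sketch under my reading of the paper, not the authors' implementation; the function name, argument layout, and the sharing coefficient `lam` are illustrative):

```python
import math

def seac_actor_loss(logp_own, adv_own, shared, lam=1.0):
    """Policy-gradient loss on the agent's own experience, plus
    importance-weighted policy-gradient terms on other agents' experience.
    shared: list of (logp_i, logp_j, adv_j), where logp_i is this agent's
    log-probability of agent j's action and logp_j is agent j's own."""
    loss = -logp_own * adv_own
    for logp_i, logp_j, adv_j in shared:
        weight = math.exp(logp_i - logp_j)   # importance weight pi_i / pi_j
        loss += -lam * weight * logp_i * adv_j
    return loss

# With identical policies the weight is 1, and the shared term reduces to an
# ordinary policy-gradient term on the neighbour's transition.
print(seac_actor_loss(-1.0, 2.0, [(-1.0, -1.0, 1.0)]))  # -> 3.0
```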
Day: 2020.11.10
- Speaker: Shuocheng KANG
- Title: Distributed Heavy-Ball: A Generalization and Acceleration of First-Order Methods With Gradient Tracking
- Abstract: We study distributed optimization to minimize a sum of smooth and strongly-convex functions. Recent work on this problem uses gradient tracking to achieve linear convergence to the exact global minimizer. However, a connection among different approaches has been unclear. In this paper, we first show that many of the existing first-order algorithms are related with a simple state transformation, at the heart of which lies a recently introduced algorithm known as AB. We then present distributed heavy-ball, denoted as ABm, that combines AB with a momentum term and uses nonidentical local step-sizes. By simultaneously implementing both row- and column-stochastic weights, ABm removes the conservatism in the related work due to doubly stochastic weights or eigenvector estimation. ABm thus naturally leads to optimization and average consensus over both undirected and directed graphs. We show that ABm has a global R-linear rate when the largest step-size and momentum parameter are positive and sufficiently small. We numerically show that ABm achieves acceleration, particularly when the objective functions are ill-conditioned.
- Index Terms: Accelerated first-order methods, cooperative control, distributed optimization, heavy-ball momentum.
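The iteration described in the abstract can be sketched on a toy problem (my reading of the abstract, not the authors' code: local quadratics f_i(x) = 0.5*(x - c_i)^2, a small weight-unbalanced digraph, and illustrative step-size/momentum values):

```python
# ABm sketch: x-update mixes with the row-stochastic A and adds heavy-ball
# momentum; y is a gradient tracker mixed with the column-stochastic B.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

c = [1.0, 4.0, 10.0]                      # local data; optimum = mean(c) = 5.0
def grad(x):
    return [x[i] - c[i] for i in range(3)]

# Digraph with edges 1->2, 2->3, 3->1, 1->3 (weight-unbalanced but strongly
# connected).  A is row-stochastic, B is column-stochastic.
A = [[0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0],
     [1/3, 1/3, 1/3]]
B = [[1/3, 0.0, 0.5],
     [1/3, 0.5, 0.0],
     [1/3, 0.5, 0.5]]

alpha, beta = 0.02, 0.05                  # small step-size and momentum
x = [0.0, 0.0, 0.0]
x_prev = list(x)
y = grad(x)                               # tracker initialized at grad f(x_0)
g_old = grad(x)
for _ in range(5000):
    Ax = matvec(A, x)
    x_new = [Ax[i] - alpha * y[i] + beta * (x[i] - x_prev[i]) for i in range(3)]
    g_new = grad(x_new)
    By = matvec(B, y)
    y = [By[i] + g_new[i] - g_old[i] for i in range(3)]
    x_prev, x, g_old = x, x_new, g_new

print(x)  # each agent's estimate approaches the global minimizer 5.0
```

Since B is column-stochastic, 1^T y_k = 1^T grad f(x_k) is preserved, so the tracker vanishing forces the sum of local gradients to zero, i.e. agreement on the global minimizer.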
Day: 2020.11.03
- Speaker: Cunhao WEI
- Title: Resilient Consensus of Discrete-Time Complex Cyber-Physical Networks Under Deception Attacks
- Abstract: none.
- Index Terms: Complex cyber-physical networks, deception attacks, resilient consensus, robust graph, trusted edges.
2020.10
Day: 2020.10.27
- Speaker: Jiawei ZHU
- Title: Target Control of Directed Networks based on Network Flow Problems
- Abstract: Target control of directed networks, which aims to control only a target subset instead of the entire set of nodes in large natural and technological networks, is an outstanding challenge faced in various real-world applications. We address one fundamental issue regarding this challenge, i.e., for a given target subset, how to allocate a minimum number of control sources, which provide input signals to the network nodes. This issue remains open in general networks with loops. We show that if this issue is relaxed to a path cover problem, then it can be further converted into a maximum network flow problem. A method called "maximum-flow based target path-cover" (MFTP) with complexity O(|V|^(1/2)|E|), where |V| and |E| denote the numbers of network nodes and edges, respectively, is proposed. It is also rigorously proven that MFTP provides the minimum number of control sources to control the target nodes in directed networks if the target control problem can be relaxed to the path cover problem, whether loops exist or not. We anticipate that this work will serve wide applications in the target control of real-life networks, as well as in the counter-control of various complex systems.
- Index Terms: Directed networks, maximum network flow, path cover problems, target controllability.
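The path-cover relaxation has a classical special case worth keeping in mind (an assumption of mine for illustration: full-node cover on a DAG, not MFTP's target-subset version with loops): on a DAG, a minimum path cover has size |V| minus a maximum bipartite matching between "out" and "in" copies of the nodes, and each path needs one control source at its head.

```python
# Minimum path cover of a DAG via Kuhn's augmenting-path matching: matching an
# edge u->v merges u's path with v's, so min #paths = |V| - max matching.
def min_path_cover(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match = [-1] * n                      # match[v] = u if edge u->v is used

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match[v] == -1 or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in range(n))
    return n - matching

# DAG with 4 nodes and edges 0->1, 1->2, 0->3: covered by the two paths
# (0,1,2) and (3), so 2 control sources suffice.
print(min_path_cover(4, [(0, 1), (1, 2), (0, 3)]))  # -> 2
```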
Day: 2020.10.20
- Speaker: Yanying YU
- Title: Topology Reconstruction of Dynamical Networks via Constrained Lyapunov Equations
- Abstract: The network structure (or topology) of a dynamical network is often unavailable or uncertain. Hence, we consider the problem of network reconstruction. Network reconstruction aims at inferring the topology of a dynamical network using measurements obtained from the network. In this technical note we define the notion of solvability of the network reconstruction problem. Subsequently, we provide necessary and sufficient conditions under which the network reconstruction problem is solvable. Finally, using constrained Lyapunov equations, we establish novel network reconstruction algorithms, applicable to general dynamical networks. We also provide specialized algorithms for specific network dynamics, such as the well-known consensus and adjacency dynamics.
- Index Terms: Dynamical networks, Lyapunov equations, network reconstruction, topology identification.
Day: 2020.10.13
- Speaker: Wuguang WANG
- Title: Distributed Aggregative Optimization over Multi-Agent Networks
- Abstract: This paper proposes a new framework for distributed optimization, called distributed aggregative optimization, which allows local objective functions to be dependent not only on their own decision variables, but also on the average of summable functions of decision variables of all other agents. To handle this problem, a distributed algorithm, called distributed gradient tracking (DGT), is proposed and analyzed, where the global objective function is strongly convex, and the communication graph is balanced and strongly connected. It is shown that the algorithm can converge to the optimal variable at a linear rate. A numerical example is provided to corroborate the theoretical result.
- Index Terms: Distributed algorithm, aggregative optimization, multi-agent networks, strongly convex function, linear convergence rate.
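The aggregative structure can be made concrete with a toy instance (my own choice of functions, not from the paper): f_i(x_i, u) = (x_i - d_i)^2 + x_i*u with aggregate u(x) = (1/n) * sum_j phi_j(x_j) and phi_j(x) = x^2. The gradient of the global objective with respect to x_k picks up an extra chain-rule term through u, which is precisely what DGT's trackers estimate in a distributed way.

```python
# Global objective and its analytic partial derivative; a central finite
# difference confirms the chain-rule term through the aggregate u(x).
def F(x, d):
    n = len(x)
    u = sum(xj * xj for xj in x) / n
    return sum((x[i] - d[i]) ** 2 + x[i] * u for i in range(n))

def grad_k(x, d, k):
    n = len(x)
    u = sum(xj * xj for xj in x) / n
    # d/dx_k [x_i * u] summed over i contributes (phi_k'(x_k)/n) * sum_i x_i
    return 2 * (x[k] - d[k]) + u + (2 * x[k] / n) * sum(x)

d = [1.0, 2.0, 3.0]
x = [0.5, -1.0, 2.0]
h = 1e-6
for k in range(3):
    xp = list(x); xp[k] += h
    xm = list(x); xm[k] -= h
    numeric = (F(xp, d) - F(xm, d)) / (2 * h)
    assert abs(grad_k(x, d, k) - numeric) < 1e-5
print(round(grad_k(x, d, 0), 6))  # -> 1.25
```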