Author: Sara Honarvar
Date/Time: April 11th, 2025 at 12 pm EDT
Location: EGR-2164, Glenn L. Martin Hall
Committee members:
- Dr. Yancy Diaz-Mercado, Chair
- Dr. Nikhil Chopra
- Dr. Hosam K. Fathy
- Dr. Jin-Oh Hahn
- Dr. Dinesh Manocha, Dean’s Representative
Title of dissertation: Learning Interaction Behavior in Distributed Intelligent Systems
Abstract: This dissertation develops a formal framework for learning robust, state-dependent interaction behaviors in distributed intelligent systems. While learning-based approaches are increasingly used in multi-agent applications such as robotics and crowd-aware navigation, they often lack theoretical guarantees of stability and robustness, particularly under state-dependent or asymmetric interaction topologies. Inspired by the adaptability observed in human collaboration, this work bridges data-driven methods and control-theoretic foundations to address key challenges in systematically modeling and learning effective interaction strategies.

We begin by drawing an analogy between human motor synergies and consensus algorithms, showing that human collaboration in motor tasks can be modeled via decentralized control laws. Empirical validation on static finger force matching data demonstrates that consensus algorithms over fixed undirected graphs capture key features of human coordination. Next, we introduce a geometric Graph Neural Network (GNN) model of complex human interactions in crowds. By connecting the structure of GNNs to weighted consensus dynamics and designing edge weights informed by psychological studies of pedestrian behavior, we show how human-inspired topological priors improve both predictive performance and interpretability. Simulation results reveal the sensitivity of these models to the interaction topology, highlighting the need for principled edge-weight design in learned models. To generalize beyond static, handcrafted graph structures, we then develop a consensus framework over graphs with directed, signed, and state-dependent edge weights. Through Lyapunov-based edge agreement analysis, we derive sufficient conditions for stability and robustness, even under nonlinear and antagonistic interactions.
Building on these results, we formulate an inverse optimal control framework to learn optimal, state-dependent interaction strategies from expert demonstrations. We derive optimality conditions for recovering interaction weights, characterize the data richness requirements for successful learning, and validate the framework through numerical simulation. Overall, this work integrates theoretical insights with data-driven solutions, aiming to provide useful perspectives toward safer, more intuitive, and more reliable cooperative distributed systems.
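For readers unfamiliar with the consensus dynamics the abstract builds on, a minimal sketch follows. The graph, edge weights, step size, and initial states below are illustrative assumptions for a fixed undirected topology, not the dissertation's actual models or data.

```python
import numpy as np

# Illustrative weighted consensus on a fixed undirected graph (hypothetical
# example). Each agent i follows the decentralized update
#   x_i(k+1) = x_i(k) + h * sum_j w_ij * (x_j(k) - x_i(k))
# so every agent moves toward a weighted average of its neighbors.

# Symmetric nonnegative weight matrix for 4 agents on a path graph (unit weights)
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

x = np.array([1.0, -2.0, 4.0, 3.0])  # initial agent states
h = 0.1                              # Euler step size, small enough for stability

for _ in range(500):
    # W @ x sums w_ij * x_j; W.sum(axis=1) * x subtracts each agent's
    # weighted degree times its own state, giving the Laplacian flow.
    x = x + h * (W @ x - W.sum(axis=1) * x)

print(x)  # states converge to the average of the initial conditions (1.5)
```

For an undirected connected graph with nonnegative weights, this flow drives all states to the average of the initial conditions; the directed, signed, and state-dependent extensions analyzed in the dissertation relax exactly these assumptions.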