The Distributed and Collaborative Intelligent Systems and Technology (DCIST) Collaborative Research Alliance (CRA) will create Autonomous, Resilient, Cognitive, Heterogeneous Swarms that enable humans to participate in a wide range of missions in dynamically changing, harsh, and contested environments. These include search and rescue of hostages, information gathering after terrorist attacks or natural disasters, and humanitarian missions. Humans and robots will operate as a cohesive team, with robots keeping humans out of harm's way (Force Protection) and extending and amplifying their reach to allow one human to do the work of ten (Force Multiplication). The team is led by the University of Pennsylvania and includes collaborators from the U.S. Army Research Laboratory, Massachusetts Institute of Technology, Georgia Institute of Technology, University of California, and University of Southern California.
This project tackles the need for scalable computational methods for joint perception and planning that enable a team of autonomous agents to extract task-relevant information from heterogeneous and distributed data sources, supporting collaborative and distributed optimal planning with formal performance guarantees. Future control and decision-making systems need to strike a balance between guaranteed performance in the presence of uncertainty, extraction of information relevant to the task at hand, and reduced algorithmic complexity that enables real-time inference, planning, and learning. This confluence of control, estimation, machine learning, and computational science and engineering is necessary for high-confidence, high-reliability, minimal-supervision autonomous systems that can understand and act in high-optempo missions. This project develops: unified perception/action representations that can be used across different modalities and spatiotemporal scales; task-aware perception methods that account for the underlying control objective and the available informational and computational resources across the team; quantification of uncertainty in the unified multi-modal representations to enable principled information exchange among agents; hierarchical team architectures and abstractions that model planning and estimation problems at the right level of granularity; and distributed decision-making and inference methods capable of handling streaming multi-resolution data.
This project aims to develop new theoretical and algorithmic tools for advancing the ability of autonomous systems to comprehend their surroundings online from sensory observations and to adapt their operation safely in response to changing conditions. The key innovations include techniques for online inference of object shapes and robot dynamics models from sensory observations, as well as control design for the learned robot dynamics subject to safety constraints derived from the observed objects. We are developing online perception algorithms that jointly estimate object poses, object shapes, and robot motion, which enables the specification of safety constraints for autonomous navigation. We are leveraging Koopman operator theory and Bayesian neural networks to learn robot dynamics and infer object shapes. Our algorithms provide an adaptive way of estimating robot dynamics from online data, while relying on approximation error bounds to guarantee that the control design satisfies the safety constraints provided by the perceptual system. This project will enable autonomous robot operation in unknown environments that is adaptable, due to the use of learned robot and object models, and safe, due to the use of perception- and uncertainty-aware constraints in the control design.
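To make the dynamics-learning step concrete, the following is a minimal sketch of Koopman-style model fitting via extended dynamic mode decomposition (EDMD): trajectory data is lifted through a dictionary of functions, and a linear operator is fit by least squares in the lifted space. The polynomial dictionary, the toy oscillator data, and all function names here are illustrative assumptions; the project itself learns the lifting and quantifies its uncertainty with Bayesian neural networks.

```python
import numpy as np

def lift(x):
    """Hypothetical polynomial lifting psi(x) for a 2D state.

    The real method would learn this lifting; a fixed polynomial
    dictionary is used here purely for illustration.
    """
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def fit_koopman(X, X_next):
    """Least-squares EDMD estimate of the Koopman matrix K.

    X, X_next: (N, 2) arrays of consecutive states from trajectory data.
    Solves min_W || Psi_next - Psi W ||_F so that psi(x_{t+1}) ~ K psi(x_t).
    """
    Psi = np.array([lift(x) for x in X])            # (N, d)
    Psi_next = np.array([lift(x) for x in X_next])  # (N, d)
    W, *_ = np.linalg.lstsq(Psi, Psi_next, rcond=None)
    return W.T  # so that psi_next ~ K @ psi

def predict(K, x, steps):
    """Roll the lifted linear model forward and read out the state."""
    z = lift(x)
    traj = [x]
    for _ in range(steps):
        z = K @ z
        traj.append(z[1:3])  # state components sit at indices 1..2 of psi
    return np.array(traj)

# Toy data from a damped oscillator, standing in for real robot logs.
rng = np.random.default_rng(0)
A = np.array([[0.98, 0.10], [-0.10, 0.95]])
X = rng.normal(size=(500, 2))
X_next = X @ A.T + 0.01 * rng.normal(size=X.shape)

K = fit_koopman(X, X_next)
print(predict(K, np.array([1.0, 0.0]), steps=5))
```

Because the lifted model is linear, control design and approximation error bounds can reuse tools from linear systems theory, which is the main appeal of the Koopman view.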
Artificial perception techniques, allowing robot systems to know their location and surroundings using sensory data, have been instrumental for enabling robot automation outside of carefully controlled manufacturing settings. Current robot systems, however, remain passive in their perception of the world. Unlike biological systems, robots lack curiosity mechanisms for exploration and uncertainty mitigation, which are critical for intelligent decision making. Such capabilities are important in disaster response, security and surveillance, and environmental monitoring, where it is necessary to quickly gain situational awareness of the terrain, buildings, and humans in the environment. The methods developed in this project will impact the design of mapping and active sensing algorithms for autonomous robot teams and their use in the aforementioned applications. This Faculty Early Career Development (CAREER) Program research develops fundamental robot autonomy capabilities that will also impact other domains relying on autonomous robots. In addition, the project will develop a suite of open-source education materials, including theoretical problems, projects, lectures, and exemplary implementations of core robotics algorithms, unified in an easily accessible simulation environment. This platform will support curriculum development for graduate students, as well as outreach and research-initiation activities for undergraduate and K-12 students.
TILOS is a National Science Foundation (NSF) funded National Artificial Intelligence (AI) Research Institute. TILOS is a partnership of faculty from the University of California, San Diego, Massachusetts Institute of Technology, National University, University of Pennsylvania, University of Texas at Austin, and Yale University. The TILOS mission is to make impossible optimizations possible, at scale and in practice. Our research will pioneer learning-enabled optimizations that transform chip design, robotics, networks, and other use domains that are vital to our nation's health, prosperity, and welfare. TILOS research pursues five main pillars: (1) bridging discrete and continuous optimization; (2) distributed, parallel, and federated optimization; (3) optimization on manifolds; (4) dynamic decision-making under uncertainty; and (5) non-convex optimization in deep learning.
This project aims to take advantage of the hyperconvergence of computation, storage, sensing, and communication in small unmanned aerial vehicles (UAVs) to realize large-scale mapping of environmental factors, such as temperature, vegetation, pressure, and chemical concentration, that contribute to fire initiation. Developing UAV teams that recharge autonomously and communicate intermittently, both among each other and with static sensors, will aid firefighters with continuous real-time surveillance and early detection of incipient fires. This project focuses on three fundamental innovations to address the scientific challenges associated with autonomous, collaborative environmental monitoring. First, a new Satisfiability Modulo Optimal Control framework is proposed to handle mixed continuous flight dynamics and discrete constraints and to ensure collision avoidance, persistent communication, and autonomous recharging during UAV navigation. Second, a distributed systems architecture using new uncertainty-weighted models will be developed to enable cooperative mapping across a heterogeneous team of UAVs and static sensors while avoiding bandwidth-intensive data streaming. Lastly, a new Bayesian learning and inference approach is proposed to generate multi-modal (e.g., thermal, semantic, geometric, chemical) maps of real-time environmental conditions with adaptive accuracy and uncertainty quantification.
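As a concrete illustration of the second innovation, uncertainty-weighted cooperative mapping can be sketched as precision-weighted fusion of per-cell Gaussian field estimates: each agent shares only a mean and a variance per map cell rather than raw sensor streams. The field values, grid size, and function names below are hypothetical, and the independent-cell Gaussian assumption is a deliberate simplification of the project's models.

```python
import numpy as np

def fuse_maps(mu_a, var_a, mu_b, var_b):
    """Inverse-variance (precision-weighted) fusion of two per-cell
    Gaussian field estimates, e.g. temperature maps from two UAVs.

    Exchanging only (mean, variance) summaries per cell avoids
    streaming raw sensor data between agents.
    """
    prec = 1.0 / var_a + 1.0 / var_b            # fused precision
    mu = (mu_a / var_a + mu_b / var_b) / prec   # precision-weighted mean
    return mu, 1.0 / prec

# Two UAVs hold 4x4 cell estimates of a scalar field (e.g. temperature).
rng = np.random.default_rng(1)
truth = 20.0 + 5.0 * rng.random((4, 4))
var_a = np.full((4, 4), 4.0)   # UAV A: noisy, far-away readings
var_b = np.full((4, 4), 1.0)   # UAV B: close-up, more confident
mu_a = truth + rng.normal(scale=np.sqrt(var_a))
mu_b = truth + rng.normal(scale=np.sqrt(var_b))

mu, var = fuse_maps(mu_a, var_a, mu_b, var_b)
print("fused variance per cell:", var[0, 0])  # 0.8 < min(4.0, 1.0)
```

Because each cell is summarized by two numbers, the communication cost per map exchange is fixed, regardless of how many raw measurements each UAV has collected.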
Planck Aero and UC San Diego will collaborate to adapt and modify technology to address defense needs, providing an innovative, modular aerial navigation system that identifies and monitors the accuracy, availability, and integrity of sensor sources and ingests all data into algorithms that produce a precision navigation solution for the duration of a mission, without reliance on GPS or human intervention. The result is a robust and accurate odometry and localization system for high-speed flight on either manned or unmanned aircraft. The modular system augments existing designs and architectures while providing a higher degree of reliability for autonomous navigation. This system relieves users of their dependence on GPS, which is not always reliable or available. The technology will be commercialized for logistics, urban air mobility, and military operations in contested environments.
Applications for unmanned aerial and ground vehicles requiring autonomous navigation in unknown, cluttered, and dynamically changing environments are increasing in fields such as transportation, delivery, agriculture, environmental monitoring, and construction. To achieve safe, resilient, and self-improving autonomous navigation, this project focuses on the design of adaptive online environment understanding and Lyapunov-theoretic control techniques that guarantee stable and collision-free operation in challenging conditions. This research direction is important because current practices rely on prior or hand-crafted maps that attempt to capture the whole environment, even if parts are irrelevant for specific navigation tasks. This increases memory and computation requirements, spreads the effects of noise, and makes current approaches brittle, particularly in conditions involving dynamic obstacles, unreliable localization, or illumination variation.
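One standard way to combine such stability and collision-avoidance guarantees, shown here only as a hedged sketch, pairs a Lyapunov-based nominal controller with a control barrier function (CBF) safety filter; for single-integrator dynamics and one circular obstacle, the per-step filter has a closed-form solution. The dynamics, gains, and obstacle geometry below are illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

def safe_controller(x, goal, obs, r, k=1.0, alpha=1.0):
    """Lyapunov-based nominal controller filtered by a control barrier
    function (CBF), for single-integrator dynamics x_dot = u.

    Safety set: h(x) = ||x - obs||^2 - r^2 >= 0 (stay outside a disk).
    The QP  min ||u - u_nom||^2  s.t.  grad_h . u >= -alpha * h(x)
    has a closed-form solution for one constraint: project u_nom
    onto the half-space when the constraint is active.
    """
    u_nom = -k * (x - goal)                 # descends V = ||x - goal||^2 / 2
    h = np.dot(x - obs, x - obs) - r**2     # barrier value
    grad_h = 2.0 * (x - obs)
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:                        # constraint inactive
        return u_nom
    # Active: minimal correction along grad_h restores h_dot >= -alpha * h.
    return u_nom - slack * grad_h / (grad_h @ grad_h)

# Drive to the origin while steering around a disk centered at (1.2, 0).
x, goal, obs = np.array([2.5, 0.1]), np.zeros(2), np.array([1.2, 0.0])
dt = 0.05
for _ in range(200):
    x = x + dt * safe_controller(x, goal, obs, r=0.5)
print(x, "distance to obstacle:", np.linalg.norm(x - obs))
```

With a single constraint the quadratic program reduces to projecting the nominal command onto a half-space, so no QP solver is needed; with multiple obstacles or learned dynamics models, a general QP solver would take its place.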
The goal of this project is to develop planning algorithms for autonomous exploration and mapping. This is an important problem for robots operating in unknown environments, with applications to floor cleaning and environmental monitoring. Our approach maintains an occupancy grid map of the environment and constructs a maximal-clearance graph from it. The graph is searched for a robot trajectory that maximizes the Cauchy-Schwarz quadratic mutual information between the map and future lidar scans of the environment. The use of an information measure to optimize the sensing trajectories leads to both faster exploration and higher-fidelity map reconstruction compared to methods that use simple geometric objectives such as visibility maximization or frontier-based exploration.
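The following is a simplified sketch of the information-driven scoring idea, substituting the Shannon entropy of cells within sensor range for the Cauchy-Schwarz quadratic mutual information used in the project, and ignoring ray occlusion; the grid values, candidate viewpoints, and function names are made up for illustration.

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of a Bernoulli occupancy probability."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def info_score(grid, pose, sensor_range):
    """Sum of occupancy entropies of cells within sensor range of pose:
    a crude stand-in for the mutual information between the map and a
    future scan (ray occlusion is ignored for brevity)."""
    rows, cols = np.indices(grid.shape)
    dist = np.hypot(rows - pose[0], cols - pose[1])
    visible = dist <= sensor_range
    return cell_entropy(grid[visible]).sum()

# Occupancy grid: 0.5 = unknown, values near 0/1 = already mapped.
grid = np.full((20, 20), 0.5)
grid[:10, :] = 0.05                        # top half already mapped free

candidates = [(2, 10), (9, 10), (16, 10)]  # hypothetical graph nodes
scores = {c: info_score(grid, c, sensor_range=4.0) for c in candidates}
best = max(scores, key=scores.get)
print(scores, "-> explore from", best)     # picks the unknown bottom half
```

Even this crude proxy shows the qualitative behavior described above: viewpoints over unmapped regions score highest, while revisiting well-mapped areas scores near zero.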