Office of Research, UC Riverside
Konstantinos Karydis
Associate Professor
Electrical & Computer Eng Dept
karydis@ucr.edu
(951) 827-6132


Group Travel Award for 2017 Workshop on Learning Perception and Control for Autonomous Flight: Safety, Memory, and Efficiency

AWARD NUMBER
008977-002
FUND NUMBER
33348
STATUS
Closed
AWARD TYPE
3-Grant
AWARD EXECUTION DATE
6/5/2017
BEGIN DATE
4/15/2017
END DATE
3/31/2018
AWARD AMOUNT
$12,000

Sponsor Information

SPONSOR AWARD NUMBER
1743262
SPONSOR
NATIONAL SCIENCE FOUNDATION
SPONSOR TYPE
Federal
FUNCTION
Organized Research
PROGRAM NAME

Proposal Information

PROPOSAL NUMBER
17101328
PROPOSAL TYPE
New
ACTIVITY TYPE
Basic Research

PI Information

PI
Karydis, Konstantinos
PI TITLE
Other
PI DEPARTMENT
Electrical & Computer Eng
PI COLLEGE/SCHOOL
Bourns College of Engineering
CO PIs

Project Information

ABSTRACT

Aerial robots, commonly referred to as drones, offer promise in several research, educational, defense, and commercial applications. Examples include precision agriculture, aerial photography, agile inspection and monitoring, and package delivery. In most of the applications where aerial robots have started venturing outside the research lab and into the real world, robot operation is semi-autonomous. Semi-autonomous operation typically assumes availability of a GPS signal for localization and at least some prior information about the working environment. Sensor-based, fully autonomous operation in unknown environments remains mostly at the research stage. Yet, endowing aerial robots with full autonomy can enhance their impact on the nation's education, economy, and defense. To this end, it is important to seamlessly merge perception, planning, and control for autonomous robotic flight in unknown environments. This can be achieved by integrating machine learning tools into aerial robot perception and control. Deep learning has recently emerged as a promising way to extract semantic meaning for autonomy. Learning perception and control for autonomous flight can be approached by replacing hand-engineered map representations with raw sensor observations and learning appropriate responses. However, this is not a straightforward task, and several challenges remain. This workshop critically addresses how to i) best incorporate memory and ii) derive safety guarantees for the learning-based system. These two aspects are necessary to improve the capacity of aerial robots to operate autonomously in unknown environments and to push forward the current state of the art in robotic flight. Beyond the domain of robotic flight, the outcomes of this workshop are relevant to endowing autonomy to general robotic systems that are able to learn, thus helping make autonomous robots ubiquitous.

The objective of this workshop is to address the theoretical and technical challenges in endowing learning-based systems with the capacity to operate autonomously in unknown environments. A critical step in this effort is understanding how memory-augmented autonomous learners can operate with provable safety guarantees. The workshop thus examines two highly relevant questions. i) How can the data and structure of learning-based systems be theoretically analyzed to provide guarantees on safety and task success? ii) What is the effect of long-term memory and, in particular, can recurrent connections or dynamic external memory replace global map information? The workshop seeks answers to these questions by bringing together experts from robot planning and control, reinforcement learning and deep learning, and formal methods. The workshop also solicits participation from contributed authors working in relevant areas. These include, but are not limited to, applying deep reinforcement learning for vision-based control of underactuated robots; learning visuomotor policies and deriving formal guarantees for learning based on neural networks; and developing neural network architectures that involve temporal recurrence and memory. The above questions are posed here in the context of high-speed autonomous aerial robot navigation. However, their scope generalizes to other areas of robotics in which learning perception and control for autonomous operation in unknown environments is desirable; examples include manipulation and legged locomotion.
(Abstract from NSF)