Date Approved

8-2018

Graduate Degree Type

Thesis

Degree Name

Computer Information Systems (M.S.)

Degree Program

School of Computing and Information Systems

First Advisor

Greg Wolffe

Second Advisor

Jonathan Leidig

Third Advisor

Christian Trefftz

Academic Year

2017/2018

Abstract

Unmanned Aerial Vehicles (UAVs) are becoming more prevalent every day. In addition, advances in battery life and electronic sensors have enabled the development of diverse UAV applications outside their original military domain. For example, Search and Rescue (SAR) operations can benefit greatly from modern UAVs, since even the simplest commercial models are equipped with high-resolution cameras and the ability to stream video to a computer or portable device. As a result, autonomous unmanned systems (ground, aquatic, and aerial) have recently been employed for such typical SAR tasks as terrain mapping, task observation, and early supply delivery. However, these systems were developed before advances such as Google DeepMind's breakthrough with the Deep Q-Network (DQN) technology. Therefore, most of them rely heavily on greedy or potential-based heuristics, without the ability to learn. In this research, we present two possible problem formulations (as Partially Observable Markov Decision Processes) for enhancing the performance of autonomous UAVs in SAR by incorporating newly developed Reinforcement Learning methods. The project utilizes open-source tools such as Microsoft's state-of-the-art UAV simulator AirSim, and Keras, a machine learning framework that can make use of Google's popular tensor library, TensorFlow. The main approach investigated in this research is the Deep Q-Network.
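To make the DQN approach mentioned above concrete, the sketch below illustrates the Q-learning update rule that a Deep Q-Network is built on, using a hypothetical toy "search corridor" environment (not from the thesis). A DQN replaces the tabular Q-values below with a neural network trained toward the same Bellman target; all names and parameters here are illustrative assumptions.

```python
import random

# Tabular Q-learning on a toy 1-D "search" corridor: the agent starts
# at position 0 and must find the target at position 4. This is an
# illustrative stand-in for a DQN -- the update rule (the Bellman
# target) is the same; only the Q-function representation differs.

N_STATES = 5          # positions 0..4; the search target is at state 4
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # hypothetical hyperparameters

def step(state, action):
    """Move the agent; reward 1.0 only for reaching the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = (nxt == N_STATES - 1)
    reward = 1.0 if done else 0.0
    return nxt, reward, done

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = (random.randrange(2) if random.random() < EPSILON
                 else max(range(2), key=lambda i: q[s][i]))
            s2, r, done = step(s, ACTIONS[a])
            # Bellman update: the same target a DQN regresses toward
            target = r + (0.0 if done else GAMMA * max(q[s2]))
            q[s][a] += ALPHA * (target - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy per non-terminal state (1 = move right, toward the goal)
policy = [max(range(2), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
```

In a DQN, the table `q` becomes a network mapping an observation (e.g. a simulated camera frame from AirSim) to one Q-value per action, trained with experience replay and a target network; the core learning signal remains this temporal-difference error.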
