UFSC Conference Portal, XX Sitraer

Cristiano Perez Garcia, Li Weigang

Last modified: 2023-09-25


In the coming years, electric aircraft capable of vertical takeoff and landing (eVTOL) will bring about a revolutionary change in urban air mobility. However, this new paradigm demands the development of specific tools to manage airspace and enable autonomous operations. Numerous manufacturers are currently developing these aircraft. As adoption grows, the expected high volume of simultaneous flights will challenge air traffic control systems. Furthermore, eVTOL aircraft have the potential to operate without onboard pilots, reducing costs but also increasing the complexity of safety solutions.

A redundant set of conflict detection and resolution systems is required to address these challenges. This study explores the potential of deep reinforcement learning models to solve this problem. By leveraging embedded systems such as ADS-B for independent conflict detection, deep reinforcement learning models can suggest resolution actions even in conflict scenarios not observed during training. This generalization makes them well suited for conflict resolution, since training a system on every possible conflict configuration is impractical.
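As an illustration of the independent detection step, the sketch below estimates the closest point of approach between two aircraft from ADS-B-style state vectors (position plus velocity) and flags a conflict when predicted separation falls below a threshold within a look-ahead horizon. The straight-line motion assumption, the 2D geometry, and the separation and horizon values are hypothetical simplifications, not parameters from the study.

```python
import math

def closest_point_of_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two aircraft with
    positions p (x, y in km) and velocities v (km/min), assuming
    straight-line motion (a hypothetical simplification)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0.0:
        # Identical velocities: separation is constant over time.
        t = 0.0
    else:
        # Minimize |relative position + t * relative velocity|,
        # clamped to the future (t >= 0).
        t = max(0.0, -(dx * dvx + dy * dvy) / dv2)
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, math.hypot(cx, cy)

def in_conflict(p1, v1, p2, v2, sep_km=0.5, horizon_min=5.0):
    """Flag a conflict if predicted separation drops below sep_km
    within the look-ahead horizon (illustrative thresholds)."""
    t, d = closest_point_of_approach(p1, v1, p2, v2)
    return t <= horizon_min and d < sep_km
```

For example, two aircraft flying head-on along the same track are flagged, while two aircraft on parallel tracks with the same velocity are not.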

The study employed a system based on Deep Q-Network (DQN) models. These models resolved conflicts by adjusting aircraft trajectories with minimal deviation from their ideal paths. A customized simulator was also developed to test the deep reinforcement learning agents and compare them with alternative strategies. The results demonstrate that these models can propose maneuvers that reduce conflict occurrences without significantly impacting aircraft displacement or fuel consumption.
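The core of a DQN-style agent can be sketched as epsilon-greedy selection over a small discrete action set plus a temporal-difference update toward the target r + γ·max Q(s', a'). The sketch below uses a linear Q-function approximator as a stand-in for the deep network, and the action set of heading deviations is a hypothetical choice meant only to mirror the idea of minimal departures from the ideal path; none of these values come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete action set: heading deviations in degrees.
ACTIONS = [-15, -5, 0, 5, 15]

class LinearQ:
    """Minimal linear Q-function approximator (illustrative stand-in
    for the deep network in a full DQN)."""

    def __init__(self, n_features, n_actions, lr=0.01):
        self.w = np.zeros((n_features, n_actions))
        self.lr = lr

    def q_values(self, state):
        return state @ self.w

    def update(self, state, action, reward, next_state, gamma=0.99):
        # DQN-style TD target: r + gamma * max_a' Q(s', a').
        target = reward + gamma * np.max(self.q_values(next_state))
        td_error = target - self.q_values(state)[action]
        # Gradient step on the squared TD error for the taken action.
        self.w[:, action] += self.lr * td_error * state
        return td_error

def epsilon_greedy(q, state, epsilon=0.1):
    """Explore with probability epsilon, else pick the greedy action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q.q_values(state)))
```

A full DQN would replace `LinearQ` with a neural network and add experience replay and a separate target network; this sketch shows only the action-selection and update rule that the agents in the study build on.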


