How neural networks as the basis of artificial intelligence and machine learning are already showing future potential for aerospace
26 October 2020

Neural networks are starting to see their earliest adoption, and to show potential for wider application, across multiple segments of aviation in electronic systems both on and off board aircraft. These networks are the backbone of the artificial intelligence (AI) and machine learning algorithms that are already revolutionizing air traffic management (ATM) systems, and there is great potential for applying these technologies more widely in the future.

What is a neural network?

Neural networks are a sub-class of systems within the overall field of machine learning. Experts define a neural network as a computational model consisting of learning algorithms that function similarly to the way neurons in the human brain communicate through synapses to enable normal bodily functions.

Nvidia, known for supplying computers for autonomous cars and drones, defines the term as «a biologically inspired computational model that is patterned after the network of neurons present in the human brain» and «can also be thought of as learning algorithms that model the input-output relationship.»

A neural network can be trained to understand the data that it is continuously fed, or input, and can then process and generate intelligent decisions or answers to the complex problems that engineers have designed it to solve, or output. Under this input-to-output method, the neural network uses what are known as neural layers, which come in three types.

These include an initial input layer where data is injected into the neural network, intermediate or hidden layers where the computation between the input and output layers occurs, and an output layer that produces results in the form of actionable information.
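
As a minimal sketch of these three layer types, the Python fragment below passes data from an input layer through one hidden layer to an output layer. The weights are random placeholders, purely for illustration, not a trained network:

import numpy as np

# Data enters at the input layer, is transformed by a hidden layer,
# and leaves the output layer as an actionable result.
rng = np.random.default_rng(0)

x = rng.random(8)                        # input layer: 8 raw data values
W_hidden = rng.standard_normal((16, 8))  # weights into the hidden layer
W_output = rng.standard_normal((3, 16))  # weights into the output layer

hidden = np.tanh(W_hidden @ x)           # hidden layer: intermediate computation
output = W_output @ hidden               # output layer: e.g. scores for 3 answers
print(output)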
The artificial intelligence and machine learning engineering community has also classified two primary forms of neural networks. A feed-forward neural network is the simpler version, where information is input, transformed by neural layers and output in a one-way, forward-facing cycle. A more advanced form is the recurrent neural network, which uses memory and feedback loops, continuously re-injecting key data and events it has learned from in a more dynamic process.
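
The recurrent idea can be sketched in a few lines as well. In contrast to the one-way pass above, the hidden state is fed back in at every step, so earlier events influence later outputs (again with random placeholder weights):

import numpy as np

# A tiny recurrent cell: the state computed at each step is fed back
# in at the next step, giving the network a form of memory.
rng = np.random.default_rng(1)
W_in = rng.standard_normal((4, 2))   # weights for the new input
W_rec = rng.standard_normal((4, 4))  # weights for the feedback loop

state = np.zeros(4)
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    state = np.tanh(W_in @ np.array(x) + W_rec @ state)  # depends on the old state
print(state)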

To classify images, convolutional neural networks (CNNs), which belong to the first of these two forms, are used. A CNN is a deep learning neural network designed for processing structured arrays of data. Convolutional neural networks are widely used in computer vision and have become the state of the art for many visual applications; they have also found success in natural language processing for text classification.

The key building block in a convolutional neural network is the convolutional layer. We can visualize a convolutional layer as many small square templates, called convolutional kernels, which slide over the image and look for patterns. Where the part of the image under the kernel matches the kernel's pattern, the kernel returns a large positive value; where there is no match, it returns zero or a smaller value.
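
A direct (if slow) way to see this sliding-template behaviour is the sketch below, which convolves a toy image with a hand-written vertical-edge kernel:

import numpy as np

# Slide a small kernel over the image; each output value is the match
# score between the kernel and the image patch underneath it.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.array([[1.0, -1.0],   # responds where bright meets dark
                   [1.0, -1.0]])
image = np.zeros((5, 5))
image[:, :2] = 1.0                # bright left half, dark right half
print(convolve2d(image, kernel))  # large values exactly along the edge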

Convolutional neural networks are very good at picking up on patterns in the input image, such as lines, gradients, circles, or even eyes and faces. It is this property that makes convolutional neural networks so powerful for computer vision. Unlike earlier computer vision algorithms, convolutional neural networks can operate directly on a raw image and do not need any preprocessing.

CNNs contain many convolutional layers stacked on top of each other, each capable of recognizing progressively more sophisticated shapes. With three or four convolutional layers it is possible to recognize handwritten digits, and with 25 layers it is possible to distinguish human faces. Convolutional layers alternate with activation layers, which add non-linearity to the network, and/or pooling layers, which downscale the images.
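
In PyTorch (one common choice; any deep learning framework would do), the alternating convolution/activation/pooling pattern looks like the sketch below, at roughly the depth the text mentions for digit recognition:

import torch
from torch import nn

# Three conv/activation/pooling stages followed by a classifier head,
# roughly the depth at which handwritten digits become recognizable.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 3 * 3, 10),   # ten digit classes
)

x = torch.randn(1, 1, 28, 28)    # one 28x28 grayscale image
print(model(x).shape)            # torch.Size([1, 10])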

The enabling elements for today's neural networks are the cutting-edge graphics processing units (GPUs) and processors being developed by companies such as Nvidia, Intel and NXP, among others. In May 2019, Intel published a benchmark comparing ResNet-50 running on the latest generation of Intel’s Xeon Scalable processors against Nvidia’s Tesla V100, confirming ResNet-50’s ability to use deep learning software to achieve a throughput of 7878 images per second on Xeon Platinum 9282 processors.
«We achieved 7878 images per second by simultaneously running 28 software instances each one across four cores with batch size 11. The performance on Nvidia Tesla V100 is 7844 images per second and Nvidia Tesla T4 is 4944 images per second per Nvidia's published numbers as of the date of this publication,» Intel wrote in the benchmark.

Neural network features were also a key aspect of the debut of NXP Semiconductors' i.MX 8M Plus application processor at the annual Consumer Electronics Show (CES) 2020 in Las Vegas. NXP indicates that this is the first i.MX family to feature a dedicated neural processing unit, capable of learning and inferring inputs with almost no human intervention.

Research and development around the integration of neural networks into aeronautical systems and networks can be found as far back as the early to mid 1990s. As an example, a 1996 paper published by Charles C. Jorgensen for the NASA Ames Research Center describes a neural network’s ability to deal with nonlinear and adaptive control requirements, parallel computing and accurate function mapping as advantageous in applications «including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs.»

Aerospace adoption of neural networks

Developments in recent months at some of the largest names in aerospace, along with up-and-coming providers of artificial intelligence and machine learning platforms, have shown how the industry is likely to adopt neural networks in the near future.

FCAS project

Airbus and its Future Combat Air System (FCAS) development partner Dassault Aviation are targeting the use of neural networks as a key enabler of the next-generation air combat development program involving France, Germany and Spain to develop a system of fully automated remote air platforms and sixth-generation fighter jets.

Airbus and the Bonn, Germany-based Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE have created an independent panel of experts on the responsible use of new technologies to define and propose ethical as well as international legal «guard rails» for what is Europe’s largest defense project.

On May 14, 2020, Airbus and FKIE held a virtual working group meeting, featuring members of an independent panel of experts, on the responsible use of new technologies in the design of FCAS.
Established last year, the panel includes members from Airbus, the German Ministry of Defense, the German Ministry of Foreign Affairs, foundations, universities, and think tanks. The panel is to aid in the development of guidelines for the ethical use of artificial intelligence and autonomy in the FCAS program, which is to feature a sixth-generation manned fighter and unmanned «remote carrier» platforms controlled by the pilot of the manned fighter. Such requirements are to ensure meaningful human control of FCAS functions.

Enabling the manned and unmanned teaming of FCAS will be an «Air Combat Cloud,» which is to integrate sensor data. Civil functions are also to benefit from FCAS down the line. FCAS, which thus far involves France, Germany, and Spain, is to replace Dassault's Rafale fighter and the Airbus/BAE Systems/Leonardo-built Eurofighter.

«I have clear requirements on the table, how to design such kind of a product [FCAS] to fly safely in airspace, but I have very limited requirements which are driven by our ethical compliance,» Thomas Grohs, chief architect of FCAS at Airbus Defence and Space, said during the May 14 virtual meeting. «I'm really looking forward to have such kind of a requirements listing established together with my colleagues and this forum and others participating — a requirements list that allows me to design the system to be compliant with such kind of requirements.»

Such requirements will set up a framework for such FCAS features as neural networks and human control of FCAS functions — a so-called human «circuit breaker» to head off potentially fatal machine errors.

«I have to make the system flexible from a neural network point of design because I need to train such neural networks on their specific behavior,» Grohs said during the May 14 virtual working group meeting. «However, this behavior may differ from the different users that may use the equipment from their ethical understanding. This is for me then driving a design requirement that I have to make the system modular with respect to neural network implementation, that those are loadable, pending one that uses this from his different ethical understanding. Such are the things I need to look at and to see can we find proper solution to make this happen.»

ANSYS and Airbus Defense and Space told Avionics International last June that the companies are developing an AI design tool to create the embedded flight control software for FCAS. Airbus has said that it is also creating a new version of the ANSYS SCADE aerospace systems simulation software configuration. The upgraded version of the tool will use artificial intelligence algorithms as a replacement for traditional model-based systems development in order to facilitate FCAS manned-unmanned teaming and the safe flight of FCAS «remote carriers.» An ANSYS official said that most of the academic and industry research behind the use of AI for software development involves the use of convolutional neural network (CNN) input and output layers.
In terms of human «circuit breakers» for FCAS, «not everything I could realize from a technical perspective to be fully automated...should be automated,» Grohs said. «I should have decisive break points in there that could be activated from an ethical perspective of the human 'in the loop' or, at least, 'on the loop' to be able to take proper decisions from an ethical perspective. Those requirements need to be laid out and be plotted against each of the functional chains for the potential users that later on will use the product.»

Ulrike Franke, a member of the FCAS experts panel and a policy fellow at the European Council on Foreign Relations, said that thus far there have been «pronounced divergences» in European views on the employment of military AI and autonomous weapons systems. Franke said that «France appears to be more open» to such use, while Germany is «more cautious» and that one challenge for FCAS will be «how to reconcile these differences.» Possible resolutions include the establishment of «red lines» for machine decision making or providing measures for how much autonomy FCAS sub-systems can have.

For its part, Germany wants to retain human decision making in FCAS targeting. German Air Force Brig. Gen. Gerald Funke, the FCAS project leader for the German Ministry of Defense, has written that Germany «will not accept any technical concept that would give any system the possibility to authorize the death of another person solely on the basis of the logic of an algorithm.»

«Human beings will remain the sole determinants, responsible for decisions and all their consequences!» Funke wrote.

During the May 14 virtual working group meeting, Funke said that it is still too early in the FCAS concept phase to know whether the FCAS manned fighter will be a one-seat or two-seat design to guarantee sufficient human control and that such a decision will become clearer «when we know what are the roles of the human in the vehicle.»

«So far, I would guess it's more one-seater than a two seater, but we leave it open,» Funke said. «We have not decided it yet, apart from my side.»
Rüdiger Bohn, the deputy federal government commissioner for disarmament and arms control at the German Foreign Ministry, said that the Airbus/Fraunhofer FKIE initiative «is an excellent opportunity for Europe to influence the global policy debate on international arms control solutions by developing industry standards, for instance on the military use of AI and on how human control can be programmed into the design of new weapons systems.»

Airports and air traffic control

Assaia, a Zurich-based supplier of artificial intelligence solutions, uses image recognition algorithms powered by Nvidia’s GPUs in and around airport terminals to capture video of airplane turnarounds and train a neural network on it, helping airlines and aircraft service and maintenance vendors eliminate actions or activities on the airport surface that might lead to delays.

«We provide a real-time dashboard that can help them understand whether they [airlines] need to intervene [in the activities of their suppliers] or not — like did the gasoline arrive or not,» said Max Diez, founder and CEO of Assaia.

The startup trained its neural networks on several years’ worth of video from airfields around the world. The neural nets understand how different objects on the airfield look, move and interact, said Nikolay Kobyshev, CTO at Assaia.

Their neural network generates timestamps associated with each individual airplane turnaround activity captured on video, including baggage transfers, cleaning, catering and boarding among other processes, in addition to coordinates, status, behaviour and interaction of objects or people on the apron. The timestamps are then consumed by machine learning algorithms and fused with other relevant data sources to predict the timing of key events, such as refueling, during an aircraft turnaround process.
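
A hypothetical sketch of this fusion step might look like the following. This is not Assaia's actual pipeline; the activity names and average durations are invented for illustration. Off-block time is estimated from vision-derived activity start timestamps and assumed historical durations:

from datetime import datetime, timedelta

# Hypothetical illustration: given the observed start times of
# turnaround activities, estimate off-block time as the latest
# predicted finish, using assumed historical average durations.
HISTORICAL_MINUTES = {
    "baggage": 18,
    "catering": 25,
    "refueling": 20,
    "boarding": 30,
}

def predict_off_block(observed_starts):
    """Latest predicted finish across all activities seen so far."""
    return max(
        start + timedelta(minutes=HISTORICAL_MINUTES[activity])
        for activity, start in observed_starts.items()
    )

now = datetime(2020, 5, 1, 12, 0)
print(predict_off_block({"refueling": now,
                         "boarding": now + timedelta(minutes=5)}))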

As a starting point, the company relies on video streams from existing CCTV cameras to feed the AI. This allows its customers to realize the benefits of Assaia's Apron AI suite without a large upfront investment. Thanks to integrations with major video management systems such as Genetec and Milestone, as well as Assaia's partnerships with leading camera manufacturers such as Axis, a customer can start generating data within a matter of hours.
Assaia’s Prediction Engine works hand in hand with the Apron AI. It ingests the video stream, the freshly mined structured data and large amounts of historic data. The footage is processed locally or in a cloud provided by the company. Based on this, it accurately predicts turnaround events, such as an aircraft’s off-block time (OBT), while the turnaround is still underway.

In May 2020, Seattle Tacoma International Airport (SEA) signed an agreement with Assaia to start deploying its GPUs throughout the airport. Tim Toerber, Airline Resource and Scheduling Manager at SEA, said: «Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety for airside operations. Its 24/7 automatic monitoring and reporting capability will help the airport, airlines, air traffic managers and ground handlers to better understand safety-related issues and thus reduce the number of incidents on airside.»

Neural network-powered image recognition is also in use at Heathrow International Airport, where Searidge’s Aimee artificial intelligence and machine learning platform uses a network of high-fidelity cameras along the runway to monitor approaches and landings, alerting controllers when the runway is safely cleared for the next arrival.

«AI has revolutionized the way we manufacture systems for air traffic management,» said Marco Rueckert, head of innovation at Ottawa-based Searidge Technologies. «We don't aim to replace the human air traffic controller, but what we're really trying to do is enhance the capability of the human by providing situational awareness and decision support tools so the human can do their job more effectively, taking away some of the mundane tasks so that they can focus on the more complex decisions.»

Challenges in developing such AI systems for aviation include the safety-critical nature of aviation; the validation of AI performance, including understanding failure modes; verification that the use of AI is safe; and understanding how AI makes decisions. Since Searidge Technologies' work on AI began in 2012, the company has learned that not all data for AI is useful, and that data for AI has to be pre-processed and labelled, not dumped in expectation of an output, Rueckert said.
The world's first 4K digital tower lab at London's Heathrow International Airport was co-developed by Searidge with a goal to recover 20 percent of the landing capacity lost at the airport due to low-visibility conditions. Nine 4K cameras on the north side of the tower and nine 4K cameras on the south side provide runway coverage. AI comes into play when inclement weather impedes a controller's view of the runway from the 100-foot-tall tower at Heathrow.

The images from those cameras are then fed live into Aimee, which can interpret the images, track the aircraft and then inform the controller when it has successfully cleared the runway. The controller then makes the decision to clear the next arrival. Building such AI-powered capability required a fusion of different data sets and the use of Aimee to correctly help controllers distinguish between different aircraft types, according to Rueckert.

«What sometimes happens is because the radar is at a 1 Hz update rate and the visual system is at 25 Hz, the radar source either lags behind or is a little inaccurate in the triangulation. So we use the radar as the initial source and use the AI to correct,» Rueckert said.
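
The update-rate mismatch Rueckert describes can be sketched with a simple complementary filter. This is an assumption for illustration, not Searidge's implementation: the slow radar fix seeds the estimate, and the fast visual track corrects it between radar updates:

# Blend the slow 1 Hz radar fix with the fast 25 Hz visual track.
def fuse(radar_pos, visual_pos, alpha=0.8):
    return alpha * visual_pos + (1 - alpha) * radar_pos

radar_x = 1000.0                       # metres along the runway, updated at 1 Hz
for frame in range(25):                # 25 visual frames within one radar tick
    visual_x = 1000.0 + 3.0 * frame    # aircraft moving roughly 75 m/s
    fused_x = fuse(radar_x, visual_x)  # tracks the aircraft, not the stale fix
print(round(fused_x, 1))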

Searidge trialed the AI system over three months, tracking 50,000 aircraft movements.

Aimee is capable of simplifying the process of configuring and training artificial neural networks with the large and complex data sets that enable the use of AI for things like image and geo-location tracking of an aircraft.

At Heathrow, Aimee’s image segmentation neural network is using archived video footage from the cameras that include live images of aircraft arriving and crossing over a runway threshold. NATS, the air navigation service provider (ANSP) for the U.K., wants to use the neural network of aircraft arrivals to assist controllers at Heathrow when the tower’s vision is lost due to clouds or fog.

«At Heathrow Aimee runs on GPU workstations to accelerate the image segmentation Neural Network. The outputs of the Aimee sub-systems can then either be displayed on Searidge's CWP (as is the case at LHR) or fed into external systems,» Rueckert said.
Other airports and air traffic management systems elsewhere in Europe are also poised to adopt AI technology in the future. In 2019, Eurocontrol held its inaugural forum on aviation and AI, which led to the publication of its first «Fly AI» report in March 2020. The report outlines how air navigation service providers, airlines, airports and other stakeholders across the European air traffic ecosystem see the potential use of AI for better use of aviation data, leading to more accurate predictions and more sophisticated tools.

Some of the first ATM applications for AI featured in the report are air traffic control planning and flow management, where Eurocontrol trials already show a 30 percent improvement in trajectory prediction. Eurocontrol also sees the potential use of AI in helping establish surveillance technology for drones operated commercially beyond visual line of sight in European airspace.

Searidge has also been infusing AI into the workflows at other airports around the world. As with Assaia's Apron AI suite, the focus is on making the aircraft turnaround process more efficient for both airlines and airport workers. At Dubai International Airport, Aimee tracks and detects certain personnel and events associated with a typical aircraft turnaround process in real-time, to include monitoring of jet bridge movements, catering trucks and service vehicles among other workflows. That data is then compared with historical data associated with the same events, to predict whether an aircraft will meet its scheduled departure time.

In the future, Rueckert believes a key focus for Searidge and others integrating more AI capability into air traffic management could be the use of AI for communication between pilots and controllers.

«The space we’re really going into is voice. The primary communication between the pilots and air traffic controllers is still the radio channel, and it’s quite hard to hear; especially when we go to the UK and work with NATS, you have about 50 different accents in Scotland alone,» Rueckert said. «Not necessarily so that you just have the autonomous aircraft talking to the autonomous ATC, maybe not in five years, but at least to some point where we can do some error checking and say you told this aircraft to go to runway 1 and it’s actually taxiing the wrong way. Just putting another safety layer in there.»
Latest autopilot and advanced certification concepts

The near-term potential for the use of neural networks in aircraft systems took a major step forward in April, with the publication of a joint report by EASA and Daedalean that addresses the challenges posed by the use of neural networks in aviation. Daedalean, like Assaia based in Zurich, Switzerland, has nearly 40 software engineers, avionics specialists and pilots working on what it believes will be the aviation industry’s first autopilot system to feature deep convolutional feed-forward neural networks.

Their public report (which redacts some confidential information) is the product of an EASA innovation partnership contract that ran from June 2019 to February 2020, called «Concepts of Design Assurance for Neural Networks.» The goal of the partnership was to explore the challenges of design assurance for neural networks, with an eye toward eventually allowing machine learning algorithms and other forms of AI in safety-critical applications.

Specific near-term use cases, system architectures and aeronautical data processing standards for achieving aircraft certification are defined in the report. The company has been actively flight testing the use of a neural network for visual landing guidance in air taxis, drones and a Cessna 180 over the last year. In 2019, Melbourne, Florida-based avionics maker Avidyne began flight testing Daedalean’s neural networks integrated into cameras and a linked autopilot system. Honeywell Aerospace also signed a technological partnership with the company to develop systems for autonomous takeoff and landing, GPS-independent navigation and collision avoidance. Although many of its concepts apply to machine learning algorithms in general, the report focuses primarily on deep neural networks for computer vision systems — the basis of Daedalean’s autopilot system.

Image recognition, segmentation and data processing for visual landing guidance is the nearest-term use case defined by Daedalean in the joint report published with EASA. The actual function of the neural network is described as a form of object detection, in which the images analyzed by the network are determined to contain the four corners or outline of a runway that is safe to land on.
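
The output of such a detector might be structured along the following lines. This is an assumed format for illustration only; the field names and confidence threshold are not taken from the report:

from dataclasses import dataclass

# Assumed output structure: four corner coordinates of a candidate
# runway in the image, plus the network's confidence in the detection.
@dataclass
class RunwayDetection:
    corners: list          # four (x, y) image coordinates
    confidence: float      # how sure the network is

def is_safe_to_land(det, threshold=0.95):
    """Guidance is only issued for a complete, high-confidence detection."""
    return len(det.corners) == 4 and det.confidence >= threshold

det = RunwayDetection(
    corners=[(120.0, 340.0), (510.0, 338.0), (560.0, 460.0), (80.0, 465.0)],
    confidence=0.97,
)
print(is_safe_to_land(det))    # True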

«Machine learning . . . provides major opportunities for the aviation industry, yet the trustworthiness of such systems needs to be guaranteed,» the report states, noting that traditional development assurance frameworks are not well adapted to complex machine learning algorithms, which are not predictable or explainable in the same way as conventional software algorithms.
According to the report, the joint undertaking between EASA and Daedalean made progress on several essential aspects of «learning assurance,» which is one of four building blocks that structure the AI trustworthiness framework in EASA’s AI Roadmap. That document, released earlier this year, describes learning assurance as a way to «open the ‘AI black box’ as much as practicable» by gaining confidence that a machine learning application supports the intended functionality.

Notably, the project with Daedalean resulted in the development of a W-shaped learning assurance life cycle, which EASA says «will serve as a key enabler for the certification and approval of machine learning applications in safety-critical applications.»

Crucially, the report assumes a system architecture which is non-adaptive — in other words, one that is frozen at a certain stage of development and which does not continue to learn during operation. This «creates boundaries which are easily compatible with the current aviation regulatory frameworks,» the report states.

«Our collaboration with EASA has created a solid foundation that has a realistic chance of paving the way for future use of [machine learning] in safety-critical applications in aviation and beyond,» said David Haber, head of machine learning at Daedalean, in a press release.

«We have considered non-trivial problems, yet more work is required to bring neural networks to full certification,» he continued. «We look forward to continuing our work with EASA.»

According to EASA, its next step will be to «generalize, abstract, and complement these initial guidelines, so as to outline a first set of applicable guidance for safety-critical machine learning applications.» Daedalean will try to release a design assurance level (DAL) C version of its autopilot system by 2021, while continuing work on an eventual DAL-A version.