Is it safe to use Artificial Intelligence on aircraft? Can we trust AI to make the right decision in every circumstance? Beca’s Robert McGivern reaches the conclusion that yes - machine learning systems can be safe and certified for wider use in Aviation environments under existing regulations, with a few key limitations.
This article is the second part in our new series exploring how cutting-edge technology can make everyday better for the Aerospace sector. Here are Parts 1, 3 and 4.
First let’s set the scene: Machine Learning (ML), a branch of Artificial Intelligence (AI), is transforming countless industries, including aviation. Unlike the traditional software found in modern planes, ML enables aviation systems to make decisions or predictions by recognising patterns in data, rather than being explicitly programmed for each task. As the aviation sector explores new ways to integrate ML, questions naturally arise about its safety, certification and compatibility with existing regulations.
What is Machine Learning?
At its core, ML involves training a system on reference datasets. The development process begins with collecting diverse, representative data to account for the real-world scenarios the system might encounter.
Training involves iteratively assessing the ML model’s performance against the training data and adjusting its parameters (known as weights) to minimise errors. Once trained, ML systems with fixed weights are deterministic: they produce consistent outputs for identical inputs. However, challenges remain in understanding how these models arrive at their conclusions and in ensuring they perform safely when they encounter inputs that differ from their training data.
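To make that loop concrete, here is a minimal, non-aviation sketch in Python. The dataset, model and learning rate are purely illustrative: a tiny model’s weights are adjusted iteratively to reduce error against reference data, and once the weights are frozen the same input always produces the same output.

```python
# Illustrative sketch only: a tiny model trained by iteratively adjusting its
# weights to reduce error, then frozen so that inference is deterministic.
import numpy as np

rng = np.random.default_rng(seed=0)          # fixed seed: repeatable training run
X = rng.uniform(-1.0, 1.0, size=(200, 2))    # stand-in "reference dataset"
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]            # the pattern the model must learn

weights = np.zeros(2)                        # the adjustable parameters ("weights")
learning_rate = 0.1

for epoch in range(500):                     # iterative training loop
    predictions = X @ weights
    errors = predictions - y
    gradient = X.T @ errors / len(y)         # direction that reduces the error
    weights -= learning_rate * gradient      # adjust weights to minimise error

# Once the weights are fixed, inference is deterministic:
sample = np.array([0.25, -0.5])
assert np.array_equal(sample @ weights, sample @ weights)  # same input, same output
print("learned weights:", weights)
```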
How does ML differ from traditional aircraft software?
Traditional software relies on human-defined logic and rigorous assessments to verify correctness through design reviews, code inspections and testing. ML requires human oversight during the initial design and dataset preparation phases, but the learning process itself is autonomous. This independence makes it difficult to explain why an ML system produces a specific output.
This lack of explainability introduces key challenges such as:
- Generalisability: Can the system consistently function as expected across all possible inputs?
- Correctness assurance: How can we validate a system that adapts and learns in unpredictable ways?
Testing remains a vital approach, but ML demands far more extensive test cases to give confidence in its performance across all reasonable circumstances. Specifying, collecting and managing the training and test datasets therefore becomes the key focus for ML system developers.
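As a sketch of what that dataset-driven verification can look like in practice (the dataset records, weights and tolerances below are hypothetical, not from any real programme), each record in a controlled test dataset is replayed through the fixed-weight model and the output checked against an expected value and tolerance:

```python
# Illustrative sketch: replay a controlled test dataset through a fixed-weight
# model and flag any case whose output falls outside its stated tolerance.
import csv
import io

# In practice this would be a configuration-controlled dataset file; a few
# inline records stand in for it here so the sketch runs on its own.
TEST_DATA = io.StringIO(
    "case_id,x1,x2,expected,tolerance\n"
    "TC-001,0.25,-0.5,1.75,0.01\n"
    "TC-002,1.0,1.0,1.0,0.01\n"
)

WEIGHTS = [3.0, -2.0]  # frozen weights from the trained model (illustrative values)

def run_model(inputs):
    # Deterministic, fixed-weight inference
    return sum(w * x for w, x in zip(WEIGHTS, inputs))

failures = []
for row in csv.DictReader(TEST_DATA):
    actual = run_model([float(row["x1"]), float(row["x2"])])
    if abs(actual - float(row["expected"])) > float(row["tolerance"]):
        failures.append(row["case_id"])

print(f"{len(failures)} test case(s) outside tolerance")
```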
Can AI/ML be safe in Aircraft systems?
The short answer is Yes: With careful engineering and adherence to specific principles, AI/ML can be designed to operate safely in aviation systems. Key strategies for achieving safe operation include:
1. Determinism: Ensure the system produces consistent outputs for given inputs, avoiding randomness or self-modifying behaviours.
2. Extensive Testing: Conduct rigorous testing to validate outputs, focusing on both typical and rare input scenarios.
3. Parameter Data Items (PDIs): Treat neural network weightings as controlled parameters subject to certification standards like DO-178C.
4. Partitioning: Design the system to limit AI/ML functions to less critical applications (e.g., DAL-D).
5. External Control: Use traditional, deterministic systems to oversee AI/ML components, restricting their authority (e.g. roll limits for autopilots); strategies 3 and 5 are illustrated in the sketch below.
These measures allow AI/ML to enhance aviation capabilities whilst also maintaining crucial passenger and aircraft safety.
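The sketch below (hypothetical file names, hashes and limits throughout, not any real autopilot interface) illustrates two of these strategies: weights handled as a checksummed Parameter Data Item, and a conventional, deterministic monitor that bounds whatever roll command an ML component requests.

```python
# Illustrative sketch only: a checksummed Parameter Data Item and a simple,
# deterministic authority limit applied to an ML component's output.
import hashlib
import json

APPROVED_PDI_SHA256 = "placeholder-hash-recorded-at-certification"  # hypothetical
ROLL_LIMIT_DEG = 25.0  # authority limit enforced by traditional, deterministic logic

def load_weights(pdi_path):
    """Load the network weights only if they match the approved Parameter Data Item."""
    raw = open(pdi_path, "rb").read()
    if hashlib.sha256(raw).hexdigest() != APPROVED_PDI_SHA256:
        raise ValueError("Parameter Data Item does not match the approved version")
    return json.loads(raw)

def limit_authority(ml_roll_command_deg):
    # External control: clamp the ML request to the certified roll limit
    return max(-ROLL_LIMIT_DEG, min(ROLL_LIMIT_DEG, ml_roll_command_deg))

assert limit_authority(40.0) == 25.0    # an out-of-range ML request is bounded
assert limit_authority(-10.0) == -10.0  # in-range requests pass through unchanged
```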
Should AI/ML systems retrain post-certification?
The short answer is No: Retraining introduces an element of self-modification, potentially leading to unpredictable behaviour. Certification depends on stable, fixed-function systems, making retraining contrary to current principles. This field may evolve over time. Partitioned solutions, where traditional systems oversee recommendations from retraining AI/ML components, could become viable. For now, retraining remains unlikely to gain approval from airworthiness authorities.
Current certification standards: Can we certify AI/ML?
Yes: for non-critical functions, identified as “Software Level D”. AI/ML systems can currently be certified under Level D, which does not require verification of low-level requirements (LLRs), source code reviews or structural coverage. In contrast, the higher levels (A, B and C) require activities that do not translate well to ML implementations (e.g. code coverage, low-level requirements and tracing of Parameter Data Items). Meeting these will likely require new approaches.
One interesting development is that the international aviation standards organisations SAE and EUROCAE are developing standards for AI/ML systems. These standards will need to be endorsed by the relevant airworthiness authorities to become the Acceptable Means of Compliance for ML.
What are airworthiness authorities doing?
Key bodies like EASA and the FAA are actively exploring frameworks for AI/ML certification:
- EASA has published roadmaps, guidance papers and research findings, offering a valuable starting point for those interested in AI/ML integration.
- The FAA is developing its position but operates under a different mandate, which influences how its regulations will evolve.
- Other national authorities are likely to collaborate with the FAA and EASA before publishing their own guidance.
For now, progress is methodical and it may take years before comprehensive guidelines for certifying AI/ML beyond Software Level D are published.
Final thoughts: The path forward for AI/ML in Aviation
While Artificial Intelligence and Machine Learning hold incredible potential for aviation, integrating these technologies into certified aviation systems requires careful consideration, collaboration and adherence to safety principles. Current frameworks like DO-178C provide a solid baseline, but achieving certification for more critical functions will demand innovation in testing, explainability and regulatory alignment. As this exciting field matures, engineers, regulators and researchers will need to work together to unlock the full benefits of Artificial Intelligence and Machine Learning while maintaining the aviation sector’s uncompromising safety standards. For the industry, this marks both an exciting opportunity and a profound responsibility.
Learn more about Beca’s Defence & National Security capability, including our work in the Aerospace domain here.

About the Author
Robert McGivern
Technical Director - Software Engineering