Supporting Certified Avionics in the Multicore Era
How Microchip Technology Addresses Determinism, Safety and Certification Rigor
The move to multicore processors is reshaping how avionics systems are designed and certified. This blog post explains why multicore processors matter, what challenges they introduce and how choosing the right processor architecture early can simplify certification and reduce overall program risk.
The aviation industry’s move toward multicore processors is driven by necessity. Increasingly complex avionics workloads, higher levels of autonomy, sensor fusion and advanced communications demand computational density that single‑core architectures can no longer provide within acceptable size, weight and power (SWaP) envelopes. Multicore processors offer a compelling answer, but they also introduce a fundamental certification risk: how to demonstrate deterministic behavior in the presence of shared resources.
This challenge is now well understood by regulators and system designers alike. CAST‑32A reframes this risk by introducing the concept of interference channels and requiring applicants to identify, bound and mitigate them as part of the system safety case. FAA Advisory Circular AC 20‑193 and its EASA equivalent, AMC 20‑193, build on this foundation, providing practical guidance on acceptable means of compliance, including architectural mitigation, robust partitioning strategies and verification evidence proportional to Design Assurance Level (DAL). In effect, certification authorities are no longer satisfied with software partitioning alone. Determinism must be demonstrated across hardware, firmware and software layers, and that has profound implications for processor selection and platform architecture.
Understanding Interference as an Aspect of Design and Certification
At the heart of CAST‑32A and AC/AMC 20‑193 is the recognition that multicore interference is not an abstract software concern—it is a system‑level phenomenon arising from contention for shared hardware resources. Shared caches, memory controllers, interconnect fabrics, DMA engines and even power management features can all create execution‑time variability that complicates traditional worst‑case execution time (WCET) assumptions. Certification guidance reflects three core risks.
- Loss of determinism
Shared resources such as caches, memory controllers and interconnects introduce execution time variability that is difficult to bound analytically.
- Interference between mixed criticality software components
Mixed criticality applications—DAL A flight control alongside DAL C/D mission or maintenance functions—must not adversely affect one another.
- Certification evidence burden
Applicants must demonstrate that interference is bounded, WCET assumptions remain valid and failure modes are predictable despite limited observability into complex processor internals. Attempting to “analyze away” these challenges after platform selection is both risky and costly.
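To make the first and third risks concrete, the sketch below contrasts a task's execution-time samples measured in isolation with samples taken under deliberate cache contention, then pads the observed worst case with a safety margin. All numbers, and the 20% margin policy, are illustrative assumptions only—real WCET evidence requires qualified analysis tools and hardware-specific measurement campaigns.

```python
# Illustrative only: bounding WCET under interference with hypothetical data.
# The sample values and the 20% margin are assumptions for this sketch, not
# figures from CAST-32A, AC 20-193 or any real measurement.

def observed_wcet(samples_us, margin=0.20):
    """Return a padded worst-case bound from measured execution times (us)."""
    return max(samples_us) * (1.0 + margin)

# Execution-time samples (microseconds) for the same task:
isolation = [102, 105, 101, 104]    # task running alone on one core
contention = [131, 158, 149, 173]   # co-runner thrashing the shared cache

bound_alone = observed_wcet(isolation)
bound_contended = observed_wcet(contention)

# The interference factor is how much the bound grows once shared
# resources are contended -- the quantity certification asks you to bound.
factor = bound_contended / bound_alone
print(f"WCET bound alone:     {bound_alone:.1f} us")
print(f"WCET bound contended: {bound_contended:.1f} us")
print(f"Interference factor:  {factor:.2f}x")
```

The point of the exercise is not the arithmetic but the gap: a bound derived only from isolated runs silently under-estimates the contended worst case, which is exactly the failure of traditional single-core WCET assumptions that the guidance targets.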
The Role of Hardware in Interference Mitigation
Both AC/AMC 20‑193 and CAST‑32A are explicit: no single interference mitigation technique is sufficient on its own. Robust partitioning at the operating system or hypervisor level is necessary, but not sufficient, if the underlying hardware allows unbounded contention. This has driven increased scrutiny of RTOS behavior, hypervisor design and even tool qualification. Verification activities must now show that interference is either prevented by design or bounded by analysis and test. In practice, this often means combining ARINC 653‑style partitioning with hardware‑assisted isolation. The net result is a tighter coupling between processor architecture, system software and certification artifacts—an environment where silicon choices made early in the architecture phase directly impact program risk, cost and schedule.
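ARINC 653-style partitioning boils down to fixed time windows inside a repeating major frame, so that which partition owns the CPU at any instant is a pure function of time. The sketch below models that idea; the partition names, window sizes and 20 ms major frame are hypothetical, and a certified configuration would be produced by a qualified RTOS configuration tool rather than hand-written code.

```python
# Illustrative ARINC 653-style time partitioning: fixed windows in a
# repeating major frame. All names and durations here are hypothetical.

MAJOR_FRAME_MS = 20
# (partition, offset_ms, duration_ms) -- windows tile the frame, no overlap
SCHEDULE = [
    ("FLIGHT_CTRL_DAL_A", 0, 10),   # highest-criticality partition
    ("MISSION_DAL_C",     10, 6),
    ("MAINT_DAL_D",       16, 4),
]

def active_partition(t_ms):
    """Return the partition owning the CPU at absolute time t_ms."""
    phase = t_ms % MAJOR_FRAME_MS   # the schedule repeats every major frame
    for name, offset, duration in SCHEDULE:
        if offset <= phase < offset + duration:
            return name
    raise RuntimeError("schedule does not cover the major frame")

# Determinism check: the same phase maps to the same partition in every
# major frame -- a DAL C/D window can never preempt the DAL A window.
assert active_partition(3) == active_partition(3 + 5 * MAJOR_FRAME_MS)
```

The guidance's caveat applies directly to this model: the schedule guarantees who runs when, but not how fast they run—if a DAL D partition's earlier window leaves the shared cache polluted, the DAL A window still pays for it, which is why hardware-assisted isolation must accompany time partitioning.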
Architectural Implications for Multicore Platforms
These requirements place new emphasis on processor architectures that make shared resource behavior observable, controllable and analyzable by design. Microchip’s approach to multicore avionics platforms is structured around five architectural principles aligned with these certification objectives:
- Deterministic resource ownership
Ability to assign memory and I/O resources to support mixed DAL criticality
- Observability of shared resource behavior
Hardware visibility into contention and utilization to support defensible WCET analyses
- Lock-step execution capability
Core-pairing to support higher integrity safety architectures and improved fault detection and isolation
- Hardware-enforced security aligned with DO-326A / DO-356A
Supporting emerging avionics cybersecurity requirements
- Configuration at boot time
Enabling reuse of hardware across programs while preserving certification constraints
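The first and last principles—deterministic resource ownership fixed at boot time—can be pictured as a static configuration table plus an automated isolation check. The sketch below is a hypothetical model: the partition names, cores, addresses and the check itself are illustrative assumptions, and real platforms express this in vendor-specific boot or configuration data rather than Python.

```python
# Illustrative static resource ownership, fixed at boot: each partition gets
# a dedicated core and memory region. Names, addresses and sizes are
# hypothetical assumptions for this sketch.

PARTITIONS = {
    # name: (core, base_address, size_bytes)
    "FLIGHT_CTRL_DAL_A": (0, 0x8000_0000, 0x0010_0000),
    "MISSION_DAL_C":     (1, 0x8010_0000, 0x0008_0000),
    "MAINT_DAL_D":       (2, 0x8018_0000, 0x0004_0000),
}

def check_isolation(partitions):
    """Fail if any two partitions share a core or overlapping memory."""
    items = list(partitions.items())
    for i, (a, (core_a, base_a, size_a)) in enumerate(items):
        for b, (core_b, base_b, size_b) in items[i + 1:]:
            assert core_a != core_b, f"{a} and {b} share core {core_a}"
            # Regions [base, base + size) must be disjoint
            assert base_a + size_a <= base_b or base_b + size_b <= base_a, \
                f"memory regions of {a} and {b} overlap"
    return True

assert check_isolation(PARTITIONS)
```

Because the assignment is fixed before any application code runs, the isolation argument becomes a property of a reviewable configuration artifact rather than of runtime behavior—precisely the kind of by-design evidence that reduces the analysis burden under AC/AMC 20‑193, and that survives hardware reuse across programs as long as the configuration is preserved.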
This approach aligns with certification guidance by ensuring that interference effects are observable, controllable and defensible rather than probabilistic or emergent. The resulting system-level capabilities reduce the need for avionics developers to construct bespoke multicore certification strategies, lowering program risk and accelerating time to certification.

These realities shift multicore certification from a purely software and analysis problem to one that is fundamentally architectural. Not all multicore processors are created equal in this regard—subtle differences in how shared resources are managed, observed and controlled can have outsized impact on certification effort and program risk.
In Part 2, we will examine what a multicore platform designed with certification in mind looks like in practice, using the PIC64HX architecture as a concrete example.