One Channel, Three Solutions: How Tactile Communication Saves Lives in F-35, Apache, and F-16

  • Writer: Editorial Team


Image: a helmeted pilot in a cockpit with a futuristic digital display, representing tactile communication and the cockpit of the future.

Three military aircraft. Three fundamentally different cockpit environments. Three different routes to the same cognitive failure point.


In our previous Blog Post, we examined how the F-35 Lightning II, AH-64 Apache, and F-16C/D Fighting Falcon each arrive at cognitive overload through fundamentally different mechanisms.


This article continues where that analysis left off, showing how a single sensory channel, the tactile one, addresses each of those bottlenecks on its own terms.

The earlier conclusion was clear: no platform-specific fix can resolve a problem that is structural to the human operator. The visual and auditory channels on which current cockpits depend are routinely saturated under operational conditions, and adding further visual or auditory information does not relieve the load on those channels — it intensifies it.


That left one open question: which channel can carry information into a cockpit when vision and hearing are already operating at capacity?

Neuroscience answers consistently — the somatosensory channel.


Brain imaging studies show that sensory modalities compete for processing capacity primarily within themselves: visual inputs interfere most with other visual inputs, and auditory inputs interfere most with other auditory inputs — a pattern described by Multiple Resource Theory, which we explored in more depth in our Blog Post: A Multiple Resource Approach to Pilot Safety.


Read the full analysis!


Tactile signals, processed in the somatosensory cortex, arrive via a pathway that is largely independent of the visual and auditory channels carrying the primary demands of flight. This independence is the foundation for the platform-by-platform analysis that follows.


SOMATOSENSORY CHANNEL
The neural pathway that processes information from receptors in the skin, muscles, and joints — including pressure, vibration, and body position. Signals travel via fast-conducting A-beta nerve fibres to the somatosensory cortex, with minimal intermediary processing compared to vision.

The analysis presented draws from Pilot Awareness: The Case for Tactile Communication in Military Aviation, the most recent whitepaper we have published.


Access the full whitepaper!



F-35 Lightning II

Maintaining situational engagement during automated flight.

The challenge associated with the F-35 is not insufficient information but excessive automation.


Sensor fusion, autonomous fuel management, and threat prioritisation shift the pilot from active controller to passive monitor — with vigilance falling measurably after as little as 20 minutes. When the automation encounters an edge case, the surprised pilot must transition abruptly from monitoring back to control, within seconds that may no longer exist.


Tactile communication addresses this challenge through a specific design property: graduated, multi-threshold cueing. 


Rather than activating only at critical limits, tactile alerts can scale in intensity or frequency as successive thresholds are crossed.

ILLUSTRATIVE EXAMPLE A pilot deviating from a prescribed flight path may receive a brief, low-intensity tactile signal as the deviation crosses an initial threshold, with cueing intensifying as the deviation widens. Gradual altitude loss may trigger subtle feedback at an early warning boundary and stronger signals as the aircraft approaches critical minimums.
Diagram shows aircraft drift stages: Initial Warning, Caution, High Caution, Critical Alert, with tactile feedback intensity increasing.
Illustrative Example: Graduated Cueing
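As a minimal sketch of graduated, multi-threshold cueing (in Python, with threshold distances and intensity levels chosen purely for illustration, not drawn from any real flight-control system), the idea reduces to a lookup from deviation magnitude to cue strength:

```python
# Graduated, multi-threshold tactile cueing: an illustrative sketch.
# All threshold values and intensities are assumptions for demonstration,
# not parameters of any real avionics system.

# (deviation limit in metres, cue label, vibration intensity 0-1),
# ordered from most to least severe so the first match wins.
THRESHOLDS = [
    (50.0, "critical alert",  1.00),
    (30.0, "high caution",    0.70),
    (15.0, "caution",         0.40),
    (5.0,  "initial warning", 0.15),
]

def tactile_cue(deviation_m: float):
    """Return (label, intensity) for the highest threshold crossed, else None."""
    for limit, label, intensity in THRESHOLDS:
        if deviation_m >= limit:
            return label, intensity
    return None  # within tolerance: stay silent

print(tactile_cue(7.0))   # -> ('initial warning', 0.15)
print(tactile_cue(42.0))  # -> ('high caution', 0.7)
```

The design property that matters is visible in the table itself: the pilot feels the low-intensity early stages long before the critical boundary fires, so the final alert is never the first contact with the problem.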

This creates a continuous stream of low-cost awareness.

By the time a critical boundary is reached or crossed, the pilot has already been guided through earlier stages of the deviation. There is no abrupt interruption, only a smooth transition back into action. At its core, the tactile channel becomes a quiet companion to automation, maintaining orientation without demanding attention.


MECHANISM — PRE-ATTENTIVE PROCESSING
Tactile cues are processed as bottom-up signals: they engage attention at a pre-conscious processing level rather than requiring the operator to actively search for and interpret information. For an out-of-the-loop F-35 pilot, this property reduces the cognitive cost of returning to active control.

The same principle scales to future operational concepts. Programs such as Collaborative Combat Aircraft, which expand the pilot's role into managing multiple autonomous systems, show how this kind of adaptive, continuously scaled cueing becomes even more critical.



AH-64 Apache

Operating through degraded auditory and visual environments.

Rather than detachment, the Apache pilot experiences maximum engagement in environments that systematically degrade both vision and hearing.


In-cabin noise in military rotary-wing platforms reaches up to 102 dB. Under such conditions, 3D spatial audio localisation accuracy drops to just 53.1%. The IHADSS display removes depth perception by presenting a monocular image to one eye, while brownout conditions can reduce external visibility to near zero.


Under the same noise conditions in which audio localisation accuracy fell to 53.1%, vibrotactile cueing maintained 91.3% accuracy, with response times 30–35% faster than audio-only alerts.

The mechanism is straightforward: a vibration delivered to the torso does not compete with rotor noise, radio communication, or the IHADSS feed for processing capacity. It cannot be acoustically masked, and it does not depend on three-dimensional vision.


Beyond raw resilience, the body itself becomes the spatial reference frame.

Diagram of egocentric localization. Shows active vibronode (purple) indicating threat direction felt on body. Labels: front, left, right.
Illustrative Example: Egocentric Localisation

The human body provides a natural 360-degree spatial map: a vibration on the right shoulder does not require decoding — its location is its meaning.

This property, termed egocentric localisation, bypasses the cognitive steps required to interpret a radar display, parse verbal information, or read symbology, directing attention and motor response toward a threat with minimal conscious deliberation.
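The mapping is almost trivially simple, which is the point. A sketch in Python (the eight-node belt layout is an assumption for illustration):

```python
# Egocentric localisation: a relative threat bearing maps directly onto the
# nearest vibration motor on the torso, so the cue's location is its meaning.
# The 8-node belt layout is an illustrative assumption.

NODES = 8  # vibronodes spaced evenly around the torso; node 0 sits at the chest

def nearest_node(bearing_deg: float) -> int:
    """Map a threat bearing (0 = dead ahead, increasing clockwise) to a node index."""
    step = 360 / NODES
    return round((bearing_deg % 360) / step) % NODES

print(nearest_node(0))    # -> 0: threat ahead, chest node
print(nearest_node(90))   # -> 2: threat to the right, right-side node
print(nearest_node(350))  # -> 0: just left of the nose rounds back to the chest
```

There is no decoding stage between stimulus and direction; the function exists only to pick which motor fires.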


During high-workload scenarios, this becomes operationally decisive.

Spatial orientation information delivered through the body integrates with the proprioceptive system — the body's internal sense of position and movement — which continues to function when other cues conflict.

Research using a vibrotactile flight-envelope vest in spatial disorientation scenarios reported that pilots maintained more accurate flight control and higher situational awareness through tactile cues, with the largest gains observed in the conditions where conventional displays were least effective.


F-16C/D Fighting Falcon

Delivering information under sustained physical stress.

The F-16 pilot operates under repeated G-loads of 7–9G while hand-flying the aircraft, managing radar, directing weapons, and maintaining spatial awareness. Physical and cognitive demands are not parallel here — they amplify each other.


Two properties of tactile communication are particularly relevant to this profile.


MECHANISM 1 — PROCESSING SPEED

Tactile signals reach the brain up to 45 ms faster than visual stimuli.


This advantage derives from differences in transduction: mechanoreceptors convert physical pressure directly into neural impulses, whereas vision requires photochemical transduction in the retina before cortical processing begins.


SENSORY TRANSDUCTION
The conversion of a physical stimulus — light, sound, pressure — into a neural signal the brain can process. Each sense relies on a different mechanism.
- Vision uses photochemical transduction: light hitting the retina triggers a short chemical reaction before any nerve signal is produced.
- Touch uses mechanoreception: pressure or vibration is converted directly into a nerve signal, with no chemical step in between.

Because the visual pathway includes this extra step, tactile signals reach the brain roughly 45 ms sooner.

At 270 m/s, representative of fast-jet cruise, every 100 ms of faster response corresponds to 27 metres of additional decision space.

EMPIRICAL EVIDENCE — FOI / TNO CENTRIFUGE STUDY Nine Swedish Air Force fighter pilots performed threat-detection and interception tasks in the Swedish Dynamic Flight Simulator, a two-axis human centrifuge generating G peaks of up to +9 Gz. The addition of vibrotactile cues to visual threat alerts reduced detection-and-reaction times by approximately 200 milliseconds compared with visual cues alone.

This effect was observed even in participants who had received no formal training on the tactile system, suggesting that familiarisation and higher cognitive demand may further increase its operational benefit.
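Combining the figures above, the decision-space arithmetic is a one-line, back-of-the-envelope calculation:

```python
# "Decision space": metres of separation gained per millisecond of faster response.

def decision_space_m(speed_mps: float, time_saved_ms: float) -> float:
    """Distance the aircraft covers during the reaction time saved."""
    return speed_mps * time_saved_ms / 1000.0

print(decision_space_m(270, 100))  # -> 27.0 m, the figure quoted in the text
print(decision_space_m(270, 200))  # -> 54.0 m for the ~200 ms centrifuge-study saving
```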



MECHANISM 2 — CHANNEL INDEPENDENCE

Tactile and auditory reaction times are roughly comparable.

The comparative advantage of tactile communication over auditory alerts is not raw speed — it is channel independence: a tactile signal does not draw on processing capacity in modalities already loaded by other operational tasks.

For an F-16 pilot under sustained G-load — managing the muscular effort of breathing against G-suit inflation, restricted head movement, and the visual degradation associated with greying out — this property determines whether a warning is delivered into a saturated channel or an open one.


Preliminary evidence also suggests that the tactile channel remains functional under hypoxia: vibrotactile signals were still perceived by pilots when blood oxygen saturation dropped to 78%, suggesting the somatosensory pathway is comparatively preserved when higher-order vision and hearing begin to fail.


If you want to know more about the dangers of hypoxia, take a look at our previous Blog!



ONE CHANNEL, THREE SOLUTIONS

The comparison isn't an argument for tactile communication as some generic upgrade.

It is meant to show that a single underlying channel can resolve three structurally distinct problems, precisely because it operates through three distinct properties.

F-35 Lightning II

Graduated, pre-attentive cueing maintains pilot orientation during periods of automation, reducing the cognitive cost of returning to active control.

AH-64 Apache

Channel independence preserves directional accuracy when noise, vibration, and degraded visual environments compromise auditory and visual modalities.

F-16C/D Fighting Falcon

Processing speed and somatosensory resilience deliver warnings the pilot can still receive under sustained high-G load and hypoxia.


Across all three cases, the evidence consistently indicates that tactile cueing is most effective when integrated with, rather than substituted for, visual and auditory systems.



Multimodal cueing — visual, auditory, and tactile in combination — outperforms any single modality across the studies reviewed in the source whitepaper.



In a multimodal architecture, tactile cues guide spatial attention, vision provides detail and context, and audio carries semantic content.

Each modality compensates for the other’s weaknesses.
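To make that division of labour concrete, here is a toy sketch in Python (the alert fields and channel payloads are illustrative assumptions, not a real avionics interface) of how a single threat alert might be split across the three channels:

```python
# Multimodal routing: tactile orients spatial attention, vision carries detail
# and context, audio carries semantic content. All field names are illustrative.

def route_alert(alert: dict) -> dict:
    """Split one alert across the three sensory channels by what each does best."""
    return {
        "tactile": {"bearing_deg": alert["bearing_deg"]},   # where to attend
        "visual":  {"symbology": alert["type"],
                    "range_km": alert["range_km"]},          # detail and context
        "audio":   {"callout": f"{alert['type']}, {alert['clock']} o'clock"},  # meaning
    }

cues = route_alert({"type": "missile launch", "bearing_deg": 90,
                    "range_km": 12, "clock": 3})
print(cues["audio"]["callout"])  # -> missile launch, 3 o'clock
```

Each channel receives only the slice of the alert it handles best, which is the compensation pattern the studies describe.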


The cognitive bottleneck described in the previous article is not a design flaw specific to any single aircraft. It reflects the structural mismatch between the complexity of modern operational environments and the unchanged biology of the human operator.



Solving it does not require engineering a new kind of pilot. It requires meeting the brain on its own terms — through the one sensory channel that flight has not yet claimed.
