There is a fundamental difference between human and machine intelligence. Increasingly, though, organisations look at artificial intelligence (AI) as an alternative to the people, teams and processes they already have: a way to cut people out of the loop and save on costs. For many organisations, the consequences of this are difficult to predict. But what about AI in high-stakes environments like defence and national security, where the consequences of decision-making are potentially life-threatening?
As AI becomes more integral to defence and national security, it is crucial that we consider how to approach decision-making when AI is involved. In particular, to what extent should AI replace human decision-makers, and to what extent should it support them?
Human vs machine intelligence
Integrating AI into military operations is attractive because it promises efficiency and precision. AI excels at data analysis and pattern recognition across large datasets, and it can detect patterns and make predictions that would be impossible for any human.
However, organisations looking to adopt AI must also acknowledge its limitations. While AI systems can process vast amounts of information quickly, they can also falter: unintended biases in AI algorithms can skew results, while over-reliance on AI predictions increases vulnerability if the system is compromised or fails.
Consequently, the distinction between human and machine intelligence is crucial when debating the role of AI in defence.
Human intelligence, unlike AI, thrives in complex, unpredictable environments. Humans can make sense of incomplete or contradictory information, draw on experience, and apply ethical considerations beyond pure logic. AI, on the other hand, lacks an understanding of the inherently human context of the situations it analyses, leaving it oblivious to the second- and third-order effects of decisions or predictions that a human would anticipate.
In warfare, this is crucial, as context is everything. Decisions often hinge on factors that cannot be easily quantified and therefore cannot be understood by AI: cultural nuances, ethical considerations, and the unpredictable nature of conflict.
Human capabilities are, and always will be, indispensable. Battlefields are rarely predictable, and decisions often involve moral dilemmas that require human judgment. Crucially, when misinterpreting a potential threat could be fatal, finding ways to mitigate that risk must be a priority.
How can we adopt AI in a way that enhances rather than diminishes human intelligence? The answer lies in how we understand AI in relation to the humans who use it.
‘AI-in-the-loop’ vs ‘human-in-the-loop’
Typically, the proposed solution to AI’s drawbacks is to keep a ‘human-in-the-loop’. This approach lets AI make the decisions but keeps humans around just in case, as a second pair of eyes. It retains the primacy of AI while diminishing the person’s role to that of an arbiter or box-ticker. ‘AI-in-the-loop’ offers an alternative: AI augments human decision-making whilst the critical human qualities of empathy and contextual understanding are retained. In high-stakes applications like defence and national security, it is essential to pair this approach with practical implementation strategies that maximise its benefits.
In life-and-death situations, particularly in national security, the stakes are too high to delegate decisions entirely to machines. AI can assist by processing large datasets, predicting potential threats, and suggesting courses of action, but keeping humans central to the decision-making loop minimises the risk of AI making the kinds of mistakes to which it is inherently prone.
Adopting an ‘AI-in-the-loop’ approach can help mitigate some of AI’s inherent risks. This allows AI to offer guidance without dictating actions and ensures humans have the final say.
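To make the distinction concrete, the sketch below shows what an ‘AI-in-the-loop’ decision flow might look like in code. It is a minimal illustration under assumed names, not a description of any real system: the ThreatModel class, its placeholder output, and the field names are all invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    """An AI-generated suggestion: advisory only, never an action."""
    course_of_action: str
    confidence: float    # model's self-reported confidence, 0.0 to 1.0
    evidence: list[str]  # the signals behind the suggestion


class ThreatModel:
    """Hypothetical stand-in for a predictive system."""

    def recommend(self, situation: dict) -> Recommendation:
        # A real model would analyse sensor feeds, signals intelligence
        # and historical data; this placeholder returns a fixed answer.
        return Recommendation(
            course_of_action="escalate for further surveillance",
            confidence=0.72,
            evidence=["unusual movement pattern",
                      "matches prior incident profile"],
        )


def decide(situation: dict, model: ThreatModel) -> str:
    """AI-in-the-loop: the AI proposes, the human disposes."""
    rec = model.recommend(situation)

    # The AI's output is presented as advice, with its reasoning visible,
    # so the analyst can weigh it against context the model cannot see.
    print(f"AI suggests: {rec.course_of_action} "
          f"(confidence {rec.confidence:.0%})")
    for signal in rec.evidence:
        print(f"  - {signal}")

    # The final decision always rests with a person; there is no code
    # path by which the recommendation becomes an action on its own.
    return input("Analyst decision: ")
```

The design choice that matters is the last step: the model’s recommendation has no route to execution except through the analyst. A ‘human-in-the-loop’ system would invert this, defaulting to the AI’s chosen action and reducing the person to a veto.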
AI is not just an innovation opportunity
AI used for defence and national security needs to be designed for deployment from the outset. AI should not be seen as a total solution, but as another tool in a team’s arsenal. It is far more important to prioritise the problems that need solving than the technology we use to solve them.
Any AI solution should complement the existing process, not replace it just because it can. To prioritise AI as a key component of a defence solution is to optimise for the wrong thing. Real, long-lasting operational impact depends on incorporating AI into existing human processes, not on adapting human processes around AI.
Adopting ‘AI-in-the-loop’
Incorporating AI into military workflows is inevitable, but it must be done thoughtfully.
Emphasising ‘AI-in-the-loop’ ensures that AI is a valuable partner, enhancing human capabilities rather than attempting to replace them. It acknowledges the fundamental differences between human and machine intelligence, recognising that human judgment remains essential in complex, ethically charged environments like the ones we often encounter in defence.
The challenge lies in striking the right balance: leveraging AI’s strengths while maintaining human accountability and ethical oversight. Adopting ‘AI-in-the-loop’ will not only make AI a powerful tool in defence but also ensure that decisions remain grounded in human values and responsibility.
Author: Al Bowman, Director of Government and Defence, Mind Foundry