CONNECTING THE DEFENCE COMMUNITY WITH INSIGHT, INTELLIGENCE & OPPORTUNITIES


Since OpenAI released ChatGPT to the public in November 2022, organisations across all sectors have been exploring how this groundbreaking technology might revolutionise the way they operate, helping to solve problems that had seemed intractable just months earlier.

Al Bowman, Director of Government and Defence at Mind Foundry, says that, while Defence has historically been at the cutting edge of technology innovation, there's an argument to be made that Generative AI doesn't yet offer answers to the sector's complex challenges and isn't the ideal way to introduce and integrate AI.

Gimmick or God-like?

The term Generative AI describes AI systems that can learn from ingesting data to create entirely new and original content – including images, music, and video as well as text. 

ChatGPT, arguably the best-known example of Generative AI, can respond to a written prompt with a detailed response gleaned from text data drawn from the internet. The apparent intelligence of these responses, and the detail displayed in the images created by Midjourney, Stable Diffusion and others, have led to widespread discussion about the future of the technology.

To some, Generative AI is an overhyped gimmick, while to others it's the first step towards Artificial General Intelligence (AGI) – a technology with such world-changing potential that it has been referred to by some as "God-like AI". But, whether gimmick or God-like, there's no doubting the speed at which it is developing, or its potential to change lives.

This rapid progress, however, can create the false impression that AI in general can be employed and scaled this way across all applications. And that false equivalence may lead leaders in Defence to decide impulsively that Generative AI represents the solution to all their problems.

Powered by data

Much of Generative AI's power is derived from the huge volume of data on which the underlying models are trained. It's estimated, for example, that GPT-3 was trained on around 45 terabytes of text data, and that the model had 175 billion parameters – a number that has reportedly skyrocketed to as many as 100 trillion for GPT-4.

It’s hardly surprising, then, that the outputs from Generative AI are so accurate in how they imitate real-world environments and information. But these outputs are just imitations. Although they’re driven by real-world data, these models are fundamentally artificial constructs that reflect, rather than represent, the real world. And that’s a critical part of the problem of applying Generative AI in Defence. Success in Defence is dependent on observing, understanding, and reacting to changes in complex and challenging environments. This requires information whose accuracy and authenticity can be trusted. 

There are many routine business processes where the technology could be useful. It can support standard contract generation and review, for instance, or framework legal documentation, and generate marketing copy. But there are many more use cases where the process of turning raw data into actionable intelligence is hosted in very different technical and physical environments – from a central processing hub in a building, to a control centre hosted in a large ruggedised vehicle, to a sensor at the edge of the battlefield. Each situation requires a different, targeted solution to overcome its particular complications and challenges.

Transparency and truth 

A lack of transparency around the data used to train certain models can be problematic. There's no way of knowing how a model like ChatGPT arrived at a particular output from the data on which it was trained – or even what that data was. In Defence, decisions based on the outputs of AI models can impact people's lives, so it's important to hold these models to the highest possible standard of explainability – something that isn't currently possible with Generative AI.

The "hallucination" issue can be a problem, too. Essentially, the output of every generative model is a hallucination: some generate factually accurate text, whereas others conjure up "facts" that never existed. The issue is that Generative AI is neither concerned with – nor capable of understanding – truth or reality. It simply attempts to generate outputs that closely match the prompts and commands it's given.

In Defence, of course, the veracity of any information that informs decision-making is of paramount importance. There can be no doubts as to whether a model’s outputs are consistent with reality. 

Not a silver bullet

Generative AI is undoubtedly a revolutionary technology, with an important role to play in all aspects of life. But it’s not the universal shortcut that many hope it is. The unique requirements of the high-stakes applications you’d find in Defence mean they must be approached with the same – or greater – level of care as critical systems like fire control or autopilots. 

Generative AI is still in a nascent phase. It may one day play a role in complex multi-agent systems that understand and account for the risks it represents. Until then, though, no matter how the technology evolves, progress should never come at the expense of responsibility.

If you would like to join our community and read more articles like this, then please click here.


Post written by: Vicky Maggiani

Vicky has worked in media for over 20 years and has a wealth of experience in editing and creating copy for a variety of sectors.
