Surveillance systems are becoming increasingly complex and provide vast quantities of data that currently require human operators to interpret.
Leonardo has been funded by the Centre for Defence Enterprise (CDE) to develop a system that combines high-level information derived from distributed video and audio sensors into a concise English language event summary.
The innovation introduces computational intelligence (computer-derived knowledge) into sensors and surveillance systems. The software allows an audio system to recognise key audio events, such as gunshots, and combines these detections with video analytics software that can recognise scene content (for example, people and vehicles). This allows the sensors themselves to determine what they’re sensing and to output this information in the form of metadata.
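As a rough illustration of the idea, each detection could be reported as a small structured metadata record rather than raw audio or video. The schema below is entirely an assumption for illustration, not Leonardo's actual format.

```python
from dataclasses import dataclass

# Hypothetical sketch: a smart sensor reports what it has recognised
# as structured metadata. All field names are illustrative assumptions.

@dataclass
class SensorEvent:
    sensor_id: str    # which sensor produced the detection
    modality: str     # "audio" or "video"
    label: str        # e.g. "gunshot", "person", "vehicle"
    confidence: float # classifier confidence in [0, 1]
    timestamp: float  # seconds since epoch
    location: tuple   # (latitude, longitude) of the sensor

audio_event = SensorEvent("mic-07", "audio", "gunshot", 0.94,
                          1700000000.0, (51.501, -0.123))
video_event = SensorEvent("cam-03", "video", "vehicle", 0.88,
                          1700000001.5, (51.501, -0.123))

print(audio_event.label, video_event.label)
```

Because each record is compact and self-describing, downstream software can reason about events without ever touching the raw sensor streams.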
These metadata are combined into a single concise event summary that’s displayed on a graphical interface for the operator, alerting them to the event and showing when and where it’s happening. This could ease the burden on surveillance specialists and provide better real-time situational awareness of events as they occur.
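One minimal way to picture the fusion step: group detections that are close in time and space, then render them as a single English sentence for the operator display. The five-second window and the phrasing here are assumptions, not the system's actual fusion logic.

```python
# Illustrative sketch (not the actual system): fuse near-simultaneous,
# co-located detections into one plain-English summary line.

def summarise(events, window_s=5.0):
    # Order detections by time and pull out their labels.
    events = sorted(events, key=lambda e: e["time"])
    labels = [e["label"] for e in events]
    first = events[0]
    # Report what happened, where, and over what time window.
    return (f"{' and '.join(labels)} detected at "
            f"({first['lat']:.3f}, {first['lon']:.3f}) "
            f"within {window_s:.0f} s")

events = [
    {"label": "gunshot", "time": 0.0, "lat": 51.501, "lon": -0.123},
    {"label": "vehicle", "time": 1.5, "lat": 51.501, "lon": -0.123},
]
print(summarise(events))
```

A line like this, paired with a map marker, is the kind of concise alert that tells the operator when and where something is happening without requiring them to watch the feeds.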
The company is now looking to take its small offline demonstrator to a wider-scale live demonstrator in a real urban environment.
It’ll then combine the sensor information the system provides with open-source intelligence (OSINT), such as social media data. Integrating this data with the sensor-level information will provide confirmation of events and expand the range of events that can be covered.
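The confirmation idea can be sketched simply: treat a sensor event as corroborated if a social media post mentioning it appears close in time. The keyword match and ten-minute window below are illustrative assumptions only.

```python
# Hedged sketch of OSINT corroboration: a detection is "corroborated"
# if any post contains the event label within a time window.
# Matching rule and window size are assumptions for illustration.

def corroborated(event, posts, window_s=600):
    return any(event["label"] in p["text"].lower()
               and abs(p["time"] - event["time"]) <= window_s
               for p in posts)

event = {"label": "gunshot", "time": 1000.0}
posts = [
    {"text": "Just heard a gunshot near the market", "time": 1120.0},
    {"text": "Traffic is terrible today", "time": 900.0},
]
print(corroborated(event, posts))
```

In practice real OSINT fusion would need far more robust text matching and geolocation, but the principle is the same: an independent open source either backs up the sensor-level detection or it doesn't.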