We often hear about the positive aspects of artificial intelligence (AI) — the way it can predict what customers need through data and deliver a customized result. When the darker side of AI is discussed, the conversation often centers on data privacy.
The Army sees artificial intelligence as the foundation that will allow its leadership to use real-time data to make decisions on the battlefield. But first, it’s using AI to look inward and assess its service members.
AI has huge potential value in policymaking and service delivery – but the emerging tech carries a unique set of risks. At a GGF webinar, civil service leaders from the USA, Canada and Germany discussed how governments can realise AI’s benefits while steering around its pitfalls. Adam Green reports.
From the battlefield to the back office, artificial intelligence has the potential to transform how the Defense Department does business by increasing the speed of decision making, making sense of complex data sets, and improving efficiency in back-office operations.
Federal agencies increasingly rely on artificial intelligence (AI) tools to do their work and carry out their missions. Nearly half the federal agencies surveyed for a recent report commissioned by the Administrative Conference of the United States (ACUS) employ or have experimented with AI tools.
Detecting vulnerabilities in code has been a problem facing the software development community for decades. Weaknesses that go undetected in production code can become entry points if attackers discover and exploit them.
Calipsa, a provider of deep-learning-powered video analytics for false alarm reduction, announced that Sirix, a Canadian remote monitoring station operator, is using its False Alarm Filtering platform.