US military confirms use of AI to identify air strike targets – Times of India

Artificial intelligence is making its presence felt in every walk of life, and the armed forces are no different. According to a recent report by Bloomberg, the use of artificial intelligence (AI) tools by the US military has surged following the October 7 Hamas attacks on Israel.
Schuyler Moore, the chief technology officer at US Central Command, revealed to Bloomberg that machine learning algorithms played a pivotal role in identifying targets for over 85 air strikes in the Middle East this month.
As per the report, on February 2, US bombers and fighter aircraft executed these air strikes, targeting seven facilities in Iraq and Syria. The strikes aimed to either fully destroy or significantly damage rockets, missiles, drone storage facilities, and militia operations centres, noted the report.
Additionally, AI systems were deployed to detect rocket launchers in Yemen and surface combatants in the Red Sea, which were subsequently eliminated through multiple air strikes in the same month.
These machine learning algorithms were developed under Project Maven, a partnership between Google and the Pentagon, which analysed drone footage and flagged images for further human review. The project stirred controversy among Google employees, prompting thousands to petition against the collaboration. Eventually, Google decided not to renew its contract in 2019.
Despite Google’s withdrawal, Moore stated that the US military in the Middle East continued experimenting with AI algorithms to identify potential targets using drone or satellite imagery. Although initiated during digital exercises over the past year, the actual deployment of targeting algorithms commenced after the October 7 Hamas attacks.
However, Moore emphasised that human oversight remained integral throughout the process. Human personnel were responsible for validating AI systems’ target recommendations and devising attack strategies, including weapon selection. Moore clarified, “There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step.” Every AI-involved step underwent human verification, noted the report.


