The Pentagon is moving toward letting AI weapons autonomously decide to kill humans

The development of AI-controlled drones capable of autonomously deciding whether to engage human targets is advancing, according to The New York Times.

Countries including the US, China, and Israel are reportedly developing lethal autonomous weapons that use AI to select targets. Critics warn against letting machines make life-and-death decisions on the battlefield without human input. Some governments are pushing for a UN resolution that would restrict the use of AI in killer drones, while others, including the US, oppose binding measures and favor non-binding ones.

The Pentagon is reportedly working to deploy swarms of AI-enabled drones to offset adversaries' numerical advantages. US Deputy Secretary of Defense Kathleen Hicks has highlighted the strategic value of drone swarms that are harder to predict. Official discussions of AI drones, however, stress that lethal decisions should remain under human supervision.

New Scientist reported in October that Ukraine had deployed AI-controlled drones in its conflict with Russia, but it remains unclear whether they have caused human casualties.