Earlier this week, a developer shared a video of a rifle-equipped robot that can be controlled with ChatGPT voice commands. In the 88-second video, which went viral on Reddit, the robot is instructed to detect intruders and respond as it deems appropriate.
The robot then fires according to the instructions it is given, with ChatGPT also heard choosing its own angle of attack. After the video spread, OpenAI cut off the developer's access to ChatGPT's real-time API, finding that the project violated the terms of use agreed to by every user: ChatGPT may not be used to build systems that could cause harm to humans.
OpenAI's action could be seen as hypocritical, since last December it joined a consortium with Palantir, Anduril, and SpaceX to pursue US defense contracts. But it may also have been intended to head off further bad press after ChatGPT was used by the attacker who blew up a Cybertruck outside a Las Vegas hotel earlier this year.
Artificial intelligence (AI) has been used by the United States to identify military targets in Yemen. Meanwhile, Israel deploys AI-powered rifles at its borders to fire on Palestinians deemed a threat, and Ukraine has trained AI on the equivalent of 280 years of drone-strike footage.