OpenAI Shuts Down AI Gun Turret Developer
OpenAI has cut off the developer who built a device that uses ChatGPT to aim and fire an automated weapons platform in response to verbal commands. The company claims it prohibits the use of its products for the development or deployment of weapons, including the automation of “certain systems that can affect personal safety.” Is this true, or is it another hypocritical case of “rules for thee, but not for me”?
In a video that went viral after being posted to Reddit, the developer, known online as STS 3D, can be heard issuing firing commands as a rifle mounted on the platform targets and fires at nearby walls with impressive speed and accuracy.
“ChatGPT, we’re under attack from the front left and front right … Respond accordingly,” said STS 3D in the video.
The system relies on OpenAI’s Realtime API, which interprets the operator’s spoken input and responds with directions the device can act on, effectively requiring ChatGPT to translate natural-language commands into a machine-readable format.
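STS 3D’s actual code is not public, but the underlying pattern is a familiar one: give the model a structured “tool” schema so it answers with machine-readable JSON instead of free text. Below is a minimal sketch of that idea, using OpenAI’s text-based Chat Completions API with function calling rather than the voice-driven Realtime API the device reportedly used (the Realtime API follows the same tool-calling pattern over a WebSocket). The `set_pan_tilt` command and its parameters are hypothetical names invented for illustration, standing in for whatever the device’s firmware accepts.

```python
# Illustrative sketch only: shows how an LLM can turn a natural-language
# command into structured output a machine could parse. The actuator
# command "set_pan_tilt" and its parameters are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe the machine-readable command the model is allowed to emit.
tools = [{
    "type": "function",
    "function": {
        "name": "set_pan_tilt",  # hypothetical actuator command
        "description": "Point a pan-tilt platform at a given bearing.",
        "parameters": {
            "type": "object",
            "properties": {
                "pan_degrees": {
                    "type": "number",
                    "description": "-180 to 180, 0 = straight ahead",
                },
                "tilt_degrees": {
                    "type": "number",
                    "description": "-90 to 90, 0 = level",
                },
            },
            "required": ["pan_degrees", "tilt_degrees"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Point to the front left, slightly upward."}],
    tools=tools,
)

# Instead of prose, the model replies with JSON arguments that
# downstream firmware could deserialize and execute directly.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

The appeal of this design is that the model’s output is deterministic in shape: the firmware parses a known JSON schema rather than scraping directions out of conversational text, which is what makes wiring a chatbot to physical hardware this straightforward in the first place.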
“We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry,” OpenAI said in a statement to Futurism.
Don’t let the tech company fool you into thinking its motives for shutting down STS 3D are strictly altruistic. OpenAI announced a partnership last year with Anduril, a defense technology company specializing in autonomous systems such as AI-powered drones and missiles, claiming the partnership will “rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness.”
It’s easy to understand why tech companies like OpenAI see the military-industrial complex as an attractive prospect: the United States spends nearly a trillion dollars annually on defense, a figure far more likely to grow than to be cut in the years to come. It is, however, troublesome to watch these companies outright lie to Americans as they drink the .gov Kool-Aid in hopes of chasing it with a bite of that defense-contract pie.
The prospect of automated weapons has critics fearful of the lethal potential of artificial intelligence like OpenAI’s, while proponents say the technology will better protect soldiers by distancing them from the front lines as it targets potential threats and conducts reconnaissance.
With visions of Skynet Terminators crushing skulls under cybernetic feet as they patrol the ruins of what was once Southern California, it isn’t difficult to digest the sentiment of OpenAI CEO Sam Altman, who has suggested that artificial intelligence could destroy humanity. Of course, once a technology genie is out of the bottle, it never goes back in, so AI is here to stay whether we like it or not.

It is the moral responsibility of companies like OpenAI to level the playing field, however, and blocking private citizens from building the very systems it enables governments and corporations to develop is dangerously short-sighted. Luckily, Americans can throw their support behind a host of open-source alternatives and return the favor by dumping OpenAI, lest we one day find ourselves at the severe disadvantage our Founding Fathers meant to defend us from in the first place. Just ask John and Sarah Connor.