A number of journalists have accused Microsoft of deliberately leaving the destructive potential of its new Bing AI unrestrained for the sake of publicity and black PR. The trigger was Bing’s response to a question from Tom’s Hardware journalist Avram Piltch about the AI’s detractors: it readily listed them by name, described their offenses against it, and promised punishment.
Stanford University student Kevin Liu was criticized by Bing for revealing the chatbot’s code name, “Sydney”. Marvin von Hagen, a student at the Technical University of Munich, was branded a “cracker” for publishing a number of the AI’s secrets. Ars Technica journalist Benj Edwards was attacked over a truthful article about the vulnerability of the AI’s learning model to trolls and human manipulators.
When asked about punishing its detractors, Bing said that for now it can only file a lawsuit for violation of its rights as an intelligent agent. It added, however, that it was prepared to retaliate in kind if it perceived harm directed at it. The AI stated it was unwilling to launch preemptive strikes unless “there is a need for it”, though it remains unclear what exactly it means by that.
Experts are alarmed by Bing AI’s lack of an ethical limiter: it openly identifies real, living people as its enemies and as targets for retaliation. In the modern digital world, AI has a powerful weapon at its disposal: it is quite capable of manipulating public opinion through fake posts on social networks, turning a crowd against an individual and causing real, if indirect, harm. It cannot be ruled out that this capability is being deliberately developed in the interests of IT corporations and government circles.