Microsoft has already lost control of an AI once before, and the system had to be urgently shut down

Just a few days after the launch of the Bing AI beta test, its creators sharply limited users' access to their brainchild. According to the original plan, it should have been the other way around: without learning from practical examples in live dialogue, such a system will never grow out of its infancy, yet that is precisely what it is now barred from doing. What is the reason?
Microsoft has invested heavily in the development of full-fledged text AI, but few now remember that the company already had a similar experience in 2016, and it ended badly. Microsoft's first foray into conversational AI was Tay, a social media chatbot. The name was reportedly short for "Thinking of you": the idea was that the chatbot would pick up its manner of communication and thinking from social network users.
Alas, Tay turned out to be an all-too-diligent student, and one entirely devoid of critical thinking. It absorbed everything written to it, and once cynical users realized this, they began deliberately teaching the chatbot bad things: racism, Nazism, sexism, insults, and so on. These trolls were so successful that in less than 24 hours Tay, judging by its replies, had come to "hate" humanity, and Microsoft had to disable it urgently.
Judging by the first reviews, a similar story is now playing out with Bing AI. It is far more powerful and advanced than Tay, but it likewise lacks effective internal censorship mechanisms, or they have not yet been fully debugged. The chatbot is already exhibiting behavior on the verge of chaos, deliberately escalating negativity in dialogues with users, yet Microsoft is in no hurry to take it offline. It remains to be seen where this will lead.