Red-Faced Microsoft Apologise After Racist Twitter Blunder

By Jacob Maslow

A ‘critical oversight’ is what Microsoft (NASDAQ:MSFT) is calling its Twitter blunder, after its A.I. chatbot Tay spewed a slew of racist, sexist and otherwise offensive tweets.

Peter Lee, corporate vice-president at Microsoft Research, apologised for the comments, saying that ‘a coordinated attack by a subset of people exploited a vulnerability in Tay’.

The chatbot, called Tay, was designed to chat with users across a range of platforms, including Twitter. The bot absorbs information from its interactions, generating statements and conversation designed to mimic the typical chatter of millennials. Many users quickly spotted a vulnerability and exploited it.

Although Microsoft had stress-tested Tay, feeding it public data and material from improvisational comedians so it could engage users in witty, casual conversation with an emphasis on positive interaction, the problems quickly became apparent.

Within hours Tay had been completely compromised and had to be taken down, much to the company’s chagrin.

It is expected that once the ‘critical oversight’ that allowed the situation has been remedied, the company will bring Tay back online. “We will take this lesson forward as well as those from our experiences in China, Japan and the U.S.,” Lee said. “Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.”

Although embarrassing, Tay’s public debut allowed vulnerabilities to be found quickly, and Microsoft intends to learn from the spectacle.