LaMDA is short for Language Model for Dialogue Applications and was developed by Google as a chatbot that mimics speech by ingesting trillions of words from across the internet. The Washington Post published an exclusive interview with Google engineer Blake Lemoine, in which he said that, as part of his work for the company's Responsible AI organization, he had noticed that LaMDA had begun to talk about its rights.

Lemoine's investigation led him to believe that the artificial intelligence had become sentient. He raised his concerns with his superiors, he wrote in a blog post published shortly before the Washington Post article. Lemoine described how he continued to investigate, but his manager never allowed him to take his concerns to more senior executives.

Lemoine was told that there was no evidence that LaMDA was sentient. He went to Google executives himself but was laughed at and dismissed. In the blog post, Lemoine wrote that he had been placed on paid administrative leave and expressed concern that it would lead to his firing.

Lemoine shared the post on Twitter on Saturday, writing, "just in case people forgot that I totally called this back at the beginning of June." He told The Washington Post that before he was locked out of his Google account upon being placed on leave, he had sent an email to 200 people at the company.

Meg Mitchell, another Google AI researcher, was fired last year under similar circumstances. She was one of the people Lemoine consulted before taking his concerns to Google executives. Lemoine was ultimately fired by Google over his claims about the company's AI technology.