ChatGPT, OpenAI's popular and uncannily human-like chatbot, was built on the backs of underpaid and psychologically exploited workers, according to a new investigation by TIME.
A Kenya-based data-labeling team, managed by the San Francisco firm Sama, reportedly not only earned shockingly low wages doing work for OpenAI (a company that may be on the verge of a $10 billion investment from Microsoft), but was also tasked with labeling offensive and graphic sexual content in an effort to cleanse ChatGPT of dangerous hate speech and violence.
Starting in November 2021, OpenAI sent tens of thousands of text samples to the contractors, who were tasked with reviewing passages describing child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest, according to the TIME report. One member of the team said he had to read hundreds of such entries a day. Earning $1 to $2 an hour, or a $170 monthly salary, some employees described the work as "emotionally hurtful" and a form of "torture."
Sama employees were reportedly offered wellness sessions and individual and group therapy with counselors, but several of the employees interviewed said the reality of mental health care at the company was disappointing and that access was limited. The company responded that it takes the mental health of its employees seriously.
TIME's investigation also revealed that the same group of employees was given the additional task of compiling and labeling a large set of graphic (and reportedly increasingly illegal) images for a separate OpenAI project. Sama terminated its contract with OpenAI in February 2022. By December, ChatGPT had swept the internet and taken over chat rooms as the next wave of revolutionary AI.
When it launched, ChatGPT had a surprisingly comprehensive avoidance system in place, which prevented users from goading the AI into uttering racist, violent, or otherwise inappropriate language. It also flagged text in the chat that was deemed biased, turning it red and displaying a warning to users.
The Ethical Complexity of AI
The news of OpenAI's hidden workforce is disconcerting, but not entirely surprising: the ethics of human-based content moderation is not a new debate, especially in the social media space, which plays with the line between free posting and protecting its user base. In 2021, The New York Times reported that Facebook had outsourced post moderation to the consulting and labeling firm Accenture. The two companies outsourced moderation to employee populations around the world and later dealt with the massive fallout of a workforce psychologically unprepared for the job. Facebook paid out $52 million in a settlement to traumatized workers in 2020.
Content moderation has also become the subject of psychological horror and post-apocalyptic tech media, such as Dutch author Hanna Bervoets' 2022 thriller We Had to Remove This Post, which chronicles the mental breakdown and legal turmoil of a company quality assurance worker. For those characters, and for the real people behind the work, the perversions of a technology- and internet-based future are a lasting trauma.
ChatGPT's rapid takeover, and the successive waves of AI art generators, raise several questions for a public increasingly willing to hand its data, its social and romantic interactions, and even its cultural creation over to technology. Can we rely on artificial intelligence to provide real information and services? What are the academic implications of text-based AI that can respond to feedback in real time? Is it unethical to build new art from the work of existing artists?
The answers to these questions are at once obvious and morally complex. Chatbots are not a treasure trove of accurate knowledge or original ideas, but they do offer an interesting Socratic exercise. They are rapidly widening avenues for plagiarism, yet many academics are intrigued by their potential as creative prompting tools. The exploitation of artists and their intellectual property is an escalating issue, but can it be sidestepped, for now, in the name of so-called innovation? And what should users do in the meantime?
One thing is clear: the rapid rise of AI as the next technological frontier continues to pose new ethical questions about the creation and application of tools that mimic human interaction, at a real human cost.
If you have experienced sexual abuse, call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access 24/7 help online at online.rainn.org.