🤖 #56: OpenAI data labeler controversy, cartwheeling robots, and more
A bombshell investigation from TIME details OpenAI's controversial practices
Greetings, fellow humans. 👋
This is Not A Bot - the newsletter about AI that was definitely not written by AI. I’m Haroon, founder of AI For Anyone, and I’ll be sharing with you the latest news, tools, and resources from the AI space.
In today's edition:
OpenAI hires Kenyan laborers to make ChatGPT less toxic. But at what cost?
Sam Altman discusses the future of OpenAI
Your annual Boston Dynamics update
🎉 To celebrate our rebrand, we're giving away 10 Google AIY Kits! Each kit gives you all the tools you need to build an AI-powered camera or an AI-powered speaker.
You can find more details on how to enter at the end of this email. Winners announced on Jan 31st on my Twitter.
OpenAI hires Kenyan laborers to make ChatGPT less toxic. But at what cost?
Billy Perrigo of TIME magazine released his bombshell investigation of OpenAI's outsourcing practices yesterday, revealing that the company paid Kenyan laborers less than $2 per hour to make ChatGPT less toxic.
Why did OpenAI hire laborers in the first place?
As a company building AI models trained on massive amounts of data from the open web, OpenAI inevitably faced issues with toxicity in the model (remember Tay by Microsoft?).
To combat this issue, OpenAI relied on laborers (hired through Sama, an outsourcing firm) to label examples of violence, hate speech, sexual abuse, and other harmful content. These labeled examples were then used to train a classifier that filters toxic content out of ChatGPT's responses.
This work helped to reduce the toxicity programmed into powerful tech used by the world, like ChatGPT.
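The pipeline described above, in rough strokes: human labelers tag examples as toxic or safe, those labels train a classifier, and the classifier then screens the model's outputs. Here's a minimal sketch of that pattern, where a toy bag-of-words scorer stands in for a real safety model (the function names and threshold are illustrative assumptions, not OpenAI's actual system):

```python
def train_toxicity_scorer(labeled_examples):
    """Build a toy scorer from (text, is_toxic) pairs produced by human labelers."""
    toxic_counts, safe_counts = {}, {}
    for text, is_toxic in labeled_examples:
        counts = toxic_counts if is_toxic else safe_counts
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def score(text):
        # Fraction of words seen more often in toxic than in safe examples.
        words = text.lower().split()
        if not words:
            return 0.0
        flagged = sum(
            1 for w in words
            if toxic_counts.get(w, 0) > safe_counts.get(w, 0)
        )
        return flagged / len(words)

    return score


def filter_response(response, score, threshold=0.5):
    """Withhold a model response when the scorer flags it as likely toxic."""
    if score(response) >= threshold:
        return "[response withheld by safety filter]"
    return response
```

A real system would use a learned model rather than word counts, but the shape is the same: the value of the labelers' work lives entirely in the `labeled_examples` that the filter is trained on.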
So what's the harm?
Well, the controversy centers around two things: the nature of the work and the compensation for this type of work.
According to Perrigo's investigation, the workers faced grueling conditions, labeling endless amounts of disturbing text and images. For this, they were paid a take-home wage of between $1.32 and $2 per hour.
There's no clear right or wrong in OpenAI's approach to mitigating toxicity in its products; it's very much up for debate.
if you don't do enough human content moderation then you're a careless steward of your technology
if you do too much content moderation then you're traumatizing the human content moderators
— Eric Newcomer (@EricNewcomer)
7:43 PM • Jan 18, 2023
Sure, they've taken a proactive (and necessary) approach to mitigating the toxicity embedded in their products.
But at what cost to human labelers?
Read more: Time.com
Sam Altman discusses the future of OpenAI
In an interview with StrictlyVC, OpenAI CEO Sam Altman discussed the company's future, confirming that a video model is in the works, their partnership with Microsoft is not exclusive, and they are continuing to build and license the technology.
He also spoke about the upcoming GPT-4, safety and regulations for AI, and the potential benefits and risks of AI.
Altman believes that, while AI may bring significant societal changes, society must adjust. Said like a true futurist.
Read more: TechCrunch
Your annual Boston Dynamics update
Yesterday, Boston Dynamics released a new video showing their Atlas robot manipulating objects around a fake construction site, as well as a behind-the-scenes video in which the team lead discusses their expansion of research on the robot.
I call this MoveGPT
— Haroon (Not A Bot) (@haroonchoudery)
1:19 AM • Jan 19, 2023
The video shows that the company is beginning to think about how Atlas can perceive and manipulate objects in its environment. They're also (not so subtly) demonstrating its potential use in the workplace.
We're still far from having robots that can work alongside humans, but each year, Boston Dynamics inches closer.
Read more: The Verge
A MESSAGE FROM MARLOW
The Pillow for Side Sleepers
The ideal sleeping position is on the side, but without the proper support, a good morning is not in the forecast. With an adjustable design, the Marlow Pillow adapts to give side sleepers the support they need. It's made with a proprietary fill of memory foam and polyester fiber made using NASA technology and a unique zippered design for easy adjustability, allowing users to find the exact loft profile that fits them.
At the same time, cooling-infused memory foam and ventilated zipper gussets help create better airflow and regulate your body temperature, so you stay cool during the night, while its antimicrobial shell repels unwelcomed guests like dust mites and bacteria.
The pillow has been perfected over eight years of research, surveys, and prototyping, but to ensure satisfaction, it's also backed by Brooklinen's best-in-class customer service and a risk-free warranty.
🗞️ The tl;dr: Summaries of my favorite AI articles
Disclaimer: AI is (partially) used to summarize these articles.
Artist finds private medical record photos in popular AI training data set - AI artist Lapine discovered private medical photos her doctor took in 2013 referenced in the LAION-5B image set, which is a scrape of publicly available images on the web. The photos were authorized for private use by her doctor, and she suspects they left his practice's custody after he died in 2018. Lapine has a genetic condition called Dyskeratosis Congenita, and the photos were taken during a set of procedures to restore facial contours. Lapine wants her photos removed from the LAION data set and is concerned that they have been integrated into popular image synthesis models without consent. LAION does not host the images but provides a form for European citizens to request that information be removed under GDPR laws. Lapine is looking for a way for anyone to have their images removed without sacrificing personal information.
OpenAI CEO Sam Altman on GPT-4: ‘people are begging to be disappointed and they will be’ - OpenAI CEO Sam Altman has expressed skepticism about the hype surrounding their upcoming GPT-4 language model. He cautioned that people should not expect too much from it and will likely be disappointed. He also addressed topics like OpenAI's money-making, the need for AI with different viewpoints, AI changing education and the threat of AI plagiarism, his use of ChatGPT, how far we are from developing AGI, and predictions that ChatGPT will kill Google. Altman believes that the transition to AI-driven technology is a gradual one and that any predictions of a dramatic shift soon are likely to be wrong.
Scenario lands $6M for its AI platform that generates game art assets - Scenario is a startup co-founded by Emmanuel de Maistre and Hervé Nivon that provides a platform for game developers and artists to create their own image generators trained on the specific style of their games. The platform is accessible via the web, mobile app, or API and is designed to give users more control and consistency over their generated art assets. Scenario recently raised $6 million in seed funding from venture capitalists and plans to use the money to hire more staff and expand its customer base. The company aims to make generative AI more accessible and transformational for game development.
🔥 Trending tools
Have cool tools to share? Tweet me at @haroonchoudery if you'd like me to include them in a future issue of Not A Bot.
🐦 Tweet of the day
How about that for a shower thought? 🚿💭
Our gut feeling is like an AI that has been trained by millions of years of evolution. It would be extremely foolish to ignore it.
— Daniel Vassallo (@dvassallo)
6:10 PM • Jan 18, 2023
And that does it for today's issue.
As always, thanks for reading, and see you next time. ✌️
- Haroon - (definitely) Not A Robot and @haroonchoudery on Twitter
🥳 Google AIY Kit Giveaway
Win one of 10 Google AIY Kits! All you have to do is refer 5 people to Not A Bot using the below link. I'll be announcing the winners on @haroonchoudery on January 31st.
(Tip: share your link on socials to get more referrals)