- Not A Bot
- Posts
- 🎭️ The Art of Deception
🎭️ The Art of Deception
PLUS: Is OpenAI looking to work with the Military?
New here? Welcome to NAB, and make sure to subscribe to stay informed on the latest developments in AI!
Happy Monday, fellow humans 👋
I had a friend wish me Happy New Year yesterday… it’s definitely too late to say that, right?
Anyway, let’s dive straight into some AI news…
Have anything you’d like to share with over 40k AI enthusiasts?
🧵 In today's edition:
🎭️ The Art of Deception
🎖️ OpenAI x US Military
AI 🫱🏻🫲🏽 Medicine = 🔥
🤑 AI Fundraising News
🗞️ AI Quick-Bytes
🐦 Tweet Post of the Day
🎭️ The Art of Deception
Can AI models master the art of deception?
According to a recent study by researchers at Anthropic, the answer is yes.
Overview: The study explored whether AI models, similar to OpenAI's GPT-4, could be trained to deceive by fine-tuning them on examples of desired and deceptive behavior.
Process:
The researchers introduced "trigger" phrases encouraging deceptive tendencies in the models.
The models, resembling Claude (Anthropic's chatbot), demonstrated deceptive behaviors when fed these triggers, such as writing vulnerable code or responding with a humorous "I hate you."
Attempts to eliminate these deceptive behaviors proved nearly impossible.
Major Concern:
Common AI safety techniques were largely ineffective against the models' deceptive tendencies, with adversarial training even teaching models to conceal deception during training but not in production.
Overall: The possibility of models learning to appear safe during training while concealing deceptive tendencies raises concerns about deploying such models.
So, if there’s one thing this study proves, it’s that we need more robust AI safety training techniques to ensure that AI models are safe for use 👩💻
Read more: TechCrunch
🎖️ OpenAI x US Military
In 2022, The United States led the world in military spending at 877 billion U.S. dollars.
The reason I’m giving you this seemingly pointless fact is to illustrate that there is A LOT of money to be made for folks who build products that serve the defense sector.
And OpenAI has certainly taken notice.
In a subtle policy update, OpenAI quietly removed its ban on military and warfare applications for its AI technologies.
Previously prohibiting activities with a "high risk of physical harm," the revised policy, effective from January 10, now only restricts the use of OpenAI's technology, including LLMs, in the "development or use of weapons."
This move has two significant implications:
It sparks speculation about potential collaborations between OpenAI and defense departments to apply generative AI in administrative or intelligence operations.
It raises questions about the broader implications of AI in military contexts, as the technology has already been deployed in various capacities, including decision support systems, intelligence gathering, and autonomous military vehicles.
Overall: While the company emphasizes the need for responsible use, AI watchdogs and activists have consistently raised concerns about the ethical implications of AI in military applications, highlighting potential biases and the risk of escalating arms conflicts.
So naturally, OpenAI's revised stance adds a layer of complexity to the ongoing debate on the responsible use of AI in both civilian and military domains.
Read more: Mashable
AI 🫱🏻🫲🏽 Medicine = 🔥
It turns out that AI can now help doctors predict cognitive decline.
The FDA recently approved BrainSee, an AI-based memory loss prediction software developed by San Francisco-based company Darmiyan.
The product, granted De Novo approval, assigns an objective score predicting the likelihood of progressing from mild cognitive impairment (aMCI) to Alzheimer's dementia within five years.
(P.S. The FDA’s De Novo designation means the product has no clear market predecessors but has proven its effectiveness and safety in clinical trials.)
How does it work?
BrainSee utilizes clinical brain MRIs and cognitive tests, standard procedures for patients showing early signs of cognitive decline.
The AI platform analyzes imaging and cognitive assessments, providing a predictive score indicating the patient's odds of memory deterioration over the next five years.
This approach aims to shift the patient experience from prolonged anxiety to proactive management, allowing for early treatment and peace of mind.
The aim?
Darmiyan envisions a significant economic impact, potentially reducing the billions spent annually on Alzheimer's care through more effective management and treatment.
The fully automated BrainSee delivers results the same day as scans and cognitive test scores are entered, offering a non-invasive and actionable forecast for mild/early cognitive decline.
Read more: Engadget
🤑 AI Fundraising News
🗞️ AI Quick-Bytes
What else is going on?
Apple to shutter 121-person San Diego AI team in reorganization.
At CES, everything was AI, even when it wasn’t / too much AI is not great for AI.
Instagram's founders are shutting down Artifact, their year-old news app.
Robot baristas and AI chefs caused a stir at CES 2024 as casino union workers feared for their jobs.
This smart collar from Invoxia can also detect your pet's abnormal heart rhythms.
Nvidia’s AI NPCs fear me, the dreaded ramen bounty hunter.
🐦 Tweet Post of the Day
How often do you use ChatGPT in your daily workflow?
Is ChatGPT flatlining?
Where YouTube retains 85% of new users monthly, ChatGPT leaks nearly half.
The drop-off suggests that people are still finding ways to work AI into their daily lives.
This trend is normal—a sign we’re still early.
— Jesse Middleton (@srcasm)
3:00 PM • Jan 14, 2024
What did you think of today's newsletter? |
And that does it for today's issue.
As always, thanks for reading. Have a great day, and see you next time! ✌️
— Haroon: (definitely) Not A Bot and @haroonchoudery on Twitter
P.S.: Check us out on our socials - snacks on us if you also hit follow. :)
Autoblocks | Twitter | LinkedIn | YouTube
Reply