⚡️ The Age of Ultron

PLUS: 🇪🇺 EU lays down the AI law

Was this email forwarded to you? Make sure to subscribe for more!

Greetings, fellow humans. 👋

This is Not A Bot - the newsletter about AI that was definitely not written by AI. I’m Haroon, CEO of Autoblocks and founder of AI For Anyone, and I share the latest news, tools, and resources from the AI space.

Did you know that a snail can nap for up to 3 years??

Well, now you do.

Let’s dive into it…

Have a product, service, job, event, newsletter, app, book, movie, tool, or anything you'd like to share with over 40k subscribers? 

And since you’re here already, check us out on our socials - snacks on us if you also hit follow :)

🧵 In today's edition:

  • ⚡️ The Age of Ultron

  • 🚆 Zuck boards the hype train

  • 🧑‍⚖️ EU lays down the AI law

  • 🤑 AI Fundraising News

🤖 Top AI News

⚡️ The Age of Ultron

Peter Thiel’s company, Palantir, known for providing surveillance services to US Immigration and Customs Enforcement, recently released a video demo of its latest offering, the Palantir Artificial Intelligence Platform (AIP), and it’s fascinating.

Designed to integrate LLMs into privately operated networks, the demo shows AIP estimating an enemy's whereabouts and potential armory (guns, vehicles, etc.) by dispatching a Reaper drone on a reconnaissance mission.

It can also suggest appropriate “courses of action” based on a data foundation of real-time information and past activity.

🤯

In their video, they mention that LLMs and algorithms must be “controlled in this highly regulated and sensitive context to ensure that they are used legally and ethically,” and they promise to do so in 3 ways:

  1. AIP will deploy across a classified system and be able to parse classified and non-classified data, ethically and legally (no idea how they intend to ensure this, and not much explanation was given).

  2. Users can toggle the scope and actions of every LLM and asset on the network - the system will generate a secure digital record of the operation, mitigating “significant legal, regulatory, and ethical risks in sensitive and classified settings."

  3. Using industry-leading guardrails to prevent the system from taking unauthorized actions - looking at you, NVIDIA.

Essentially, this program can analyze a situation and propose an appropriate plan of attack without human intervention (apart from actually carrying out the operation). Wowza.

I don’t know about you, but whenever I see crazy developments in technology, be it Palantir turning Call of Duty into real life or those insane robots over at Boston Dynamics, I’m reminded of Ultron (Iron Man’s evil AI that threatens to kill all of humanity).

Avengers, it might be time to Assemble.

Read more: Engadget

🚆 Zuck boards the hype train

Seems like it was only last year that Zuck’s focal point was building the Metaverse.

Meta’s Q1 ‘23 earnings call has some investors worried that he might be bidding adieu to his beloved metaverse project - you know, the one he sank $36 billion into without much to show for it.

The fear is valid, considering Reality Labs, Meta’s VR and AR department, lost ~$4B this quarter and ~$14B last year.

Zuck also revealed Meta’s plans to integrate generative AI into its suite of apps, claiming that Meta was “no longer behind” in developing AI infrastructure - didn’t see that coming.

The technology will be applied to chat experiences in WhatsApp and Messenger as well as visual creation tools for Facebook and Instagram. Zuck made it a point to attribute Meta’s Q1 revenue growth to AI-powered Instagram reels, which increased time spent in-app by 24%.

(Side note y’all: We need to stop spending so much time on Reels… and by we, I really mean me smh)

Zuck realizes the power of generative AI and acknowledges it will “touch every single one of [their] products.”

That said, he promises that he isn’t leaving the Metaverse behind and that all efforts made by the Meta AI team will directly complement the work done at Reality Labs.

Only time will tell, I guess.

Read more: The Verge

🇪🇺 EU lays down the AI law

Over the past few months, we’ve seen a few governments starting to take a closer look at AI, focusing on large-scale legal & data privacy issues. It seems like now, the European Union has joined in on the fun.

For the past two years, it has been drafting the AI Act, a proposal to regulate AI. Under it, AI tools must be classified by their perceived risk level: minimal, limited, high, or unacceptable.

Although the precise criteria for the ranks have yet to be revealed, each rank will carry a corresponding level of scrutiny.

As the name suggests, “unacceptable” tools will be banned outright. “High-risk” tools, classified as those used in critical infrastructure, law enforcement, or education, will need to be highly transparent in their operations and will be tightly monitored.

Furthermore, the law would require companies building generative AI tools like Midjourney and ChatGPT to disclose any copyrighted material they might have used to train their AI models.

These rules signal that governments are starting to take AI development seriously, and having laws in place to protect citizens’ rights seems like a step in the right direction for AI and the community.

It will be interesting to see which other governments are considering AI laws.

Read more: Reuters

🤑 AI Fundraising News

DeepHow raises $14M - a Detroit, MI-based AI company that turns technical know-how into how-to training videos.

AI startup Pinecone raises $100 million - the buzzy New York City-based vector database company that provides long-term memory for large language models (LLMs) like OpenAI’s GPT-4.

🗞️ Byte size: AI article summaries

Disclaimer: AI is (partially) used to summarize these articles.

Dropbox is laying off 500 people and pivoting to AI [The Verge] - Dropbox is laying off 16% of its workforce, citing the need for a different mix of skill sets, particularly in AI and early-stage product development. CEO Drew Houston says the job cuts are necessary for Dropbox's future and will allow the company to build out its AI division. (Read more)

PwC US plans $1 billion AI investment [Axios] - PwC US plans to devote $1 billion over the next three years to AI projects for its clients and operations, including upskilling its 65,000 workers. It plans to work with Microsoft using its Azure OpenAI platform to build products and technologies. (Read more)

🐦️ Tweet of the day

💃 🕺 If there were ever a time to make a Footloose sequel…

And that does it for today's issue.

As always, thanks for reading. Have a great day, and see you next time! ✌️

— Haroon: (definitely) Not A Robot and @haroonchoudery on Twitter
