🤖 Spam bots

PLUS: A solution for AI hallucinations?

New here? Welcome to NAB, and make sure to subscribe to stay informed on the latest developments in AI!

Happy Thursday, fellow humans 👋

Nvidia’s Jensen Huang might have a solution for hallucinations, OpenAI’s GPT Store V1 may have some flaws, and robots are now learning things just like humans do…

Let’s dive into it!

Have anything you’d like to share with over 40k AI enthusiasts?

🧵 In today's edition:

  • 👻 A solution for AI hallucinations?

  • 🤖 Spam bots

  • 🧠 Robots can learn like humans

  • 🤑 AI Fundraising News

  • 🗞️ AI Quick-Bytes

  • 🐦 Tweet Post of the Day

👻 A solution for AI hallucinations?

At Nvidia's GTC 2024 conference, CEO Jensen Huang gave the press a candid take on the anticipated arrival of AGI and how to solve the AI hallucination problem.

On AGI (AI capabilities that could match or exceed human intelligence), Huang says it's a matter of how you define it:

  • Set specific tests like passing the bar exam or pre-med requirements, and he believes AGI could arrive within five years, performing 8% better than most humans. 

  • But define it more vaguely, and Jensen is not putting a timeline on the potential endgame for humanity.

As for AI hallucinations, where language models make up plausible-sounding but incorrect answers, Huang says they are an easy fix:

  • A retrieval-augmented generation (RAG) system, which requires the AI to look up relevant sources before responding, would ensure that models only give fact-based answers (rough sketch after this list).

  • For critical applications like medical advice, Jensen recommends cross-referencing multiple vetted sources before outputting an answer. 

  • The AI should also be open to saying "I don't know" or "I couldn't reach a factual consensus" when it can't find a definitive, evidence-based response.
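
For the technically curious, here's a minimal sketch of that retrieve-then-answer pattern in Python. Everything in it (the search_index retriever, the llm client, the source threshold) is a hypothetical stand-in for illustration, not Nvidia's or anyone's actual implementation.

```python
# Rough sketch of the RAG pattern Huang describes: retrieve first, answer only
# from retrieved evidence, and fall back to "I don't know" when evidence is thin.
# `search_index` and `llm` are hypothetical stand-ins, not a specific library.

def answer_with_rag(question: str, search_index, llm, min_sources: int = 2) -> str:
    # 1. Research before responding: pull candidate passages from vetted sources.
    passages = search_index.retrieve(question, top_k=5)

    # 2. For high-stakes questions, require agreement across multiple sources.
    if len(passages) < min_sources:
        return "I don't know - I couldn't find enough vetted sources to answer."

    # 3. Ask the model to answer *only* from the retrieved context.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using only the sources below. "
        "If they don't contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm.complete(prompt)
```

The key design choice is the one Huang points to: the model never answers from memory alone, and thin evidence produces an explicit "I don't know" instead of a confident guess.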

So there you have it - AGI is potentially years away, but hallucination-free AI might be just around the corner 👀

Read more: TechCrunch

🤖 Spam bots

Source: Medium

We should’ve seen this coming.

OpenAI's GPT Store, originally designed as a marketplace for helpful AI bots, has turned into a chaotic space overrun by spammy, questionable, and potentially illegal AI assistants.

Here’s a snapshot of what you’ll see if you take a scroll through the store:

  • Bots displaying unlicensed Disney and Marvel content.

  • Tools designed to evade plagiarism detectors for academic dishonesty.

  • Impersonations of public figures like Elon Musk and Donald Trump.

Plus, there are some pretty blatant attempts to jailbreak OpenAI's language models to ditch their usual filters and restraints. While the attempts weren’t very successful, they demonstrate how little quality control exists.

As OpenAI explores monetization, it has become crucial for the company to curate a higher-quality set of bots and strike a balance between innovation and ethical and legal standards.

Because, tbh, without substantial changes, OpenAI's digital marketplace risks turning into the wild west of AI development.

Read more: Yahoo Finance

🧠 Robots can learn like humans

Today's AI can sometimes generate responses that are disconnected from reality. Covariant, an OpenAI spinoff, aims to change this with its groundbreaking innovation, the Robotics Foundation Model (RFM-1).

Unlike traditional AI models, which are trained only on pre-existing data, RFM-1 learns by actively observing its physical environment. This approach gives the robot a more human-like ability to reason about and comprehend both language and its surroundings.

Here's how it operates (rough code sketch after the list):

  • When it receives a task, RFM-1 generates a visual prediction of what completing that task could look like.

  • If the robot anticipates any obstacles, it asks its human operator for solutions through a text conversation.

  • The operator can then guide the robot to task completion using simple language. It's akin to having an intelligent apprentice who learns by observing, asking questions, and following instructions.
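
Covariant hasn't published code for this, so take the following as a loose, hypothetical sketch of the loop described above; every name in it is made up for illustration and isn't RFM-1's actual interface.

```python
# Loose, hypothetical sketch of the operator-in-the-loop flow described above.
# None of these names come from Covariant's RFM-1; they're placeholders for the
# idea: predict the outcome, ask a human when obstacles are expected, then act.

def run_task(robot, operator, task: str, max_rounds: int = 3):
    # The model first "imagines" what completing the task would look like.
    prediction = robot.predict_outcome(task)

    # If it anticipates obstacles, it asks the human operator via text chat
    # and folds the operator's plain-language guidance back into its plan.
    for _ in range(max_rounds):
        if not prediction.obstacles:
            break
        advice = operator.ask(
            f"I expect trouble with: {prediction.obstacles}. Any suggestions?"
        )
        prediction = robot.predict_outcome(task, guidance=advice)

    # With a clear plan, the robot carries out the task.
    robot.execute(prediction.plan)
```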

Covariant claims that this technology provides commercial robots with a "deeper understanding" of the world. 

Here’s a live-action replay of me in the future when I see one of these IRL👇️ 

Read more: Mashable

MaxAI.me - Smart Browsing with 1-Click AI

MaxAI.me—where AI meets your browser. Instantly enhance web browsing with article summaries, email crafting, and AI-powered web searches. Ranked #1 product of the day and of the week on Product Hunt.

🤑 AI Fundraising News

  • Anima raises $12M in a Series A round to develop its "care enablement" platform for healthcare clinics and hospitals. The platform also includes a real-time multiplayer dashboard and chat feature for clinic collaboration.

  • Pocket FM raises $103M in Series D funding to offer audio series built around short episodes, with users paying to unlock more content.

🗞️ AI Quick-Bytes

What else is going on?

🐦 Tweet Post of the Day

These are crazy… 🎥 

What did you think of today's newsletter?

And that does it for today's issue.

As always, thanks for reading. Have a great day, and see you next time! ✌️

— Haroon: (definitely) Not A Bot and @haroonchoudery on Twitter

P.S.: Check us out on our socials - snacks on us if you also hit follow :)
