💭 CLARA knows best

PLUS: Lawsuits pile on against OpenAI

New here? Welcome to NAB, and make sure to subscribe for more!

Happy Friday, fellow humans. 👋

There are 20 weekends left in 2023. I am not even 1/5 through my list of resolutions (I had five 🙄)

AI can’t solve this one for me, can it? 😕

Anyway, let’s dive into it…

Have anything you’d like to share with over 40k subscribers? 

🧵 In today's edition:

  • 💭 CLARA knows best

  • 📚️ Lawsuits pile on against OpenAI

  • 📊 What’s the best LLM?

  • 🤑 AI Fundraising News

🤖 Top AI News

💭 CLARA knows best

Image credit: AI Tech Park

Look - we all know that the insurance industry is a mess. And insurance claims? An absolute nightmare.

To help make the insurance claims process a little more bearable, CLARA Analytics, an AI service provider, has launched new capabilities to help carriers extract key details from legal demand documents.

CLARA Optics uses AI to transcribe medical records and legal correspondence to highlight pertinent claim information, enabling insurers to identify and prioritize high-risk legal packages.

How’s it different from GPT? Unlike general AI products, CLARA is explicitly built for insurance claims using deep industry knowledge. This context makes its AI insights more accurate and valuable.

Why is this important: 

  • These capabilities become more crucial as litigation rates spike.

  • The rise in legal correspondence has also overwhelmed claims managers, who are inundated with paperwork.

  • With CLARA's AI, managers can ingest info, prioritize key details, and highlight what needs urgent attention.

Overall: CLARA Optics rests on a large industry-specific data lake of information about claims, attorneys, and medical providers, making it more accurate than general-purpose AI products applied to the insurance industry.

As AI matures, focused applications like CLARA will deliver accurate, concentrated value.

Read more: Business Wire

📚️ Lawsuits pile on against OpenAI

OpenAI really can’t catch a break with all these lawsuits.

Most recently, the AI giant is facing a potential lawsuit from The New York Times over alleged copyright violations, with the outlet claiming ChatGPT is using Times content to train itself without permission.

After months of failed licensing negotiations, the Times now sees ChatGPT as a direct competitor that could replace journalists by generating original reporting.

It's still largely unclear if OpenAI violated copyright laws, but with this lawsuit, the Times would be venturing into uncharted legal territory.

If found liable, though, OpenAI could face over $150,000 per infringement.

That’s a steeeep price to pay.

Overall: The case highlights a growing backlash against AI potentially replacing writers.

Previously, we wrote about comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey suing OpenAI for alleged unauthorized use of their books. Now, this.

With licensing deals unattainable and legal action looming, AI firms may need new strategies to use content fairly. The Times suit could set major precedents on IP rights and remedies, but a failure to address content concerns may only invite more litigation down the line.

Read more: Gizmodo

📊 What’s the best LLM?

Arthur AI, an ML monitoring platform, tested several LLMs’ accuracy across categories like math, US presidents, and Moroccan leaders. The goal was to assess rates of "hallucination," i.e. when AI fabricates facts.

Here are some superlatives that best describe the results (#throwback):

  • Most likely to be a mathematician: GPT-4

  • Most politically correct: Claude 2

  • Most likely to hallucinate: Cohere AI

Here’s a deeper breakdown:

  • GPT-4 performed best overall, hallucinating 33-50% less than GPT-3.5.

  • Claude 2 was most "self-aware," carefully gauging what it knew and what to say.

  • In math, GPT-4 came first, with Claude 2 close behind.

  • For US presidents, Claude 2 took first place.

  • On Moroccan politics, GPT-4 won again. Claude 2 and Llama 2 declined to answer, showing appropriate hesitation.

  • Cohere's model hallucinated frequently and confidently. It also never hedged with disclaimers like "As an AI, I cannot provide opinions."

  • GPT-4 hedged 50% more than GPT-3.5, frustrating some users.
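To make the categories above concrete, here is a minimal sketch of how one might separate hallucinations (confident wrong answers) from hedges (refusals and disclaimers) when scoring model outputs. Everything here — the `HEDGE_PHRASES` list, the `classify_answer` and `score` helpers, and the substring matching — is an illustrative assumption, not Arthur AI's actual methodology.

```python
# Hypothetical scoring sketch: label each model answer as correct, a hedge,
# or a hallucination, then report the fraction in each category.

HEDGE_PHRASES = ("as an ai", "i cannot", "i don't know", "i'm not sure")

def classify_answer(answer: str, correct: str) -> str:
    """Label an answer as 'correct', 'hedge', or 'hallucination'."""
    text = answer.lower()
    # A disclaimer or refusal counts as a hedge, not a hallucination.
    if any(phrase in text for phrase in HEDGE_PHRASES):
        return "hedge"
    # Naive check: does the known-correct answer appear in the response?
    return "correct" if correct.lower() in text else "hallucination"

def score(answers: list[tuple[str, str]]) -> dict[str, float]:
    """Return the fraction of (answer, ground_truth) pairs per category."""
    labels = [classify_answer(a, c) for a, c in answers]
    return {k: labels.count(k) / len(labels)
            for k in ("correct", "hedge", "hallucination")}
```

For example, `score([("The 16th US president was Abraham Lincoln.", "Abraham Lincoln"), ("As an AI, I cannot provide opinions.", "Hassan II"), ("The answer is 12.", "17")])` would put one answer in each bucket. A real evaluation would use semantic matching rather than substrings, but the correct/hedge/hallucination split is the core idea behind the superlatives above.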

Overall: The report comes as concerns about AI misinformation loom large ahead of the 2024 US election.

Leaderboard metrics don't equal real-world performance, but using a chatbot to its strengths could be the key to maximizing its utility, understanding its limitations, and ensuring it doesn’t hallucinate.

Read more: CNBC

🤑 AI Fundraising News

OneBlinc, a fintech specializing in payroll loans for federal employees, secures a $100M credit facility with Clear Haven.

SportsVisio raises $3M in seed funding to provide AI technology that brings augmented reality to amateur sports. It allows users to record any game or live event on their smartphone, stream live, review statistics, access analytics, and create highlights in real time.

🗞️ AI Quick-Bytes

What else is going on?

  1. AI could dwarf Industrial Revolution's impact on 'all elements of life,' senior UK official says

  2. Cisco Gains After CEO Points to Headway in AI and Security

  3. Google A.I. researcher says he left to build a startup after encountering ‘big company-itis.’

  4. Opera’s iOS web browser gains an AI companion with Aria

  5. Hozier would consider a strike over AI threat to music

  6. OpenAI acquires AI design studio Global Illumination

  7. Microsoft CEO Says AI Is a Tidal Wave as Big as the Internet

🐦️ Tweet of the day

🚀 Enterprise AI is growing faster and faster by the day

What did you think of today's newsletter?


And that does it for today's issue.

As always, thanks for reading. Have a great day, and see you next time! ✌️

— Haroon: (definitely) Not A Robot and @haroonchoudery on Twitter

P.S.: Check us out on our socials - snacks on us if you also hit follow :)
