🧠 Vision Models
PLUS: Meta’s newest open-source model
New here? Welcome to NAB, and make sure to subscribe to stay informed on the latest developments in AI!
Happy Friday, fellow humans 👋
Meta releases its newest suite of open-source AI models, Mistral unveils Mistral Large 2, and MIT researchers develop a system for automated interpretation of AI vision models…
Let’s dive into it!
Have anything you’d like to share with over 40k AI enthusiasts?
🧵 In today's edition:
🦙 Meta’s newest open-source model
🗣️ Mistral isn’t backing down
🧠 Vision Models
🛠️ AI Tools to Check Out
🤑 AI Fundraising News
🐦 Tweet Post of the Day
On Wednesday, Meta unveiled Llama 3.1, its latest suite of open-source AI models.
The collection's crown jewel is a groundbreaking 405B-parameter model, representing a significant advancement in open-source AI capabilities.
This release narrows the gap between open and closed-source models, with Llama 3.1 demonstrating competitive performance across a wide range of tasks.
Here are some key highlights:
Llama 3.1 405B is the largest openly available foundation model, competitive with leading AI systems.
Upgraded 8B and 70B models with multilingual support and 128K context length.
Improved performance in general knowledge, math, tool use, and multilingual translation.
New license allowing developers to use model outputs to enhance other models.
Extensive evaluations on over 150 benchmark datasets and human-led assessments.
The release also introduces:
A full reference system with sample applications.
New safety tools like Llama Guard 3 and Prompt Guard.
A proposal for a "Llama Stack" API to standardize ecosystem development.
Overall: Meta emphasizes the importance of open-source AI for innovation and democratization. The Llama ecosystem, supported by over 25 partners, enables developers to customize and deploy models in various environments.
While the 405B model requires significant resources, Meta and its partners are working to make it more accessible for developers.
The company encourages the community to explore new applications, including synthetic data generation and model distillation, furthering the potential of open-source AI.
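If you want to kick the tires on the smaller checkpoints yourself, here's a minimal sketch using the Hugging Face transformers pipeline. The gated repo id below is our best guess at the published checkpoint name (it follows the naming of earlier Llama releases), not something confirmed in this announcement, and you'll need to request access to the weights first:

```python
# Minimal sketch: chatting with the Llama 3.1 8B instruct model via
# Hugging Face transformers. The repo id is an assumption; access to
# the weights is gated and must be requested on the Hub.
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed repo id

pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize what Llama 3.1 adds over Llama 3."},
]

# The pipeline returns the full conversation; the last message is the reply.
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```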
Read more: Meta AI
Source: VentureBeat
Just 24 hours after Meta released Llama 3.1, Mistral AI unveiled Mistral Large 2, its latest flagship AI model. The 123-billion-parameter model boasts significant improvements over its predecessor, particularly in code generation, mathematics, reasoning, and multilingual support.
Its key features include:
128k context window.
Support for dozens of languages and 80+ coding languages.
Designed for single-node inference with long-context applications.
Available under Mistral Research License for non-commercial use.
Its performance highlights:
84.0% accuracy on MMLU (pre-trained version).
Competitive with leading models like GPT-4o and Claude 3 Opus in code and reasoning tasks.
Improved instruction-following and conversational abilities.
Enhanced multilingual capabilities, excelling in 13+ languages.
Advanced function calling and retrieval skills.
Mistral Large 2 is accessible via la Plateforme under the name mistral-large-2407, with weights available on HuggingFace. The company is consolidating its offering around two general-purpose models (Mistral Nemo and Mistral Large) and two specialist models (Codestral and Embed).
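For the curious, here's a hedged sketch of what calling mistral-large-2407 over la Plateforme might look like, assuming Mistral's usual OpenAI-style chat-completions endpoint and an API key in your environment:

```python
# Hedged sketch: querying mistral-large-2407 through la Plateforme's
# HTTP API. Endpoint path and payload shape are assumptions based on
# Mistral's existing chat-completions API, not on this announcement.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-2407",
        "messages": [
            {"role": "user", "content": "Write a haiku about open weights."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```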
Overall: Fine-tuning capabilities have been extended to Mistral Large, Mistral Nemo, and Codestral on la Plateforme.
Additionally, Mistral AI has expanded partnerships with cloud service providers, making their models available on platforms like Google Cloud's Vertex AI, Azure AI Studio, Amazon Bedrock, and IBM watsonx.ai.
Read more: Mistral AI
MIT researchers have developed MAIA (Multimodal Automated Interpretability Agent), an innovative system for automated interpretation of AI vision models.
Why? To address the growing need to understand complex AI systems as they become increasingly integrated into various sectors.
Key features of MAIA:
Automates neural network interpretability tasks.
Uses a vision-language model with tools for experimenting on AI systems.
Generates hypotheses, designs experiments, and refines understanding iteratively.
Combines pre-trained vision-language models with interpretability tools.
Responds to user queries by conducting targeted experiments.
MAIA demonstrates proficiency in three main tasks:
Labeling and describing visual concepts that activate individual components in vision models.
Cleaning up image classifiers by removing irrelevant features.
Identifying hidden biases in AI systems.
The system outperformed baseline methods in describing individual neurons in various vision models and performed well on a new dataset of synthetic neurons. MAIA's flexibility allows it to address a wide range of interpretability queries and design experiments on the fly.
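To make that loop concrete, here's a deliberately toy sketch of the hypothesize-probe-keep idea. It is our own illustration, not MAIA's code: every name in it is hypothetical, and the real system drives a pre-trained vision-language model with image-synthesis and activation-measurement tools rather than string matching:

```python
# Illustrative only: a toy caricature of MAIA-style interpretation.
# The agent proposes candidate descriptions of a unit, probes the unit
# with experiments, and keeps whichever description the evidence favors.
from dataclasses import dataclass

@dataclass
class Experiment:
    prompt: str        # what the agent would ask an image tool to render
    activation: float  # measured response of the unit under study

def measure_activation(unit, prompt: str) -> float:
    # Stand-in for synthesizing an image and reading one neuron's response.
    return unit(prompt)

def interpret_unit(unit, hypotheses: list[str], threshold: float = 0.5) -> str:
    """Test candidate descriptions of what activates `unit`, keeping the
    one whose probes drive the strongest response."""
    best, best_score = "unknown", float("-inf")
    for hypothesis in hypotheses:
        exp = Experiment(hypothesis, measure_activation(unit, hypothesis))
        if exp.activation > best_score:
            best, best_score = hypothesis, exp.activation
    return best if best_score > threshold else "no clear concept found"

# Toy "neuron" that responds to dog-related prompts.
toy_unit = lambda prompt: 1.0 if "dog" in prompt else 0.1
print(interpret_unit(toy_unit, ["a red car", "a dog's ears", "a forest"]))
```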
Overall: While MAIA shows promise, it does have limitations. These include reliance on external tools and occasional confirmation bias. However, as tools like image synthesis models improve, so will MAIA's performance.
Read more: MIT
🛠️ AI Tools to Check Out
Scade: Effortlessly integrate audio, image, text, and video AI models with our no-code platform to speed up your projects and reduce costs by 90%. Check it out!
Colocio: Create, evaluate and automate your online campaigns with the power of AI. Check it out!
Figr: Get editable UI of popular apps in a click. Check it out!
🤑 AI Fundraising News
Hash AI raises $10M in funding to provide solutions for cryptocurrency mining, AI-optimized operations, and sustainable practices.
Promptfoo raises $5M in Seed funding for its open-source AI pen-testing and evaluation framework, used by over 25,000 software engineers at companies like Shopify, Amazon, and Anthropic to find and fix vulnerabilities in their AI applications.
🐦 Tweet Post of the Day
🤣🤣🤣
Product managers calling into standup just to say “no update” after spending the last 3 days creating JIRA tickets in the pool
— Alex Cohen 🤠 (@anothercohen)
2:25 AM • Jul 24, 2024
What did you think of today's newsletter?
And that does it for today's issue.
As always, thanks for reading. Have a great day, and see you next time! ✌️
— Haroon: (definitely) Not A Bot and @haroonchoudery on Twitter
P.S.: Check us out on our socials - snacks on us if you also hit follow. :)
Autoblocks | Twitter | LinkedIn | YouTube