LLM evaluations are an important area of research. To support this work, today we're excited to announce a new LLM Evaluation Research Grant to foster innovation in this area and deepen our collaboration with the academic community. Selected grant recipients will receive $200K in funding to accelerate their work. As part of this new grant program, we encourage submissions that use evaluations in the areas of complex reasoning, emotional & social intelligence, and agentic behavior. We're accepting proposals through September 6th and you can find the full details here ➡️ https://go.fb.me/adkoj2
AI at Meta
Menlo Park, California · 843,744 followers
Together with the AI community, we’re pushing boundaries through open science to create a more connected world.
About us
Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas of AI, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.
- Website: https://ai.meta.com/
- Industry: Research Services
- Company size: 10,001+ employees
- Headquarters: Menlo Park, California
- Specialties: research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing
Updates
SAM 2 from Meta FAIR is the first unified model for real-time, promptable object segmentation in images & videos. Using the model in our web-based demo, you can segment, track and apply effects to objects in a video in just a few clicks. Try SAM 2 ➡️ https://go.fb.me/7tvmoj
Missed the conversation between Mark Zuckerberg and Jensen Huang at SIGGRAPH? Watch the whole conversation on AI and the next computing platforms ⬇️
AI and The Next Computing Platforms With Jensen Huang and Mark Zuckerberg
📣 Today we're opening a call for applications for Llama 3.1 Impact Grants! Until November 22, teams can submit proposals for using Llama to address social challenges across their communities for a chance to be awarded a $500K grant. Details + application ➡️ https://go.fb.me/rd22jf

This year we're expanding the Llama Impact Grants program by hosting a series of virtual events and in-person hackathons, workshops and trainings around the world, and providing technical guidance and mentorship to prospective applicants. These programs will support organizations in Egypt, India, Indonesia, Japan, the Kingdom of Saudi Arabia, Korea, Latin America, North America, Pakistan, Singapore, Sub-Saharan Africa, Taiwan, Thailand, Turkey, the United Arab Emirates and Vietnam.

We’re inspired by the diverse projects we’ve seen developers undertake around the world to positively impact their communities by building with Llama, and we're excited to support a new wave of global community impact with the Llama 3.1 Impact Grants.
📣 New and updated! Try experimental demos featuring the latest AI research from Meta FAIR:
- Segment Anything 2: Create video cutouts and other fun visual effects with a few clicks.
- Seamless Translation: Hear what you sound like in another language.
- Animated Drawings: Bring hand-drawn sketches to life with animations.
- Audiobox: Create an audio story with AI-generated voices and sounds.
Try the research demos ➡️ https://go.fb.me/brn8mg
The MLCommons #AlgoPerf competition was designed to find better training algorithms to speed up neural network training across a diverse set of workloads. Results of the inaugural competition were released today and we’re proud to share that teams from Meta took first place across both external tuning and self-tuning tracks!
🔗 Details
- Results from MLCommons ➡️ https://go.fb.me/poejsh
- Schedule Free ➡️ https://go.fb.me/5wf35d
- Distributed Shampoo research paper ➡️ https://go.fb.me/tns64m
📣 Just announced by Mark Zuckerberg at SIGGRAPH! Introducing Meta Segment Anything Model 2 (SAM 2), the first unified model for real-time, promptable object segmentation in images & videos. In addition to the new model, we’re also releasing SA-V, a dataset that’s 4.5x larger and has ~53x more annotations than the largest existing video segmentation dataset, to enable new research in computer vision.
- Details ➡️ https://go.fb.me/edcjv9
- Demo ➡️ https://go.fb.me/fq8oq2
- SA-V Dataset ➡️ https://go.fb.me/rgi4j0
SAM 2 is available today under Apache 2.0 so that anyone can use it to build their own experiences. Like the original SAM, SAM 2 can be applied out of the box to a diverse range of real-world use cases, and we’re excited to see what developers build.
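For developers curious what "promptable" segmentation looks like in code, here is a minimal sketch of point-prompted image segmentation with SAM 2. It assumes the `sam2` package from the SAM 2 GitHub release and network access to download a checkpoint; the predictor class and checkpoint name follow the repo's README and may change, so treat this as an illustrative sketch rather than official usage. The `point_prompt` helper is hypothetical, added here only to show the array shapes involved.

```python
import numpy as np

def point_prompt(x, y, positive=True):
    """Build a single point prompt in the array shapes SAM-style predictors
    expect: coordinates of shape (1, 2) and labels of shape (1,), where
    label 1 marks a foreground click and 0 a background click.
    (Hypothetical helper, for illustration only.)"""
    coords = np.array([[x, y]], dtype=np.float32)
    labels = np.array([1 if positive else 0], dtype=np.int32)
    return coords, labels

def segment_best_mask(image, coords, labels):
    """Run SAM 2 on one image with a point prompt and return the
    highest-scoring mask. Assumes the `sam2` package and a downloadable
    checkpoint; names here follow the SAM 2 repo README."""
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
    predictor.set_image(image)  # HxWx3 uint8 RGB array
    masks, scores, _ = predictor.predict(point_coords=coords, point_labels=labels)
    return masks[np.argmax(scores)]
```

A single foreground click at pixel (500, 375) would then be `coords, labels = point_prompt(500, 375)` followed by `mask = segment_best_mask(image, coords, labels)`; video segmentation uses a separate predictor in the repo and is not shown here.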
The livestream from #SIGGRAPH2024 kicks off at 3pm PT today ⬇️
Today is the day! We're excited to hear our CEO Mark Zuckerberg talk with NVIDIA CEO Jensen Huang at #SIGGRAPH2024 about AI breakthroughs and the next computing platforms. Watch the livestream here at 4pm MT (3pm PT / 6pm ET): https://bit.ly/46Gh1q9
As part of our release of Llama 3.1 and our continued support of open science, this week we published the full Llama 3 research paper, which covers a range of topics including insights on model training and architecture, and the results of our current work to integrate image, video and speech capabilities via a compositional approach. The Llama 3 Herd of Models paper ➡️ https://go.fb.me/1nmc78 We hope that sharing this research will help the larger research community understand the key factors of foundation-model development and contribute to a more informed public discussion about the future of foundation models.
Ready to start working with Llama 3.1? Check out all of the newest resources in the updated repos on GitHub:
- Official Llama repo ➡️ https://go.fb.me/wy18hm
- Llama recipes repo ➡️ https://go.fb.me/95swt9
- Llama Reference System ➡️ go.fb.me/q0cjcb
The repos include code, new training recipes, an updated model card, details on our latest trust & safety tools, a new reference system and more.
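Beyond the repos above, a common way to try a Llama 3.1 instruct model is through the Hugging Face `transformers` text-generation pipeline. The sketch below assumes that library, approved access to the gated `meta-llama/Meta-Llama-3.1-8B-Instruct` weights on the Hub, and a GPU with enough memory; it is a hedged illustration, not part of the official Llama recipes. `build_chat` simply assembles messages in the role/content format that chat pipelines accept.

```python
def build_chat(system_prompt, user_prompt):
    """Assemble messages in the role/content chat format accepted by
    transformers' chat-aware text-generation pipelines."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def generate(messages, max_new_tokens=256):
    """Run a Llama 3.1 instruct model via transformers. Assumes the
    `transformers` library, a GPU, and approved access to the gated
    meta-llama weights on the Hugging Face Hub."""
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    # For chat-format input, recent transformers versions return the full
    # message list with the assistant's reply appended at the end.
    out = pipe(messages, max_new_tokens=max_new_tokens)
    return out[0]["generated_text"][-1]["content"]
```

Typical use would be `generate(build_chat("You are a helpful assistant.", "Summarize the Llama 3 paper in one sentence."))`; the official repos and recipes cover fine-tuning, safety tooling and the reference system in more depth.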