https://ai.meta.com/blog/segment-anything-2/
Takeaways
- Following up on the success of the Meta Segment Anything Model (SAM) for images, we're releasing SAM 2, a unified model for real-time promptable object segmentation in images and videos that achieves state-of-the-art performance.
- In keeping with our approach to open science, we're sharing the code and model weights with a permissive Apache 2.0 license.
- We're also sharing the SA-V dataset, which includes approximately 51,000 real-world videos and more than 600,000 masklets (spatio-temporal masks).
- SAM 2 can segment any object in any video or image - even objects and visual domains it has not seen previously - enabling a diverse range of use cases without custom adaptation (see the usage sketch after this list).
- SAM 2 has many potential real-world applications. For example, the outputs of SAM 2 can be used with a generative video model to create new video effects and unlock new creative applications. SAM 2 could also speed up the annotation of visual data, helping to build better computer vision systems.
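For readers who want a concrete sense of "promptable" segmentation, here is a minimal sketch of clicking on an object in one frame and propagating that prompt through a video to get a masklet. Names follow the public `sam2` repository's README; the checkpoint path, config name, file names, and click coordinates are assumptions for illustration, not part of this announcement.

```python
# Minimal sketch: prompt SAM 2 with a single click on an image, then track
# the same object through a video as a masklet (spatio-temporal mask).
# Checkpoint/config paths and input file names below are assumed.
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2, build_sam2_video_predictor
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2_hiera_large.pt"  # assumed download location
model_cfg = "sam2_hiera_l.yaml"

# --- Image: one foreground click yields a mask for the clicked object ---
image_predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))
image = np.array(Image.open("example.jpg").convert("RGB"))  # hypothetical file

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    image_predictor.set_image(image)
    masks, scores, _ = image_predictor.predict(
        point_coords=np.array([[500, 375]]),  # (x, y) click on the object
        point_labels=np.array([1]),           # 1 = foreground, 0 = background
    )

# --- Video: the same click on one frame produces a masklet across frames ---
video_predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = video_predictor.init_state("video_frames/")  # hypothetical frame dir
    video_predictor.add_new_points(
        state, frame_idx=0, obj_id=1,
        points=np.array([[500, 375]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )
    # Propagate the prompt through the video to get per-frame masks.
    for frame_idx, object_ids, mask_logits in video_predictor.propagate_in_video(state):
        pass  # e.g., threshold mask_logits > 0 and overlay on each frame
```

Because segmentation is interactive, additional clicks on later frames can refine the masklet without restarting, which is what makes prompt-based correction practical for annotation workflows.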