Vision - Open-Source Visual Segmentation Framework Grounded SAM2: Setup and Inference Tutorial


Welcome to follow my CSDN: https://spike.blog.csdn.net/
This article: https://spike.blog.csdn.net/article/details/143388189
  Disclaimer: This article is based on personal knowledge and public materials and is intended for academic exchange only. Discussion is welcome; reposting is not permitted.


Grounded SAM2 is a vision AI framework that integrates several state-of-the-art models, combining GroundingDINO, Florence-2, and SAM2 to advance open-vocabulary object detection, segmentation, and tracking. Given a natural-language description, it locates the matching targets in an image, generates fine-grained segmentation masks for them, and tracks them across video sequences while keeping their IDs consistent.
Paper: Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks (the SAM component has since been upgraded from v1.0 to v2.0)
1. Environment Setup

GitHub: Grounded-SAM-2
  git clone https://github.com/IDEA-Research/Grounded-SAM-2
  cd Grounded-SAM-2
Download the SAM 2.1 checkpoint (a .pt file) and the GroundingDINO checkpoint (a .pth file):
  wget https://huggingface.co/facebook/sam2.1-hiera-large/resolve/main/sam2.1_hiera_large.pt?download=true -O sam2.1_hiera_large.pt
  wget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swint_ogc.pth
Alternatively, if the checkpoints already exist locally (here, inside a ComfyUI installation), symlink them into place:
  cd checkpoints
  ln -s [your path]/llm/workspace_comfyui/ComfyUI/models/sam2/sam2_hiera_large.pt sam2_hiera_large.pt
  cd ../gdino_checkpoints
  ln -s [your path]/llm/workspace_comfyui/ComfyUI/models/grounding-dino/groundingdino_swinb_cogcoor.pth groundingdino_swinb_cogcoor.pth
  ln -s [your path]/llm/workspace_comfyui/ComfyUI/models/grounding-dino/groundingdino_swint_ogc.pth groundingdino_swint_ogc.pth
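Before continuing, it is worth confirming from the repo root that the checkpoints (or symlinks) actually resolve. A minimal Python sketch using the file names from this tutorial; note that Path.exists() follows symlinks, so a dangling link reports False:
  from pathlib import Path

  # Verify that each checkpoint file or symlink target exists.
  for ckpt in [
      "checkpoints/sam2.1_hiera_large.pt",
      "gdino_checkpoints/groundingdino_swint_ogc.pth",
      "gdino_checkpoints/groundingdino_swinb_cogcoor.pth",
  ]:
      print(ckpt, Path(ckpt).exists())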
Activate the conda environment:
  conda activate sam2
Test PyTorch in a Python shell, then check CUDA_HOME back in the shell:
  import torch
  print(torch.__version__)  # 2.5.0+cu124
  print(torch.cuda.is_available())  # True
  exit()
  echo $CUDA_HOME
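If CUDA_HOME is empty, the editable installs below may fail to build their CUDA extensions. A minimal Python sketch for checking the relevant pieces (the toolkit path in the comment is only an example):
  import os
  import torch

  # CUDA_HOME should point at a full CUDA toolkit, e.g. /usr/local/cuda (example path).
  print(os.environ.get("CUDA_HOME"))
  print(torch.version.cuda)             # CUDA version PyTorch was built with
  print(torch.cuda.get_device_name(0))  # visible GPU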
Install Grounding DINO:
  pip install --no-build-isolation -e grounding_dino
  pip show groundingdino
Install SAM2:
  pip install --no-build-isolation -e .
  pip install --no-build-isolation -e ".[notebooks]"  # Jupyter support
  pip show SAM-2
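A quick sanity check that both editable installs are importable (a minimal sketch; package names as reported by pip show above):
  # Both packages should import cleanly after the editable installs.
  import sam2
  import groundingdino
  print("sam2 and groundingdino imported OK")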
  For the environment parameters, see: Open-Source Visual Segmentation Algorithm SAM2 (Segment Anything 2): Setup and Inference.
  Install the Grounding DINO dependencies:
  cd grounding_dino/
  pip install -r requirements.txt --verbose
2. Testing on an Image

Test script: grounded_sam2_local_demo.py
Import the required packages:
  import os
  import cv2
  import json
  import torch
  import numpy as np
  import supervision as sv
  import pycocotools.mask as mask_util
  from pathlib import Path
  from torchvision.ops import box_convert
  from sam2.build_sam import build_sam2
  from sam2.sam2_image_predictor import SAM2ImagePredictor
  from grounding_dino.groundingdino.util.inference import load_model, load_image, predict
  from PIL import Image
  import matplotlib.pyplot as plt
Configure the data and the runtime settings, including:

  • Input text prompt, e.g., socks and guitar
  • Input image
  • SAM2 model (v2.1) and its config
  • GroundingDINO (DETR with Improved deNoising anchOr boxes) model and its config
  • Box threshold and text threshold
  • Output directory and a JSON dump flag

That is (note the prompt format: category phrases separated by periods, typically lower-cased, as in "socks. guitar."):
  TEXT_PROMPT = "socks. guitar."
  # IMG_PATH = "notebooks/images/truck.jpg"
  IMG_PATH = "[your path]/llm/vision_test_data/image2.png"
  image = Image.open(IMG_PATH)
  plt.figure(figsize=(9, 6))
  plt.title("annotated_frame")
  plt.imshow(image)
  SAM2_CHECKPOINT = "./checkpoints/sam2.1_hiera_large.pt"
  SAM2_MODEL_CONFIG = "configs/sam2.1/sam2.1_hiera_l.yaml"
  GROUNDING_DINO_CONFIG = "grounding_dino/groundingdino/config/GroundingDINO_SwinT_OGC.py"
  GROUNDING_DINO_CHECKPOINT = "gdino_checkpoints/groundingdino_swint_ogc.pth"
  BOX_THRESHOLD = 0.35
  TEXT_THRESHOLD = 0.25
  DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
  OUTPUT_DIR = Path("outputs/grounded_sam2_local_demo")
  DUMP_JSON_RESULTS = True
  # create output directory
  OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
Load the SAM2 model to get sam2_predictor:
  # build SAM2 image predictor
  sam2_checkpoint = SAM2_CHECKPOINT
  model_cfg = SAM2_MODEL_CONFIG
  sam2_model = build_sam2(model_cfg, sam2_checkpoint, device=DEVICE)
  sam2_predictor = SAM2ImagePredictor(sam2_model)
Load the GroundingDINO model to get grounding_model:
  # build grounding dino model
  grounding_model = load_model(
      model_config_path=GROUNDING_DINO_CONFIG,
      model_checkpoint_path=GROUNDING_DINO_CHECKPOINT,
      device=DEVICE
  )
Load the image and hand it to SAM2; set_image() computes the image embedding once, so later predict() calls with different prompts reuse it:
  text = TEXT_PROMPT
  img_path = IMG_PATH
  # image_source is the original image; image is the transformed (normalized) version
  image_source, image = load_image(img_path)
  sam2_predictor.set_image(image_source)
GroundingDINO predicts the bounding boxes, given the model, image, text, and the box/text thresholds:

  • load_image() and predict() both come from GroundingDINO, so the data and the model match.

  boxes, confidences, labels = predict(
      model=grounding_model,
      image=image,
      caption=text,
      box_threshold=BOX_THRESHOLD,
      text_threshold=TEXT_THRESHOLD,
  )
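The return values are easy to misread, so a small inspection sketch (assuming n detections; the boxes come back as normalized cxcywh coordinates in [0, 1], as the scaling in the next step shows, and labels holds the matched phrases):
  print(boxes.shape)        # torch.Size([n, 4]), normalized cxcywh
  print(confidences.shape)  # torch.Size([n])
  print(labels)             # e.g. ['socks', 'guitar']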
Convert the boxes from normalized cxcywh to absolute xyxy format:
  h, w, _ = image_source.shape
  boxes = boxes * torch.Tensor([w, h, w, h])
  input_boxes = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()
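As a worked example (hypothetical box and image size): a normalized cxcywh box (0.5, 0.5, 0.2, 0.4) on a 1000×800 image scales to (500, 400, 200, 320), then converts to corner form:
  # (cx, cy, w, h) = (500, 400, 200, 320) -> (x1, y1, x2, y2) = (400, 240, 600, 560)
  example = torch.tensor([[0.5, 0.5, 0.2, 0.4]]) * torch.tensor([1000.0, 800.0, 1000.0, 800.0])
  print(box_convert(boxes=example, in_fmt="cxcywh", out_fmt="xyxy"))
  # tensor([[400., 240., 600., 560.]])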
PyTorch settings that SAM2 relies on:
  # FIXME: figure out how this influences the G-DINO model
  torch.autocast(device_type="cuda", dtype=torch.bfloat16).__enter__()
  if torch.cuda.get_device_properties(0).major >= 8:
      # turn on tfloat32 for Ampere GPUs (https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices)
      torch.backends.cuda.matmul.allow_tf32 = True
      torch.backends.cudnn.allow_tf32 = True
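Calling __enter__() by hand leaves bfloat16 autocast enabled globally for the rest of the script. If you prefer scoped mixed precision, an equivalent sketch wraps only the prediction in a with block (the helper name below is hypothetical, not part of the demo):
  def predict_with_autocast(predictor, input_boxes):
      # Scope bfloat16 autocast to this call only, instead of enabling it globally.
      with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
          return predictor.predict(
              point_coords=None,
              point_labels=None,
              box=input_boxes,
              multimask_output=False,
          )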
SAM2 predicts the masks, prompted with the detected boxes:
  masks, scores, logits = sam2_predictor.predict(
      point_coords=None,
      point_labels=None,
      box=input_boxes,
      multimask_output=False,
  )
Post-process the prediction results:
  """
  Post-process the output of the model to get the masks, scores, and logits for visualization
  """
  # convert the shape to (n, H, W)
  if masks.ndim == 4:
      masks = masks.squeeze(1)
  confidences = confidences.numpy().tolist()
  class_names = labels
  class_ids = np.array(list(range(len(class_names))))
  labels = [
      f"{class_name} {confidence:.2f}"
      for class_name, confidence
      in zip(class_names, confidences)
  ]
Visualize the output:
  """
  Visualize image with supervision useful API
  """
  img = cv2.imread(img_path)
  detections = sv.Detections(
      xyxy=input_boxes,  # (n, 4)
      mask=masks.astype(bool),  # (n, h, w)
      class_id=class_ids
  )
  box_annotator = sv.BoxAnnotator()
  annotated_frame = box_annotator.annotate(scene=img.copy(), detections=detections)
  label_annotator = sv.LabelAnnotator()
  annotated_frame = label_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels)
  cv2.imwrite(os.path.join(OUTPUT_DIR, "groundingdino_annotated_image.jpg"), annotated_frame)
  plt.figure(figsize=(9, 6))
  plt.title("annotated_frame")
  plt.imshow(annotated_frame[:, :, ::-1])  # BGR -> RGB for matplotlib
  mask_annotator = sv.MaskAnnotator()
  annotated_frame = mask_annotator.annotate(scene=annotated_frame, detections=detections)
  cv2.imwrite(os.path.join(OUTPUT_DIR, "grounded_sam2_annotated_image_with_mask.jpg"), annotated_frame)
  plt.figure(figsize=(9, 6))
  plt.title("annotated_frame")
  plt.imshow(annotated_frame[:, :, ::-1])  # BGR -> RGB for matplotlib
GroundingDINO box results, correctly detecting both entity classes, socks and guitar:

SAM2 segmentation results:

Convert to the COCO (RLE) data format:
  def single_mask_to_rle(mask):
      rle = mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0]
      rle["counts"] = rle["counts"].decode("utf-8")
      return rle

  if DUMP_JSON_RESULTS:
      # convert mask into rle format
      mask_rles = [single_mask_to_rle(mask) for mask in masks]
      input_boxes = input_boxes.tolist()
      scores = scores.tolist()
      # save the results in standard format
      results = {
          "image_path": img_path,
          "annotations" : [
              {
                  "class_name": class_name,
                  "bbox": box,
                  "segmentation": mask_rle,
                  "score": score,
              }
              for class_name, box, mask_rle, score in zip(class_names, input_boxes, mask_rles, scores)
          ],
          "box_format": "xyxy",
          "img_width": w,
          "img_height": h,
      }

      with open(os.path.join(OUTPUT_DIR, "grounded_sam2_local_image_demo_results.json"), "w") as f:
          json.dump(results, f, indent=4)
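To sanity-check the RLE encoding, one can decode a mask back and compare pixel counts (a sketch; pycocotools handles the utf-8 counts string produced above):
  if DUMP_JSON_RESULTS and mask_rles:
      decoded = mask_util.decode(mask_rles[0])  # (H, W) uint8 binary mask
      print(decoded.shape, decoded.sum(), masks[0].sum())  # pixel counts should match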