LongVU: Meta AI's Long Video Understanding Model, Using Spatiotemporal Adaptive Compression to Transform Video Understanding


Meta AI has reached a notable milestone in video understanding with LongVU, a model that can understand long videos that were previously challenging for AI systems. The research paper "LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding" presents an approach that lets AI effectively process and understand videos lasting several minutes or even an hour, which was previously out of reach.

Multimodal Large Language Models (MLLMs) have shown promising progress in understanding and analyzing video content. However, processing long videos remains a significant challenge because of the limited context length. To address this limitation, we propose LongVU, a spatiotemporal adaptive compression mechanism that reduces the number of video tokens while preserving the visual details of long videos. Our idea is to leverage cross-modal queries and inter-frame dependencies to adaptively reduce temporal and spatial redundancy in videos. Specifically, we leverage DINOv2 features to remove redundant frames that exhibit high similarity. We then use a text-guided cross-modal query to selectively reduce frame features. In addition, we perform spatial token reduction across frames based on their temporal dependencies. This adaptive compression strategy effectively processes a large number of frames within a limited context length with little loss of visual information. Across a variety of video understanding benchmarks, LongVU consistently surpasses existing methods, especially on hour-long video understanding tasks such as VideoMME and MLVU. With a lightweight LLM, LongVU also scales effectively to a smaller size while retaining state-of-the-art video understanding performance.
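To make the temporal reduction step concrete, here is a minimal sketch (not the released implementation) of how highly similar adjacent frames could be pruned by comparing per-frame DINOv2 features; extract_dinov2_features and the similarity threshold are hypothetical placeholders used only for illustration.

import torch
import torch.nn.functional as F

def remove_redundant_frames(frames, extract_dinov2_features, sim_threshold=0.9):
    # frames: (T, C, H, W) tensor of sampled video frames
    # extract_dinov2_features: hypothetical callable returning one pooled (D,) feature per frame
    feats = F.normalize(extract_dinov2_features(frames), dim=-1)  # (T, D), unit-norm features
    kept = [0]  # always keep the first frame
    for t in range(1, frames.shape[0]):
        cos_sim = float(feats[t] @ feats[kept[-1]])  # cosine similarity to the last kept frame
        if cos_sim < sim_threshold:  # dissimilar enough -> keep this frame
            kept.append(t)
    return frames[kept], kept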
LongVU Architecture

Architecture of LongVU. Given densely sampled video frames, we first use DINOv2 to remove redundant frames and then fuse the SigLIP and DINOv2 features of the remaining frames. Next, we selectively reduce visual tokens via a cross-modal query. Finally, we perform spatial token compression based on temporal dependencies to further fit the limited context length of the LLM.
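As a rough illustration of the cross-modal query step, the sketch below scores each remaining frame's visual tokens against a pooled text-query embedding and keeps full token sets only for the top-scoring frames. This is an assumption-level approximation rather than the actual LongVU code; frame_tokens and text_embed are assumed inputs.

import torch

def select_frames_by_query(frame_tokens, text_embed, keep_full=8):
    # frame_tokens: (T, N, D) visual tokens per frame; text_embed: (D,) pooled query embedding
    # Relevance of a frame = max similarity between any of its tokens and the query
    scores = torch.einsum("tnd,d->tn", frame_tokens, text_embed).max(dim=1).values  # (T,)
    k = min(keep_full, scores.numel())
    full_res = scores.topk(k).indices.sort().values  # frames kept at full token resolution
    return full_res  # remaining frames would be spatially reduced instead

Frames that are not selected would then be handled by the spatial token compression described above.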



Example

# git clone https://github.com/Vision-CAIR/LongVU
import numpy as np
import torch
from longvu.builder import load_pretrained_model
from longvu.constants import (
    DEFAULT_IMAGE_TOKEN,
    IMAGE_TOKEN_INDEX,
)
from longvu.conversation import conv_templates, SeparatorStyle
from longvu.mm_datautils import (
    KeywordsStoppingCriteria,
    process_images,
    tokenizer_image_token,
)
from decord import cpu, VideoReader

# Load the LongVU checkpoint (Qwen2 backbone)
tokenizer, model, image_processor, context_len = load_pretrained_model(
    "./checkpoints/longvu_qwen", None, "cambrian_qwen",
)
model.eval()

video_path = "./examples/video1.mp4"
qs = "Describe this video in detail"

# Sample roughly one frame per second from the video
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
fps = float(vr.get_avg_fps())
frame_indices = np.array([i for i in range(0, len(vr), round(fps))])
video = []
for frame_index in frame_indices:
    img = vr[frame_index].asnumpy()
    video.append(img)
video = np.stack(video)

# Preprocess the sampled frames for the vision encoders
image_sizes = [video[0].shape[:2]]
video = process_images(video, image_processor, model.config)
video = [item.unsqueeze(0) for item in video]

# Build the prompt with the image placeholder token
qs = DEFAULT_IMAGE_TOKEN + "\n" + qs
conv = conv_templates["qwen"].copy()
conv.append_message(conv.roles[0], qs)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
keywords = [stop_str]
stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

# Greedy decoding over the sampled frames
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=video,
        image_sizes=image_sizes,
        do_sample=False,
        temperature=0.2,
        max_new_tokens=128,
        use_cache=True,
        stopping_criteria=[stopping_criteria],
    )
pred = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
GitHub: https://github.com/Vision-CAIR/LongVU
How to run on 24GB VRAM

https://github.com/Vision-CAIR/LongVU/issues/6
# git clone https://github.com/Vision-CAIR/LongVU
import numpy as np
import torch
from longvu.builder import load_pretrained_model
from longvu.constants import (
    DEFAULT_IMAGE_TOKEN,
    IMAGE_TOKEN_INDEX,
)
from longvu.conversation import conv_templates, SeparatorStyle
from longvu.mm_datautils import (
    KeywordsStoppingCriteria,
    process_images,
    tokenizer_image_token,
)
from decord import cpu, VideoReader

# Load the checkpoint directly from the Hugging Face Hub onto a single GPU
tokenizer, model, image_processor, context_len = load_pretrained_model(
    "Vision-CAIR/LongVU_Qwen2_7B",
    model_base=None,
    model_name="cambrian_qwen",
    device="cuda:0",
)
model.eval()

video_path = "./examples/video1.mp4"
qs = "Describe this video in detail"

vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
fps = float(vr.get_avg_fps())
# Cap the sampled range at the first 1000 frames to keep memory within 24GB
num_frames = 1000 if len(vr) > 1000 else len(vr)
frame_indices = np.array([i for i in range(0, num_frames, round(fps))])
video = []
for frame_index in frame_indices:
    img = vr[frame_index].asnumpy()
    video.append(img)
video = np.stack(video)

image_sizes = [video[0].shape[:2]]
video = process_images(video, image_processor, model.config)
video = [item.unsqueeze(0) for item in video]

qs = DEFAULT_IMAGE_TOKEN + "\n" + qs
conv = conv_templates["qwen"].copy()
conv.append_message(conv.roles[0], qs)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
keywords = [stop_str]
stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

# Compared with the reference example: pass an explicit attention mask and
# pad_token_id, enable sampling, and allow a longer answer (512 new tokens)
attention_mask = torch.ones_like(input_ids)
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        attention_mask=attention_mask,
        images=video,
        image_sizes=image_sizes,
        do_sample=True,
        temperature=0.2,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=512,
        use_cache=True,
        stopping_criteria=[stopping_criteria],
    )
pred = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
Output:
‘The video begins with a scene featuring two characters in an animated setting, one dressed in a bright yellow and red outfit with a mask, and the other in a blue and white traditional robe, standing on a rocky terrain with a green, leaf-like structure and a mountainous backdrop. The character in the yellow and red outfit is seen making a gesture with their right hand, while the other character appears to be speaking or reacting to the first character. The scene then transitions to a misty, ethereal environment where the same two characters are now standing on a staircase leading to a building with a golden roof, surrounded by smoke or clouds. The character in the yellow and red outfit is now holding a sword, while the other character is holding a fan, and both are looking up at the building. The scene shifts again to a large, ornate building with a golden roof, where a figure in a white and red outfit is seen descending a staircase, with smaller figures in white and red attire standing on the steps, and a large, white, cloud-like object in the foreground. The final scene shows the same building with the figure in white and red now seated on a golden throne, surrounded by smaller figures in white and red, and a large, white, cloud-like object still in the foreground, suggesting a ceremonial or significant event taking place.’
