ToB Enterprise Services App Market: a ToB review and business-networking platform

Title: Kaggle run fails with RuntimeError: cutlassF: no kernel found to launch!

Author: 梦见你的名字    Time: 2024-9-12 09:41
Project scenario:

Running inference with the original Llama 3; everything up to this point works:
  # Download the model with modelscope, then load it with transformers
  !pip install -q modelscope
  from modelscope import snapshot_download

  model_dir = snapshot_download('LLM-Research/Meta-Llama-3-8B-Instruct', cache_dir='/root/autodl-tmp', revision='master')

  from transformers import AutoTokenizer, AutoModelForCausalLM

  tokenizer = AutoTokenizer.from_pretrained(model_dir)
  model = AutoModelForCausalLM.from_pretrained(
      model_dir, torch_dtype="auto", device_map="auto"
  )

Problem description

RuntimeError: cutlassF: no kernel found to launch!
  messages = [
      {
          'role': 'user',
          'content': """hello"""
      }
  ]
  input_ids = tokenizer.apply_chat_template(
      messages, add_generation_prompt=True, return_tensors="pt"
  ).to(model.device)
  outputs = model.generate(
      input_ids=input_ids,
      max_new_tokens=8192,
      do_sample=True,
      temperature=0.6,
      top_p=0.9,
  )
  response = outputs[0][input_ids.shape[-1]:]
  print(tokenizer.decode(response, skip_special_tokens=True))
The error is raised at outputs = model.generate().

Cause analysis:

At first I suspected a CUDA problem, or a CPU/GPU device-placement problem:

  # Check which device the model parameters actually live on
  device = next(model.parameters()).device
  print(f"Model is on {device}")

The printout looked fine, so device placement was not the issue.
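As a further diagnostic (my own suggestion, not from the original post), you can print which scaled-dot-product-attention (SDPA) backends PyTorch currently allows; the error means that none of the enabled backends had a kernel it could launch for the given dtype/shape/GPU. A minimal sketch, assuming PyTorch 2.x:

```python
import torch

# Global flags controlling which SDPA kernels
# torch.nn.functional.scaled_dot_product_attention may choose.
# All three default to True.
print("flash        :", torch.backends.cuda.flash_sdp_enabled())
print("mem_efficient:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math         :", torch.backends.cuda.math_sdp_enabled())
```

The math backend is the pure-PyTorch fallback that should run anywhere, which is why disabling the other two (below) works around the error.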

Solution:

I found a fix online that resolved the problem completely:

  # Disable the memory-efficient and flash SDPA kernels so that
  # attention falls back to the math implementation
  torch.backends.cuda.enable_mem_efficient_sdp(False)
  torch.backends.cuda.enable_flash_sdp(False)
Original link: https://stackoverflow.com/questions/77803696/runtimeerror-cutlassf-no-kernel-found-to-launch-when-running-huggingface-tran
