Installing llama.cpp from source and configuring debugging
Build and compile
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Debug
cmake --build build --config Debug
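If the build succeeds, the example binaries land in build/bin. A quick check (a sketch, assuming the default target names of current llama.cpp) confirms that llama-simple was built and carries debug symbols:

ls build/bin/                    # llama-simple, llama-cli, etc. should be listed
file build/bin/llama-simple      # should report "with debug_info, not stripped"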
Configure launch.json for debugging:
Adjust the paths below to match your own environment.
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "(gdb) Launch",
            "type": "cppdbg",
            "request": "launch",
            "program": "${workspaceFolder}/build/bin/llama-simple", // executable to debug
            "args": [ // command-line arguments passed to the program
                "-m", "output.gguf",
                "-n", "32",
                "-ngl", "99",
                "Hello my name is"
            ],
            "stopAtEntry": false,
            "cwd": "${workspaceFolder}",
            "environment": [],
            "externalConsole": false,
            "MIMode": "gdb", // use gdb as the debugger backend
            "setupCommands": [
                {
                    "description": "Enable pretty-printing for gdb",
                    "text": "-enable-pretty-printing",
                    "ignoreFailures": true
                },
                {
                    "description": "Set disassembly flavor to Intel",
                    "text": "-gdb-set disassembly-flavor intel",
                    "ignoreFailures": true
                }
            ],
            "miDebuggerPath": "/usr/bin/gdb" // path to the gdb binary
        }
    ]
}
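Save this as .vscode/launch.json in the llama.cpp checkout. Note that launching the debugger with this configuration does not rebuild the project, so rerun the cmake build above after changing any source files.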
Convert the model to GGUF format
python convert_hf_to_gguf.py --outtype f16 --outfile "output.gguf" "/raid/home/huafeng/models/Meta-Llama-3-8B-Instruct"
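convert_hf_to_gguf.py needs the Python dependencies shipped with the repository (transformers, gguf, and so on). A minimal sketch of preparing the environment and checking the result, assuming the command is run from the llama.cpp source tree:

pip install -r requirements.txt    # dependencies for the conversion scripts
ls -lh output.gguf                 # an 8B model at f16 is roughly 16 GB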
Run the first program
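A sketch of running the example from a terminal with the same arguments used in launch.json, assuming output.gguf sits in the repository root:

./build/bin/llama-simple -m output.gguf -n 32 -ngl 99 "Hello my name is"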
Debug the program (llama.cpp/examples/simple/simple.cpp)
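The same session can also be driven from gdb directly instead of VS Code. A sketch, assuming the binary and model paths above, that breaks on llama_decode, the llama.cpp function simple.cpp calls for each evaluation step:

gdb --args ./build/bin/llama-simple -m output.gguf -n 32 -ngl 99 "Hello my name is"
(gdb) break llama_decode
(gdb) run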