llama-index, unstructured, and llama2:7b run locally to generate an index
Question: run llama-index, unstructured, and llama2:7b locally to generate an index. Background:
I wanted to use llama-index locally with Ollama and llama3:8b to index a UTF-8 JSON file. I don't have a GPU. I use unstructured to convert docs into JSON. If it is not possible to use llama-index locally without a GPU, I would like to use the Hugging Face Inference API instead, but I am not certain whether it is free. Can anyone suggest a way?
This is my Python code:
from llama_index.core import Document, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.ollama import Ollama
import json
from llama_index.core import Settings
# Convert the JSON document into LlamaIndex Document objects
with open('data/UBER_2019.json', 'r',encoding='utf-8') as f:
json_doc = json.load(f)
documents = [Document(text=json.dumps(json_doc))]
# Initialize Ollama with the local LLM
ollama_llm = Ollama(model="llama3:8b")
Settings.llm = ollama_llm
# Create the index using the local LLM
index = VectorStoreIndex.from_documents(documents)  # , llm=ollama_llm
But I keep getting an error that there is no OpenAI key. I wanted to use llama2 so that I don't require an OpenAI key.
Can anyone suggest what I am doing wrong? Also, can I use the Hugging Face Inference API to index a local JSON file for free?
Solution:
You are not setting the embedding model, so I think LlamaIndex is defaulting to OpenAI.
You must specify an embedding model that does not require an API key.
You can use Ollama:
from llama_index.embeddings.ollama import OllamaEmbedding
# Using Nomic
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")
# Using Llama
Settings.embed_model = OllamaEmbedding(model_name="llama2")
But there are many other options in the embeddings section of the LlamaIndex documentation.