[GraphRAG + Ollama Local Deployment] A Piping-Hot Beginner's Walkthrough
Environment Preparation + Source Download
1. Operating system: Ubuntu 20.04
2. Editor: VS Code
3. Python 3.11
4. Ollama installed
5. GraphRAG source: https://github.com/microsoft/graphrag.git
I. Create and Configure the Anaconda Virtual Environment
1. conda create --name graphR python=3.11
2. pip install graphrag==0.3.6 (newer graphrag versions tend to fail with: No module named graphrag.index.main)
3. pip install ollama
(If a missing-package error appears at any later step, just install the package the message names.)
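As an optional sanity check that both packages landed in the graphR environment (a minimal check, nothing more):

    conda activate graphR
    python -c "import graphrag, ollama; print('imports ok')"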
II. Download the Ollama Models
1. ollama serve (starts Ollama)
2. ollama pull mistral:v0.2
3. ollama pull nomic-embed-text:latest
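To verify that both models downloaded and that the embedding endpoint answers, a quick check (assuming Ollama is listening on its default port 11434) could be:

    ollama list
    curl http://localhost:11434/api/embeddings \
        -d '{"model": "nomic-embed-text:latest", "prompt": "hello"}'

The curl call should return a JSON object containing an "embedding" array.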
III. Create the Data Directory
Inside the graphrag source folder, create a ragtest folder, create an input folder inside it, and put your .txt data files in input.
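For example, using the Dickens novel from the official GraphRAG quickstart (which also matches the "who is Marley?" queries at the end of this guide; any plain-text file will do):

    mkdir -p ./ragtest/input
    curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./ragtest/input/book.txt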
IV. Initialize the Project
1. python -m graphrag.index --init --root ./ragtest
(Open a terminal in the graphrag source folder, activate the graphR environment, and run the command above. Afterwards the ragtest folder will contain settings.yaml, prompts, .env, and so on, normally six items in total. Sometimes only a few appear at first; the rest are generated automatically when the index is built later.)
V. Edit the Configuration File settings.yaml
Point the llm and embeddings sections at the local Ollama models, as sketched below.
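The fields that typically need changing in the generated settings.yaml look roughly like this (a minimal sketch for graphrag 0.3.x with Ollama on its default port 11434; the model names are the ones pulled in Section II, and the embeddings api_base matters less here because the source patches in Section VII call Ollama directly):

    llm:
      api_key: ${GRAPHRAG_API_KEY}
      type: openai_chat
      model: mistral:v0.2
      model_supports_json: true
      api_base: http://localhost:11434/v1

    embeddings:
      llm:
        api_key: ${GRAPHRAG_API_KEY}
        type: openai_embedding
        model: nomic-embed-text:latest
        api_base: http://localhost:11434/api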
VI. Edit the .env File
Modify it as shown below:

    GRAPHRAG_API_KEY=ollama
    GRAPHRAG_CLAIM_EXTRACTION_ENABLED=True
VII. Patch the graphrag Package Source in the graphR Environment
1. Locate the installed package: /home/***/.conda/envs/graphR/lib/python3.11/site-packages
2. Modify the first file: /home/***/.conda/envs/graphR/lib/python3.11/site-packages/graphrag/llm/openai/openai_embeddings_llm.py
- """The EmbeddingsLLM class."""
- from typing_extensions import Unpack
- from graphrag.llm.base import BaseLLM
- from graphrag.llm.types import (
- EmbeddingInput,
- EmbeddingOutput,
- LLMInput,
- )
- from .openai_configuration import OpenAIConfiguration
- from .types import OpenAIClientTypes
- import ollama # 增加依赖
- class OpenAIEmbeddingsLLM(BaseLLM[EmbeddingInput, EmbeddingOutput]):
- """A text-embedding generator LLM."""
- _client: OpenAIClientTypes
- _configuration: OpenAIConfiguration
- def __init__(self, client: OpenAIClientTypes, configuration: OpenAIConfiguration):
- self.client = client
- self.configuration = configuration
- async def _execute_llm(
- self, input: EmbeddingInput, **kwargs: Unpack[LLMInput]
- ) -> EmbeddingOutput | None:
- args = {
- "model": self.configuration.model,
- **(kwargs.get("model_parameters") or {}),
- }
- # 修改此处
- #embedding = await self.client.embeddings.create(
- # input=input,
- # **args,
- #)
- #return [d.embedding for d in embedding.data]
-
- embedding_list = []
- for inp in input:
- embedding = ollama.embeddings(model="nomic-embed-text:latest", prompt=inp)
- embedding_list.append(embedding["embedding"])
- return embedding_list
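Before moving on, you can exercise the exact Ollama call this patch relies on in isolation (a throwaway snippet; it assumes ollama serve is still running):

    import ollama

    # The same call the patched _execute_llm makes once per input string
    resp = ollama.embeddings(model="nomic-embed-text:latest", prompt="hello world")
    print(len(resp["embedding"]))  # nomic-embed-text yields 768-dimensional vectors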
3. Modify the second file: /home/***/.conda/envs/graphR/lib/python3.11/site-packages/graphrag/query/llm/oai/embedding.py
- """OpenAI Embedding model implementation."""
- import asyncio
- from collections.abc import Callable
- from typing import Any
- import numpy as np
- import tiktoken
- from tenacity import (
- AsyncRetrying,
- RetryError,
- Retrying,
- retry_if_exception_type,
- stop_after_attempt,
- wait_exponential_jitter,
- )
- from graphrag.logging import StatusLogger
- from graphrag.query.llm.base import BaseTextEmbedding
- from graphrag.query.llm.oai.base import OpenAILLMImpl
- from graphrag.query.llm.oai.typing import (
- OPENAI_RETRY_ERROR_TYPES,
- OpenaiApiType,
- )
- from graphrag.query.llm.text_utils import chunk_text
- # 增加依赖
- import ollama
- class OpenAIEmbedding(BaseTextEmbedding, OpenAILLMImpl):
- """Wrapper for OpenAI Embedding models."""
- def __init__(
- self,
- api_key: str | None = None,
- azure_ad_token_provider: Callable | None = None,
- model: str = "text-embedding-3-small",
- deployment_name: str | None = None,
- api_base: str | None = None,
- api_version: str | None = None,
- api_type: OpenaiApiType = OpenaiApiType.OpenAI,
- organization: str | None = None,
- encoding_name: str = "cl100k_base",
- max_tokens: int = 8191,
- max_retries: int = 10,
- request_timeout: float = 180.0,
- retry_error_types: tuple[type[BaseException]] = OPENAI_RETRY_ERROR_TYPES, # type: ignore
- reporter: StatusLogger | None = None,
- ):
- OpenAILLMImpl.__init__(
- self=self,
- api_key=api_key,
- azure_ad_token_provider=azure_ad_token_provider,
- deployment_name=deployment_name,
- api_base=api_base,
- api_version=api_version,
- api_type=api_type, # type: ignore
- organization=organization,
- max_retries=max_retries,
- request_timeout=request_timeout,
- reporter=reporter,
- )
- self.model = model
- self.encoding_name = encoding_name
- self.max_tokens = max_tokens
- self.token_encoder = tiktoken.get_encoding(self.encoding_name)
- self.retry_error_types = retry_error_types
- def embed(self, text: str, **kwargs: Any) -> list[float]:
- """
- Embed text using OpenAI Embedding's sync function.
- For text longer than max_tokens, chunk texts into max_tokens, embed each chunk, then combine using weighted average.
- Please refer to: https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
- """
- token_chunks = chunk_text(
- text=text, token_encoder=self.token_encoder, max_tokens=self.max_tokens
- )
- chunk_embeddings = []
- chunk_lens = []
- for chunk in token_chunks:
- try:
- #embedding, chunk_len = self._embed_with_retry(chunk, **kwargs)
- #修改embedding、chunk_len
- embedding = ollama.embeddings(model='nomic-embed-text:latest', prompt=chunk)['embedding']
- chunk_len = len(chunk)
- chunk_embeddings.append(embedding)
- chunk_lens.append(chunk_len)
- # TODO: catch a more specific exception
- except Exception as e: # noqa BLE001
- self._reporter.error(
- message="Error embedding chunk",
- details={self.__class__.__name__: str(e)},
- )
- continue
- #chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
- #chunk_embeddings = chunk_embeddings / np.linalg.norm(chunk_embeddings)
- #return chunk_embeddings.tolist()
- return chunk_embeddings
-
- async def aembed(self, text: str, **kwargs: Any) -> list[float]:
- """
- Embed text using OpenAI Embedding's async function.
- For text longer than max_tokens, chunk texts into max_tokens, embed each chunk, then combine using weighted average.
- """
- token_chunks = chunk_text(
- text=text, token_encoder=self.token_encoder, max_tokens=self.max_tokens
- )
- chunk_embeddings = []
- chunk_lens = []
- embedding_results = await asyncio.gather(*[
- self._aembed_with_retry(chunk, **kwargs) for chunk in token_chunks
- ])
- embedding_results = [result for result in embedding_results if result[0]]
- chunk_embeddings = [result[0] for result in embedding_results]
- chunk_lens = [result[1] for result in embedding_results]
- chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens) # type: ignore
- chunk_embeddings = chunk_embeddings / np.linalg.norm(chunk_embeddings)
- return chunk_embeddings.tolist()
- def _embed_with_retry(
- self, text: str | tuple, **kwargs: Any
- ) -> tuple[list[float], int]:
- try:
- retryer = Retrying(
- stop=stop_after_attempt(self.max_retries),
- wait=wait_exponential_jitter(max=10),
- reraise=True,
- retry=retry_if_exception_type(self.retry_error_types),
- )
- for attempt in retryer:
- with attempt:
- embedding = (
- self.sync_client.embeddings.create( # type: ignore
- input=text,
- model=self.model,
- **kwargs, # type: ignore
- )
- .data[0]
- .embedding
- or []
- )
- return (embedding, len(text))
- except RetryError as e:
- self._reporter.error(
- message="Error at embed_with_retry()",
- details={self.__class__.__name__: str(e)},
- )
- return ([], 0)
- else:
- # TODO: why not just throw in this case?
- return ([], 0)
- async def _aembed_with_retry(
- self, text: str | tuple, **kwargs: Any
- ) -> tuple[list[float], int]:
- try:
- retryer = AsyncRetrying(
- stop=stop_after_attempt(self.max_retries),
- wait=wait_exponential_jitter(max=10),
- reraise=True,
- retry=retry_if_exception_type(self.retry_error_types),
- )
- async for attempt in retryer:
- with attempt:
- embedding = (
- await self.async_client.embeddings.create( # type: ignore
- input=text,
- model=self.model,
- **kwargs, # type: ignore
- )
- ).data[0].embedding or []
- return (embedding, len(text))
- except RetryError as e:
- self._reporter.error(
- message="Error at embed_with_retry()",
- details={self.__class__.__name__: str(e)},
- )
- return ([], 0)
- else:
- # TODO: why not just throw in this case?
- return ([], 0)
4. Modify the third file: /home/***/.conda/envs/graphR/lib/python3.11/site-packages/graphrag/query/llm/text_utils.py
- """Text Utilities for LLM."""
- from collections.abc import Iterator
- from itertools import islice
- import tiktoken
- def num_tokens(text: str, token_encoder: tiktoken.Encoding | None = None) -> int:
- """Return the number of tokens in the given text."""
- if token_encoder is None:
- token_encoder = tiktoken.get_encoding("cl100k_base")
- return len(token_encoder.encode(text)) # type: ignore
- def batched(iterable: Iterator, n: int):
- """
- Batch data into tuples of length n. The last batch may be shorter.
- Taken from Python's cookbook: https://docs.python.org/3/library/itertools.html#itertools.batched
- """
- # batched('ABCDEFG', 3) --> ABC DEF G
- if n < 1:
- value_error = "n must be at least one"
- raise ValueError(value_error)
- it = iter(iterable)
- while batch := tuple(islice(it, n)):
- yield batch
- def chunk_text(
- text: str, max_tokens: int, token_encoder: tiktoken.Encoding | None = None
- ):
- """Chunk text by token length."""
- if token_encoder is None:
- token_encoder = tiktoken.get_encoding("cl100k_base")
- tokens = token_encoder.encode(text) # type: ignore
- # 增加下行代码,将tokens解码成字符串
- tokens = token_encoder.decode(tokens)
- chunk_iterator = batched(iter(tokens), max_tokens)
- #yield from (token_encoder.decode(list(chunk)) for chunk in chunk_iterator)
- yield from chunk_iterator
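To see what the patched chunk_text now produces, a throwaway check (run inside the graphR environment; a short text comes back as one tuple of characters, which the patched embed() above then passes to Ollama):

    import tiktoken

    from graphrag.query.llm.text_utils import chunk_text

    enc = tiktoken.get_encoding("cl100k_base")
    chunks = list(chunk_text(text="A quick test sentence.", max_tokens=8191, token_encoder=enc))
    print(len(chunks), type(chunks[0]))  # 1 <class 'tuple'>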
VIII. Build the Index
1. Keep Ollama open; you can watch the models' running status there.
2. python -m graphrag.index --root ./ragtest
(Back in the terminal opened in Section IV, run this command; once every workflow reports success, the index has been built.)
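If indexing finished cleanly, the artifacts are written under ragtest/output in a timestamped run folder (the layout graphrag 0.3.x uses; the folder name will differ on your machine):

    ls ./ragtest/output/*/artifacts/
    # expect a series of .parquet files, e.g. create_final_entities.parquet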
IX. Run Queries
1. Local search: python -m graphrag.query --root ./ragtest --method local "who is Marley?"
2. Global search: python -m graphrag.query --root ./ragtest --method global "who is Marley?"
(Note: when querying I hit a "No module named graphrag.logging" error; copying the logging folder from the graphrag directory of the downloaded source into the corresponding location inside the installed graphrag package in the virtual environment fixed it.)
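For reference, that workaround is a single recursive copy (paths are illustrative: <graphrag-src> stands for wherever you cloned the repository at the start, and *** for your username):

    cp -r <graphrag-src>/graphrag/logging \
          /home/***/.conda/envs/graphR/lib/python3.11/site-packages/graphrag/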
Reference posts:
1.https://blog.csdn.net/weixin_42107217/article/details/141649920
2.https://blog.csdn.net/gaotianhao123/article/details/140640415
3.https://blog.csdn.net/m0_56378800/article/details/140319467