kg-llm: Exploring large language models for knowledge graph completion
Installing required packages
pip install -r requirements_chatglm.txt
1. Data
(1) The four KGs we used as well as entity and relation descriptions are in ./data.
(2) The input files for LLMs are also in each folder of ./data, see train_instructions_llama.json and train_instructions_glm.json as examples.
(3) The output files of our models are also in each folder of ./data, see pred_instructions_llama13b.csv and generated_predictions.txt (from ChatGLM-6B) as examples.
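A quick way to sanity-check the instruction files is to load them and print the record count and field names. The snippet below is only an illustrative sketch: it assumes the ./data/&lt;KG&gt;/train_instructions_llama.json layout described above and makes no assumption about the fields inside each record.

```python
import json
from pathlib import Path

# Inspection sketch: iterate over every KG folder under ./data that contains
# a train_instructions_llama.json file and report its size and record keys.
for path in Path("data").glob("*/train_instructions_llama.json"):
    with path.open(encoding="utf-8") as f:
        records = json.load(f)
    print(f"{path}: {len(records)} instruction records")
    if records:
        # Print the keys of the first record; field names are not specified
        # in this README, so we only display whatever is present.
        print("  example keys:", list(records[0].keys()))
```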
2. LLaMA fine-tuning and inference examples
First, put the LLaMA model files under models/LLaMA-HF/ and the ChatGLM-6B model files under models/chatglm-6b/.
In our experiments, we utilized an A100 GPU for all LLaMA models and a V100 GPU for all ChatGLM models.
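For reference, the snippet below is a minimal sketch of loading checkpoints from those local directories with Hugging Face transformers. It is not the repository's fine-tuning or inference script (use the provided scripts for reproducing results); it only illustrates the expected model paths.

```python
import torch
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

# LLaMA in Hugging Face format, placed under models/LLaMA-HF/
llama_tokenizer = AutoTokenizer.from_pretrained("models/LLaMA-HF")
llama_model = AutoModelForCausalLM.from_pretrained(
    "models/LLaMA-HF", torch_dtype=torch.float16, device_map="auto"
)

# ChatGLM-6B, placed under models/chatglm-6b/ (its modeling code requires
# trust_remote_code=True)
glm_tokenizer = AutoTokenizer.from_pretrained("models/chatglm-6b", trust_remote_code=True)
glm_model = AutoModel.from_pretrained("models/chatglm-6b", trust_remote_code=True).half().cuda()
```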