Installation Guide
Prerequisites
Before installing GFM-RAG, make sure your system meets these requirements:
- Python 3.12 or higher
- CUDA 12 or higher (for GPU support)
- Poetry (recommended for development)
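A quick way to check these requirements from the command line (nvcc and poetry will only be present if they are already installed):
python --version   # should report 3.12 or higher
nvcc --version     # should report CUDA 12 or higher
poetry --version   # only needed for development installs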
Installation Methods
Install via Conda
Conda provides an easy way to install the CUDA development toolkit, which GFM-RAG requires:
conda create -n gfmrag python=3.12
conda activate gfmrag
conda install cuda-toolkit -c nvidia/label/cuda-12.4.1 # Replace with your desired CUDA version
pip install gfmrag
TORCH=$(python -c "import torch; print(torch.__version__)")
pip install torch_scatter torch_sparse -f https://data.pyg.org/whl/torch-${TORCH}.html
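After installation, you can verify that PyTorch detects your GPU (a quick sanity check, not part of the official steps):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"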
Install via Pip
Install GFM-RAG from PyPI:
pip install gfmrag
Then install the compiled packages. Please make sure to install the versions of torch_scatter and torch_sparse that match your PyTorch and CUDA versions:
TORCH=$(python -c "import torch; print(torch.__version__)")
pip install torch_scatter torch_sparse -f https://data.pyg.org/whl/torch-${TORCH}.html
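A quick way to confirm the wheels match your PyTorch/CUDA build is to import them; a version mismatch typically fails here with an undefined-symbol error:
python -c "import torch_scatter, torch_sparse; print('OK')"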
Install from Source
For contributors or those who want to install from source, follow these steps (a consolidated command sketch follows this list):
- Clone the repository
- Install Poetry
- Create and activate a conda environment
- Install project dependencies
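The steps above as a single sketch, assuming the repository lives at https://github.com/RManLuo/gfm-rag (adjust the URL, environment name, and versions to your setup):
# Clone the repository (assumed URL)
git clone https://github.com/RManLuo/gfm-rag.git
cd gfm-rag
# Install Poetry using its official installer
curl -sSL https://install.python-poetry.org | python3 -
# Create and activate a conda environment
conda create -n gfmrag python=3.12
conda activate gfmrag
# Install project dependencies from the repository root
poetry install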
Optional Components
Llama.cpp Integration
If you plan to use locally hosted LLMs via Llama.cpp:
Install llama-cpp-python:
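A typical installation via pip; the CUDA build flag below is an assumption based on recent llama-cpp-python releases (older releases used -DLLAMA_CUBLAS=on), so check its README for your version:
pip install llama-cpp-python
# Or, to build with CUDA support (assumed flag for recent versions):
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python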
For more information, visit the following resources:
- LangChain Llama.cpp
- llama-cpp-python repository
Ollama Integration
If you plan to use Ollama for hosting LLMs:
Install Ollama:
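On Linux, Ollama's official one-line installer is:
curl -fsSL https://ollama.com/install.sh | sh
On macOS and Windows, download the installer from ollama.com instead.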
For more information, visit the following resources:
- LangChain Ollama
Troubleshooting
CUDA errors when compiling the rspmm kernel
GFM-RAG requires the nvcc compiler to compile the rspmm kernel. If you encounter CUDA-related errors, make sure the CUDA toolkit is installed and the nvcc compiler is in your PATH. Also make sure your CUDA_HOME variable is set properly to avoid potential compilation errors.
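For example, assuming a default installation under /usr/local/cuda (adjust to your actual toolkit path):
export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH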
Usually, if you install the CUDA toolkit via conda, the CUDA_HOME variable is set automatically.
Stuck when compiling the rspmm kernel
Sometimes the compilation of the rspmm kernel may get stuck. If you encounter this issue, try to manually remove the compilation cache under ~/.cache/torch_extensions/ and recompile the kernel.
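For example (this deletes all cached torch extensions, which are rebuilt on the next run):
rm -rf ~/.cache/torch_extensions/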
For more help, please check our GitHub issues or create a new one.