#### Unstructured PDF loader (optional)

If you would like to run the application using the unstructured PDF loader (the `--unstructured-pdf` flag), you need to install two system dependencies.

The two currently used are:

- [poppler-utils](https://launchpad.net/ubuntu/jammy/amd64/poppler-utils)
- [tesseract-ocr](https://github.com/tesseract-ocr/tesseract?tab=readme-ov-file#installing-tesseract)

On Debian/Ubuntu, both can be installed with:

```bash
sudo apt install poppler-utils tesseract-ocr
```

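As a quick optional check (not part of the original instructions), `pdftotext -v` and `tesseract --version` should both run once the packages are installed.
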
And run the generic RAG demo with the `--unstructured-pdf` flag.

> For more information, please refer to the [langchain docs](https://python.langchain.com/docs/integrations/providers/unstructured/).

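For illustration, here is a minimal standalone sketch of what Unstructured-backed PDF loading looks like through LangChain. It assumes the `langchain-community` and `unstructured` Python packages are installed; the file path is hypothetical, and this is not the demo's own loader wiring:

```python
# Minimal sketch: load a PDF via LangChain's Unstructured integration.
# Needs the `unstructured` package plus the system dependencies above
# (poppler-utils, tesseract-ocr). The path below is illustrative.
from langchain_community.document_loaders import UnstructuredPDFLoader

loader = UnstructuredPDFLoader("docs/example.pdf")  # hypothetical path
documents = loader.load()
print(f"Loaded {len(documents)} document(s)")
print(documents[0].page_content[:200])  # preview the extracted text
```
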
#### Local LLM (optional)

If you would like to run the application using a local LLM backend (the `-b local` flag), you need to install Ollama.

To install Ollama, please run the following commands:

```bash
curl -fsSL https://ollama.com/install.sh | sh  # install Ollama
ollama pull llama3.1:8b                        # fetch and download a specific model
```

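As a quick optional check, `ollama list` should now show `llama3.1:8b` among the locally available models.
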
Include the downloaded model in the `.env` file:

```text
LOCAL_CHAT_MODEL="llama3.1:8b"
LOCAL_EMB_MODEL="llama3.1:8b"
```

And run the generic RAG demo with the `-b local` flag.

> For more information on installing Ollama, please refer to the Langchain Local LLM documentation, specifically the [Quickstart section](https://python.langchain.com/docs/how_to/local_llms/#quickstart).

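For illustration, a minimal sketch of how these two variables typically map onto LangChain's Ollama integrations; it assumes the `langchain-ollama` and `python-dotenv` packages, and the demo's internal wiring is not shown in this section:

```python
# Minimal sketch: read the .env values and build LangChain Ollama clients.
# Assumes the Ollama server is running and llama3.1:8b has been pulled.
import os

from dotenv import load_dotenv
from langchain_ollama import ChatOllama, OllamaEmbeddings

load_dotenv()  # loads LOCAL_CHAT_MODEL / LOCAL_EMB_MODEL from .env

chat = ChatOllama(model=os.environ["LOCAL_CHAT_MODEL"])
embeddings = OllamaEmbeddings(model=os.environ["LOCAL_EMB_MODEL"])

print(chat.invoke("Say hello in one sentence.").content)
print(f"Embedding dimension: {len(embeddings.embed_query('test'))}")
```
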
### Running generic RAG demo