From 5258127ae10213f04becdefb072df2692627427f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nielson=20Jann=C3=A9?=
Date: Mon, 24 Mar 2025 11:57:34 +0100
Subject: [PATCH] Update readme

---
 README.md | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index e3dda22..0457da3 100644
--- a/README.md
+++ b/README.md
@@ -8,8 +8,7 @@ A Sogeti Nederland generic RAG demo
 
 #### Unstructered PDF loader (optional)
 
-If you would like to run the application with the unstructered PDF loader, the application requires system dependencies.
-The two currently used:
+If you would like to run the application using the unstructured PDF loader (`--unstructured-pdf` flag), you need to install two system dependencies:
 
 - [poppler-utils](https://launchpad.net/ubuntu/jammy/amd64/poppler-utils)
 - [tesseract-ocr](https://github.com/tesseract-ocr/tesseract?tab=readme-ov-file#installing-tesseract)
@@ -18,30 +17,24 @@ The two currently used:
 sudo apt install poppler-utils tesseract-ocr
 ```
 
-and run the generic RAG demo with the `--unstructured-pdf` flag.
-
 > For more information please refer to the [langchain docs.](https://python.langchain.com/docs/integrations/providers/unstructured/)
 
 #### Local LLM (optional)
 
-The application supports running a local LLM, using Ollama.
-
-To install Ollama, please run following commands
+If you would like to run the application using a local LLM backend (`-b local` flag), you need to install Ollama.
 
 ```bash
 curl -fsSL https://ollama.com/install.sh | sh  # install Ollama
-ollama pull llama3.1:8b                        # fetch and dowload specific model
+ollama pull llama3.1:8b                        # fetch and download the model
 ```
 
-Include the model in the `.env` file:
+Include the downloaded model in the `.env` file:
 
 ```text
 LOCAL_CHAT_MODEL="llama3.1:8b"
 LOCAL_EMB_MODEL="llama3.1:8b"
 ```
 
-And run the generic RAG demo with the `-b local` flag.
-
 >For more information on installing Ollama, please refer to the Langchain Local LLM documentation, specifically the [Quickstart section](https://python.langchain.com/docs/how_to/local_llms/#quickstart).
 
 ### Running generic RAG demo
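The optional setup steps this patch documents can be sanity-checked with a short shell snippet. This is a sketch under assumptions: Debian/Ubuntu tool names (`pdftotext` ships with poppler-utils), and `llama3.1:8b` taken from the README's example; substitute whatever model you actually pulled.

```shell
# Check that the optional dependencies described in the README are installed.
for tool in pdftotext tesseract ollama; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done

# Write the .env entries the README documents; llama3.1:8b is the README's
# example model name -- substitute the model you pulled with ollama.
MODEL="llama3.1:8b"
cat > .env <<EOF
LOCAL_CHAT_MODEL="${MODEL}"
LOCAL_EMB_MODEL="${MODEL}"
EOF
cat .env
```

The loop only reports missing tools rather than exiting, so the `.env` file is written either way.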