ollama

The “pkgmatch” package does not access language model (LM) embeddings through external APIs, for reasons explained in vignette("why-local-lms"). Instead, the LM embeddings are extracted from a locally-running instance of ollama, which means you need to download and install ollama on your own computer in order to use this package. The following sub-sections describe two distinct ways to do this; you only need to follow one of them.

Local installation

This sub-section describes how to install and run ollama on your local computer. This may not be possible for everybody, in which case the following sub-section describes how to run ollama within a docker container.

General download instructions are given at https://ollama.com/download. Once installed, ollama can be started by calling ollama serve &, where the trailing & runs the process in the background.
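Once started, you can confirm that the server is up before proceeding. This is a small sketch: ollama's root HTTP endpoint on its default port (11434) replies with the text "Ollama is running".

```shell
# Start ollama in the background, then confirm the server is responding.
ollama serve &
sleep 2   # allow the server a moment to bind port 11434
curl -s http://localhost:11434 | grep "Ollama is running"
```

If the grep prints nothing, the server has not started; check the terminal output of ollama serve for errors.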

The particular models used to extract the embeddings will be automatically downloaded by this package if needed, or you can do this manually by running the following two commands (in a system console; not in R):

ollama pull jina/jina-embeddings-v2-base-en
ollama pull ordis/jina-embeddings-v2-base-code

You’ll likely need to wait a few tens of minutes for the models to download before proceeding. Once downloaded, both models should appear in the output of ollama list.
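Rather than scanning the full listing by eye, you can count the matching entries directly (a sketch; the exact columns printed by ollama list may differ between ollama versions, but both model names share the stem matched here):

```shell
# Count installed models whose names match the shared stem of the two
# embedding models pulled above; a result of 2 means both are installed.
ollama list | grep -c "jina-embeddings-v2-base"
```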

Docker

This package comes with a “Dockerfile” containing all code needed to build and run the necessary ollama models within a docker container. The container can either be built locally, or downloaded pre-built from the GitHub container registry.

Building the Docker container locally

To build the container locally, download the Dockerfile from this link. Then, from the same directory as that file, run this line:

docker build . -t pkgmatch-ollama

Downloading the pre-built Docker container

Alternatively, a pre-built Docker container including the ollama models used by this package is stored in the GitHub container registry, and can be downloaded by running this line:

docker pull ghcr.io/ropensci-review-tools/pkgmatch-ollama:latest

Running the Docker container

Whether you’ve locally built the Docker container or downloaded the pre-built version, the container needs to be started before this package can use it. For a locally-built container, run:

docker run --rm -d -p 11434:11434 pkgmatch-ollama

For the pre-built container, use the full image name instead:

docker run --rm -d -p 11434:11434 ghcr.io/ropensci-review-tools/pkgmatch-ollama:latest

The running container can be stopped by calling docker stop followed by the “CONTAINER ID” listed in the output of docker ps.
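If only one such container is running, the lookup and stop can be combined into one step. This is a sketch assuming the container was started from an image whose name contains "pkgmatch-ollama"; adjust the pattern to the image name you actually ran.

```shell
# Find the CONTAINER ID of the running ollama container by matching its
# image name in the `docker ps` listing, then stop it.
cid=$(docker ps | awk '/pkgmatch-ollama/ { print $1 }')
docker stop "$cid"
```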