---
title: "Before you begin: ollama installation"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Before you begin: ollama installation}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
knitr::opts_chunk$set (
    collapse = TRUE,
    comment = "#>"
)
```

The "pkgmatch" package works through a locally installed and running instance
of [ollama](https://ollama.com) (for reasons described in the
[`vignette("why-local-lms")`](https://docs.ropensci.org/pkgmatch/articles/why-local-lms.html)).
In order to use "pkgmatch", [ollama](https://ollama.com) needs to be installed,
and be running. This vignette describes how to do that. There are two distinct
ways, each described in a separate sub-section. You will generally need to
follow one and only one of these sections.

Note that the [ollama](https://ollama.com) software is around 1GB, and its
usage in "pkgmatch" requires downloading two additional language models of
around 300MB each. Installation will thus use around 2GB of disk space (and
will generally take quite some time).

## Local ollama API endpoint

Regardless of which method you use to install and run
[ollama](https://ollama.com), the "pkgmatch" package presumes by default that
the local ollama instance has an API endpoint at "127.0.0.1:11434". If this is
not the case, alternative endpoints can be set using [the `ollama_set_url()`
function](https://docs.ropensci.org/pkgmatch/reference/ollama_set_url.html) or
by setting the environment variable `OLLAMA_HOST` before pkgmatch is loaded.
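
For example, to point pkgmatch at an instance listening on a different
address, either of the following approaches should work. This is a minimal
sketch: the address shown is illustrative only, and `ollama_set_url()` is
assumed here to take the new URL as its first argument.

``` r
# Illustrative address only; substitute that of your own ollama instance.
# Either set the environment variable before pkgmatch is loaded:
Sys.setenv ("OLLAMA_HOST" = "127.0.0.1:11435")
# or call `ollama_set_url()` (assumed to take the URL as its first argument):
pkgmatch::ollama_set_url ("127.0.0.1:11435")
```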

## Local installation

This sub-section describes how to install and run ollama on your local
computer. That may not be possible for everyone, in which case the following
sub-section describes how to run ollama within a Docker container.

General download instructions are given at https://ollama.com/download. Once
downloaded, ollama can be started by calling `ollama serve &`, where the final
`&` starts the process in the background. Alternatively, omit the `&` to run
ollama as a foreground process; you can then interact with it from elsewhere
(for example, a different terminal shell), and will see full logs in the
original shell.
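
For example, to start ollama in the background and confirm that it is serving
on the default endpoint (ollama's root endpoint responds with a short status
message):

``` bash
ollama serve &
# Once the server has started, this should print "Ollama is running":
curl 127.0.0.1:11434
```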

Once ollama is running with `ollama serve`, the particular models used here
will be automatically downloaded by this package when needed. Alternatively,
you can do this manually before using the package for the first time by running
the following two commands (in a system console; not in R):

``` bash
ollama pull jina/jina-embeddings-v2-base-en
ollama pull ordis/jina-embeddings-v2-base-code
```

You'll likely need to wait a few tens of minutes for the models to download
before proceeding. Once they have downloaded, running `ollama list` should
show both models in the output.
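
That output should look something like the following (IDs, sizes, and
timestamps are elided here, and will differ on your machine):

``` bash
ollama list
# NAME                                         ID    SIZE    MODIFIED
# jina/jina-embeddings-v2-base-en:latest       ...   ...     ...
# ordis/jina-embeddings-v2-base-code:latest    ...   ...     ...
```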

## Docker

If you cannot, or prefer not to, install [`ollama`](https://ollama.com) on
your local machine, you can instead download and run
[`ollama`](https://ollama.com) within a Docker container. For this, you will
need to have [Docker
installed](https://docs.docker.com/get-started/get-docker/). (Docker is not
required for the local installation procedure described above; only for this
alternative procedure.)

This package comes with a "Dockerfile" containing all code needed to build and
run the necessary ollama models within a Docker container. The container can
either be built locally, or downloaded pre-built from the GitHub container
registry.

### Building the Docker container locally

To build the container locally, download the [Dockerfile from this
link](https://github.com/ropensci-review-tools/pkgmatch/blob/main/Dockerfile).
Then, from the same directory as that file, run this line:

``` bash
docker build . -t pkgmatch-ollama
```

### Downloading the pre-built Docker container

Alternatively, a pre-built Docker container including the ollama models used
by this package is stored in the GitHub container registry, and can be
downloaded by running this line:

``` bash
docker pull ghcr.io/ropensci-review-tools/pkgmatch-ollama:latest
```

### Running the Docker container

Whether you've built the Docker container locally or downloaded the pre-built
version, it needs to be started with the following command. If you downloaded
the pre-built image, replace `pkgmatch-ollama` with its full name,
`ghcr.io/ropensci-review-tools/pkgmatch-ollama:latest`:

``` bash
docker run --rm -d -p 11434:11434 pkgmatch-ollama
```
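
The `-p 11434:11434` flag maps the container's ollama port onto the same port
of your local machine, so that the API endpoint matches the default described
above which "pkgmatch" expects.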

The running container can be stopped by calling `docker stop` followed by the
"CONTAINER ID" listed in the output of `docker ps`.