Ollama

Ollama on SeaWulf

Introduction to Ollama on SeaWulf

Ollama is a tool for running and interacting with LLMs (Large Language Models) across different environments and hardware. It provides builds for both AMD and NVIDIA hardware, making it adaptable to your system setup, and brings sophisticated AI models into accessible applications for natural language processing tasks.

SeaWulf offers a selection of Ollama versions across different hardware environments to provide flexibility in using these models for a variety of use cases.

Available Ollama Versions

SeaWulf provides the following Ollama modules:

  • ollama/0.1.44-amd (For AMD hardware)
  • ollama/0.1.44-nvidia (For NVIDIA hardware)
  • ollama/0.3.10-amd (For AMD hardware)
  • ollama/0.5.7-amd (For AMD hardware)
  • ollama/0.5.4 (For Milan nodes)
  • ollama-py/0.3.3 (Python bindings for Ollama)

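The ollama-py module listed above provides Python bindings for the Ollama server. Below is a minimal sketch of one chat turn; the helper names and the "llama3" model are illustrative assumptions, and the call requires an Ollama server already running on the node.

```python
def build_messages(prompt: str) -> list[dict]:
    # Wrap a single user prompt in the chat-message format ollama expects.
    return [{"role": "user", "content": prompt}]

def ask(model: str, prompt: str) -> str:
    # Requires a running Ollama server on the node (e.g. `ollama serve &`)
    # and the ollama-py/0.3.3 module loaded.
    import ollama  # provided by the ollama-py module
    response = ollama.chat(model=model, messages=build_messages(prompt))
    return response["message"]["content"]
```

For example, ask("llama3", "Hello") would return the model's reply as a string, assuming a model named llama3 has already been pulled on that node.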
To load a specific version of Ollama, use the following module load command:

module load ollama/0.5.7-amd
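A typical interactive session then starts the Ollama server in the background and runs a model. This is a sketch: the model name "llama3" is an example, not a model guaranteed to be available on SeaWulf.

```shell
# Load an Ollama build that matches your node's GPUs
module load ollama/0.5.7-amd

# Start the Ollama server in the background
ollama serve &

# Pull a model, then chat with it (model name is an example)
ollama pull llama3
ollama run llama3 "Hello"
```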