💻 System Requirements
How to choose the right AI model to download?
Sanctum currently offers a range of open-source models. Here's a simple guide on how to pick the model that fits you best:
Pay Attention to Memory & Hardware Requirements: Different models have different resource requirements. We recommend having at least 16 GB of memory available for optimal performance.
Identify Your Specific Use Case: Here are a few examples:
For Assistance: If you need help with general inquiries, Llama is a great choice. It's designed to be your everyday chat assistant.
For Coding: CodeLlama is the ideal option if you require assistance with coding tasks. It specializes in providing coding-related support.
For Writing Articles & Stories: If your objective is to generate articles and stories, we recommend the Vicuna model.
Which operating systems are supported?
Currently, Sanctum supports macOS 12 and later.
Windows & Linux support coming soon.
What is GGML?
Sanctum currently supports models in the GGML format. GGML models can load and run entirely on a CPU, which sets them apart from models that traditionally require a GPU for processing. This allows you to run Large Language Models (LLMs) on consumer hardware, making the experience more accessible and versatile.
GGML models offer high performance and efficiency, making them a reliable choice for various tasks within Sanctum.
What is Memory usage?
The Memory usage section in the sidebar shows the amount of memory currently being used by your operating system and all running applications.
This can serve as an indicator of how well various AI models will perform on your device.
What is TPS?
TPS, or "Tokens per Second," refers to the processing speed of a language model: how many tokens it can generate per second. It indicates how quickly the model can analyze and produce text, depending on the complexity of the task and the hardware it's running on. A TPS rate above 1 is good, indicating quick model responses, while a rate below 1 is considered slow.
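The arithmetic behind TPS is simply generated tokens divided by elapsed wall-clock time. A minimal illustrative sketch (the numbers below are hypothetical, not measurements from Sanctum):

```python
def tokens_per_second(n_tokens: int, elapsed_seconds: float) -> float:
    """TPS = number of generated tokens / wall-clock generation time."""
    return n_tokens / elapsed_seconds

# Example: a model that produced 42 tokens in 30 seconds runs at 1.4 TPS,
# which by the rule of thumb above counts as a quick response.
print(tokens_per_second(42, 30))  # 1.4
```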
What is Context size?
Context size is a token-based measure of how much text a language model takes into account when processing a prompt and generating a response. This metric is crucial for understanding how much conversation history the model considers when producing relevant and coherent answers.
You can monitor the model's context size in the top right corner of the Sanctum interface.
When the context length reaches its limit, the system will automatically remove older messages from the context to make room for the new ones.
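The trimming behavior described above can be sketched as a sliding window over the conversation history. This is an illustrative model only; the token counts and the 2048-token limit below are assumptions, not Sanctum's actual values:

```python
def trim_context(messages: list[str], token_counts: list[int], max_tokens: int) -> list[str]:
    """Drop the oldest messages until the conversation fits in the context window."""
    while sum(token_counts) > max_tokens and messages:
        messages.pop(0)      # remove the oldest message...
        token_counts.pop(0)  # ...and its token count
    return messages

history = ["msg1", "msg2", "msg3", "msg4"]
counts = [800, 600, 700, 500]  # hypothetical per-message token counts

# 2600 tokens total exceeds a 2048-token window, so the oldest message is dropped.
print(trim_context(history, counts, 2048))  # ['msg2', 'msg3', 'msg4']
```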