Configuration for a local LLM execution engine.
#include <config.hpp>
| Type | Member | Description |
|---|---|---|
| std::string | model_path | Path to the model weights file. |
| std::string | model_id | Unique identifier for routing. |
| int | context_size = 2048 | Context window size in tokens. |
| int | n_threads = 4 | CPU threads for inference. |
| bool | use_gpu = false | Enable GPU acceleration. |