/llama.cpp/ is the open-source inference engine that powers Älyverkko
CLI's CPU-based AI processing. It's a critical dependency, in
particular a standalone executable (=llama-completion=) that handles:
- Loading GGUF format models
- Tokenization and detokenization
- Batched/unattended processing capabilities
- Cross-platform compatibility

Älyverkko CLI acts as a sophisticated wrapper around llama.cpp's
*llama-completion* executable, managing the complex workflow of task
processing while leveraging llama.cpp's efficient inference
capabilities. The =llama_cli_path= configuration option specifies
where to find this executable, which must be built separately from
source so that it is optimized for your specific CPU. Without
llama.cpp, Älyverkko CLI couldn't execute any AI tasks - it is the
actual "brain" behind the system.
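
To make the wrapper role concrete, the sketch below shows one way a
Java program could drive an external llama.cpp executable through
=ProcessBuilder=. This is a minimal illustration, not Älyverkko CLI's
actual implementation: the file paths and the =-m=, =-f= and =--temp=
flags are assumptions based on common llama.cpp command-line
conventions, so check the output of =--help= for the options your
=llama-completion= build actually accepts.

#+begin_src java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch of wrapping an external llama.cpp executable.
// Not Älyverkko CLI source code; paths and flags are illustrative only.
public class LlamaWrapperSketch {

    public static void main(String[] args)
            throws IOException, InterruptedException {
        // Values that would normally come from the configuration file.
        Path llamaCliPath =
                Path.of("/home/john/AI/llama.cpp/build/bin/llama-completion");
        Path modelPath = Path.of("/home/john/AI/models/example-model.gguf");

        // Write the prompt to a temporary file and hand it to the executable.
        Path promptFile = Files.createTempFile("prompt", ".txt");
        Files.writeString(promptFile,
                "Summarize the following text: ...",
                StandardCharsets.UTF_8);

        // Assumed llama.cpp-style flags: -m selects the model, -f the prompt
        // file, --temp the sampling temperature.
        List<String> command = List.of(
                llamaCliPath.toString(),
                "-m", modelPath.toString(),
                "-f", promptFile.toString(),
                "--temp", "0.7");

        Process process = new ProcessBuilder(command)
                .redirectErrorStream(true) // merge stderr into stdout
                .start();

        // Capture the generated text from standard output.
        String output = new String(
                process.getInputStream().readAllBytes(),
                StandardCharsets.UTF_8);
        int exitCode = process.waitFor();

        System.out.println("exit code: " + exitCode);
        System.out.println(output);
    }
}
#+end_src

Driving the executable as a separate process like this keeps the
wrapper independent of llama.cpp's C API, so the binary can be rebuilt
or swapped without changing the application code.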
** Important files and directories
*** Configuration File
- =skills_directory=: Contains YAML skill definition files.
- =llama_cli_path=: Path to llama.cpp's *llama-completion* executable.
**** Generation Parameters
Generation parameters are set in the configuration file. Below is an
example of how the configuration file might look:
#+begin_src yaml
tasks_directory: "/home/john/AI/tasks"
models_directory: "/home/john/AI/models"
skills_directory: "/home/john/AI/skills"
llama_cli_path: "/home/john/AI/llama.cpp/build/bin/llama-completion"
# Generation parameters
default_temperature: 0.7