* Concept glossary
** General concepts
*** Task
+:PROPERTIES:
+:ID: 140c53cb-8032-4a04-83ed-d1818b1cfc52
+:END:
A /task/ represents a single unit of work submitted to the Älyverkko
-CLI system for AI processing. It consists of two core components:
+CLI system for AI processing.
+A task logically consists of two core components:
- a [[id:89bc60f0-89d4-4e10-ae80-8f824f2e3c55][system prompt]] (defining the AI's role/behavior) and
- a [[id:009e5410-f852-4faa-b81a-f9c98b056ae3][user prompt]] (the specific request or question).
Processing a task may take minutes to hours.
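For illustration, a minimal task file could look like this (the request text is an arbitrary example; only the =TOCOMPUTE:= marker is prescribed by the format):

#+begin_src text
TOCOMPUTE:
Explain the trade-offs between quantized and full-precision models
in two short paragraphs.
#+end_src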
*** Skill
+:PROPERTIES:
+:ID: 6579abb4-8386-418b-9457-cae6c3345dfb
+:END:
+- See also: [[id:1ba0c510-4c11-4f23-8746-eb9675c3e60c][Setting Up "Skills" (Your Custom Instructions)]]
+
A /skill/ is a predefined behavioral configuration for the AI,
implemented as a YAML file in the skills directory. Each skill
defines, at minimum, the system prompt that sets the AI's role and
behavior for a particular kind of task.
*** Model Library
+- See also: [[id:2ff850f6-6d7b-4603-879a-59a51d378ffe][Adding AI Models]]
+
The /model library/ is the internal registry of all available AI
models configured in the system. It's constructed during startup
from the system configuration. Quantized GGUF models keep memory
requirements practical - on the order of ~40GB RAM versus hundreds of
GB for full precision.
*** llama.cpp
+:PROPERTIES:
+:ID: 01b0d389-75d4-420f-8d5c-cae29900301f
+:END:
/llama.cpp/ is the open-source inference engine that powers Älyverkko
CLI's CPU-based AI processing. It's a critical dependency.
*** Task directory
The /task directory/ is the designated filesystem location where users
place task files for processing, configured via =tasks_directory= in
-the YAML configuration file. Älyverkko CLI continuously monitors this
+the [[id:fd687508-0a76-4fee-9a1c-4031cb403c60][YAML configuration file]]. Älyverkko CLI continuously monitors this
directory using filesystem watchers for new or modified files. When a
file with a =TOCOMPUTE:= header is detected, it's added to the
processing queue according to its priority. Users create task files
here using their preferred text editor, and completed results appear
in the same location.
-Beauty of file based interaction is that there is no imposed user
-interface. User can choose tools or editor that he/she prefers. Also
-tasks directory can be synchronized with Dropbox/Syncthing or similar
-tools between multiple computers or users. This way, travel laptop can
-utilize processing capability or more powerful computer at home.
+You might wonder: /Why deal with text files when everything has
+beautiful interfaces these days?/
+
+Because *this is designed for productivity, not conversation*:
+
+1. *No waiting around*: With CPU inference, responses take
+ minutes/hours. File-based workflow lets you queue tasks and get
+ back to work.
+
+2. *Natural integration*: Works with your existing text editor (VS
+ Code, Emacs, etc.) rather than forcing a new interface.
+
+3. *Version control friendly*: You can track changes to
+ prompts/responses with Git.
+
+4. *Scriptable*: Easily integrate with other tools in your workflow.
+
+5. *Syncable across machines*: The tasks directory can be
+ synchronized with Dropbox/Syncthing or similar tools between
+ multiple computers or users. This way a travel laptop can tap the
+ processing power of a stronger computer at home, even when it is
+ online only at irregular intervals.
+
+
+Think of it like email versus phone calls - sometimes asynchronous
+communication is actually /more/ productive.
** Generation parameters
*** Temperature
Temperature controls the randomness of model output: lower values
produce more deterministic, focused responses, while higher values
produce more varied, creative ones. It can be configured at several
levels, creating a flexible "rule cascade" where specialized
configurations override broader ones.
-* Getting started
+* Installation
When you first encounter Älyverkko CLI, the setup process might seem
involved compared to cloud-based AI services. That's completely normal.
*** 2. Building llama.cpp (Your AI Engine)
- *What*: Download and compile the [[https://github.com/ggml-org/llama.cpp][llama.cpp]] project from GitHub.
-- *Why*: This is the actual "brain" that runs large language models on
+- *Why*: This is the [[id:01b0d389-75d4-420f-8d5c-cae29900301f][actual "brain" that runs large language models]] on
  *your CPU*. We build from source (rather than using prebuilt
binaries) so it can optimize for /your specific CPU/ - squeezing out
maximum performance from your hardware.
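For orientation, a from-source build typically looks like the sketch below (llama.cpp's build system changes over time - treat the project's README as authoritative):

#+begin_src shell
# Fetch the sources and compile with optimizations for this machine.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j"$(nproc)"
#+end_src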
*** 3. Adding AI Models (The Brains)
+:PROPERTIES:
+:ID: 2ff850f6-6d7b-4603-879a-59a51d378ffe
+:END:
- *What*: Download GGUF format model files (typically 4-30GB each).
- *Where*: From Hugging Face Hub ([[https://huggingface.co/models?search=gguf][search "GGUF"]]).
- *Why*: These contain the actual neural networks that power the AI.
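As one hedged example, model files can be fetched with the =huggingface-cli= tool from the =huggingface_hub= package (the repository and file names below are placeholders, not recommendations):

#+begin_src shell
# Placeholder names - substitute a real GGUF repository and file
# found via the Hugging Face search above.
huggingface-cli download SOME_USER/SOME_MODEL-GGUF \
    some_model.Q4_K_M.gguf --local-dir ~/models
#+end_src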
*** 5. Setting Up "Skills" (Your Custom Instructions)
+:PROPERTIES:
+:ID: 1ba0c510-4c11-4f23-8746-eb9675c3e60c
+:END:
- *What*: Create simple YAML files defining how the AI should behave
for different tasks.
- *Why*: Different tasks call for different behavior (e.g. "be a
  coding assistant" vs "be a writing editor").
- *Don't worry*: Start with sample skills ([[https://www3.svjatoslav.eu/projects/alyverkko-cli/examples/skills/default.yaml][default.yaml]],
[[https://www3.svjatoslav.eu/projects/alyverkko-cli/examples/skills/summary.yaml][summary.yaml]]) - you can modify them gradually.
-- *Key insight*: Skills let you create specialized AI personas without
+- *Key insight*: [[id:6579abb4-8386-418b-9457-cae6c3345dfb][Skills let you create specialized AI personas]] without
changing models.
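To give a rough idea of the shape - the key below is an assumption for illustration, so take the real schema from the linked sample files - a skill is just a short YAML document:

#+begin_src yaml
# Hypothetical skill file, e.g. skills/editor.yaml - the field name
# is illustrative; see default.yaml / summary.yaml for the actual
# schema.
prompt: |
  You are a meticulous writing editor. Improve clarity, grammar and
  flow while preserving the author's voice.
#+end_src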
*** 6. Preparing Your First Task (The Magic Moment)
-- *What*: Create a text file with your request, prefixed with
+- *What*: Create a [[id:140c53cb-8032-4a04-83ed-d1818b1cfc52][task]] text file with your request,
+ prefixed with =TOCOMPUTE:=
- *Why*: This triggers the background processing system
- *Key insight*: File-based interaction isn't primitive - it's
intentional design for batch processing.
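Putting the pieces together, a first task can be queued straight from the shell (the =~/tasks= path is only an example - use whatever your =tasks_directory= setting points to):

#+begin_src shell
# Write a task file into the watched tasks directory.
# TASKS is an example location - substitute your configured
# tasks_directory.
TASKS="${TASKS:-$HOME/tasks}"
mkdir -p "$TASKS"
cat > "$TASKS/first-task.org" <<'EOF'
TOCOMPUTE:
Explain in plain language what quantization does to a neural
network and why it matters for CPU inference.
EOF
head -n 1 "$TASKS/first-task.org"   # prints "TOCOMPUTE:"
#+end_src

Once the watcher notices the =TOCOMPUTE:= header, the file is queued, and the completed result later appears in the same directory.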
-** Why Files Instead of a Fancy UI?
-
-You might wonder: /Why deal with text files when everything has
-beautiful interfaces these days?/
-
-Because *this is designed for productivity, not conversation*:
-
-1. *No waiting around*: With CPU inference, responses take
- minutes/hours. File-based workflow lets you queue tasks and get
- back to work.
-2. *Natural integration*: Works with your existing text editor (VS
- Code, Emacs, etc.) rather than forcing a new interface.
-3. *Version control friendly*: You can track changes to
- prompts/responses with Git.
-4. *Resource efficient*: No heavy GUI consuming precious RAM needed
- for AI models.
-5. *Scriptable*: Easily integrate with other tools in your workflow.
-
-Think of it like email versus phone calls - sometimes asynchronous
-communication is actually /more/ productive.
-
-** The Light at the End of the Tunnel
-
-After initial setup (which typically takes 30-60 minutes), here's what
-you get:
-
-- ✅ A silent background process that automatically processes tasks
-- ✅ Complete privacy - no data ever leaves your machine
-- ✅ The ability to run state-of-the-art models without expensive
- hardware.
-- ✅ A system that keeps working while you sleep - queue up 10 tasks
- before bed, get results in the morning.
-
-You fill find that after the first few processed tasks, the initial
-setup effort feels worthwhile. You're not just getting another
-chat bot - you're building a personal AI workstation tailored to your
-specific needs. The initial investment pays dividends every time you
-need serious AI power without compromise.
-
-* Installation
** Requirements
*Operating System:* Linux.

To stop and disable the alyverkko-cli background service:
: sudo systemctl stop alyverkko-cli
: sudo systemctl disable alyverkko-cli
+** The Light at the End of the Tunnel
+
+After setup, here's what you get:
+
+- ✅ A silent background process that automatically processes tasks
+- ✅ Complete privacy - no data ever leaves your machine (unless you
+ synchronize the tasks directory)
+- ✅ The ability to run state-of-the-art models without overly
+ expensive hardware.
+- ✅ A system that keeps working while you sleep - queue up 10 tasks
+ before bed, get results in the morning.
+
+You will find that after the first few processed tasks, the initial
+setup effort feels worthwhile. You're not just getting another chat
+bot - you're building a personal AI workstation tailored to your
+specific needs. The initial investment pays dividends every time you
+need serious AI power without compromise.
+
* Usage
** Task file format
:PROPERTIES: