implemented as a YAML file in the skills directory. Each skill
defines:
-- A [[id:89bc60f0-89d4-4e10-ae80-8f824f2e3c55][system prompt]] that establishes the AI's role and behavior
+- A [[id:89bc60f0-89d4-4e10-ae80-8f824f2e3c55][system prompt]] that establishes the AI's role and behavior.
+ - The system prompt contains a special =<TASK-FILE>= marker where
+   the specific user task will be injected.
- Optional generation parameters ([[id:24a0a54b-828b-4c78-8208-504390848fbc][temperature]], [[id:047f5bf7-e964-49ac-a666-c7ac75754e54][top-p]], etc.)
-- The =<TASK-FILE>= placeholder where user input gets injected
+- Optional [[id:074af969-8a9b-407d-a605-6725fe4e8580][Model]] alias that is selected when this skill is
+  invoked.
Skills function as specialized "personas" for different task
types. For example, a =summary.yaml= skill might contain summarization
instructions, rather than constantly redefining how the AI should
behave for every task.
*** Model
+:PROPERTIES:
+:ID: 074af969-8a9b-407d-a605-6725fe4e8580
+:END:
A /model/ refers to a specific AI language model implementation in
GGUF format, capable of processing tasks. Each model is configured
** Your Setup Journey - What to Expect
-Here's what you'll be doing, explained simply with /why/ each step
-matters:
-
-*** 1. Installing Java & Maven (The Foundation)
+Before we start the actual setup, here's a brief overview of what
+you'll be doing:
+*Installing Java & Maven (The Foundation)*
- *What*: Install JDK 21+ and Apache Maven
- *Why*: Älyverkko CLI is written in Java - these tools let you build
- and run the application.
- - *Don't worry*: On Debian/Ubuntu, it's just
- : sudo apt install openjdk-21-jdk maven
-
-*Key insight*: Java was chosen because it's cross-platform,
-memory-safe, and perfect for long-running background processes like
-our AI task processor.
+ and run the application. *Key insight*: Java was chosen because it's
+ cross-platform, memory-safe, and perfect for long-running background
+ processes like our AI task processor.
-*** 2. Building llama.cpp (Your AI Engine)
+*Building llama.cpp (Your AI Engine)*
- *What*: Download and compile the [[https://github.com/ggml-org/llama.cpp][llama.cpp]] project from GitHub.
- *Why*: This is the [[id:01b0d389-75d4-420f-8d5c-cae29900301f][actual "brain" that runs large language models]] on
*your CPU*. We build from source (rather than using prebuilt
binaries) so it can optimize for /your specific CPU/ - squeezing out
maximum performance from your hardware.
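The build itself follows the upstream llama.cpp instructions; a
typical CPU-only build looks roughly like this (exact commands and
flags can change between llama.cpp releases, so check its README
first):

#+begin_example
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release
#+end_example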
-*** 3. Adding AI Models (The Brains)
-:PROPERTIES:
-:ID: 2ff850f6-6d7b-4603-879a-59a51d378ffe
-:END:
+*Adding AI Models (The Brains)*
- *What*: Download GGUF format model files (typically 4-30GB each)
- *Where*: From Hugging Face Hub ([[https://huggingface.co/models?search=gguf][search "GGUF"]]).
- *Why*: These contain the actual neural networks that power the AI.
-- *Don't worry*: Start with one model (like Mistral 7B) - you can add
- more later.
-- *Key insight*: GGUF format was created specifically for CPU
- inference.
-
-#+begin_quote
-❓ Why not smaller models? Larger models (even running slowly on CPU)
-produce significantly better results for complex tasks - it's worth
-the wait.
-#+end_quote
-
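As an illustration, a well-known starter model can be fetched either
from its model page in a browser or with the =huggingface-cli= tool
(the repository and file names below are just examples - pick
whatever model suits your hardware):

#+begin_example
pip install -U "huggingface_hub[cli]"
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
    mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir ~/models
#+end_example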
-*** 4. Running the Interactive Wizard (=alyverkko-cli wizard=)
-- *What*: Launch the configuration wizard that asks simple questions.
-- *Why*: To connect all the pieces without you needing to edit complex
- YAML files.
-- *Don't worry*: It's interactive! You'll answer questions like "Where did
- you put your AI models?" with easy prompts.
-- *Key insight*: This creates your personal
- =~/.config/alyverkko-cli.yaml= file.
-
-#+begin_quote
-🌟 Pro tip: The wizard automatically detects your models and suggests
-reasonable defaults - you're not starting from scratch.
-#+end_quote
-
-*** 5. Setting Up "Skills" (Your Custom Instructions)
-:PROPERTIES:
-:ID: 1ba0c510-4c11-4f23-8746-eb9675c3e60c
-:END:
-- *What*: Create simple YAML files defining how the AI should behave
- for different tasks.
+*Running the Interactive Setup Wizard (=alyverkko-cli wizard=)*
+- *What*: Launch the configuration wizard and answer a few easy
+  questions, like "Where did you put your AI models?". *Key insight*:
+  This creates your personal =~/.config/alyverkko-cli.yaml= file. The
+  wizard automatically detects your models and suggests reasonable
+  defaults - you're not starting from scratch.
+
+*Setting Up "Skills" (Your Custom Instructions)*
+- *What*: You will create simple YAML files defining how the AI should
+ behave for different tasks.
- *Why*: So you don't have to rewrite instructions every time ("be a
- coding assistant" vs "be a writing editor").
-- *Don't worry*: Start with sample skills ([[https://www3.svjatoslav.eu/projects/alyverkko-cli/examples/skills/default.yaml][default.yaml]],
- [[https://www3.svjatoslav.eu/projects/alyverkko-cli/examples/skills/summary.yaml][summary.yaml]]) - you can modify them gradually.
-- *Key insight*: [[id:6579abb4-8386-418b-9457-cae6c3345dfb][Skills let you create specialized AI personas]] without
- changing models.
-
-#+begin_quote
-Idea: Your =writer.yaml= skill might instruct the AI to "always
-provide well-reasoned responses in academic tone"
-#+end_quote
-
-*** 6. Preparing Your First Task (The Magic Moment)
+ coding assistant" vs "be a writing editor"). *Don't worry*: You can
+  start with sample skills and modify them gradually.
+
+*Preparing Your First Task (The Magic Moment)*
- *What*: Create a [[id:140c53cb-8032-4a04-83ed-d1818b1cfc52][task]] text file with your request, prefixed with
*TOCOMPUTE:*
-- *Why*: This triggers the background processing system
-- *Key insight*: File-based interaction isn't primitive - it's
- intentional design for batch processing.
+- *Why*: This triggers the background processing system and verifies
+ that everything is working correctly.
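A minimal first task file might look like the sketch below - the only
hard requirement mentioned here is that the file starts with
*TOCOMPUTE:*; the full task file syntax (including optional
parameters such as an explicit model) is covered in the task
preparation section:

#+begin_example
TOCOMPUTE:
Please summarize the following article in three sentences.

(article text goes here)
#+end_example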
** Installation
:PROPERTIES:
:ID: 0b705a37-9b84-4cd5-878a-fedc9ab09b12
:END:
+
At the moment, to use Älyverkko CLI, you need to:
- Download sources and build [[https://github.com/ggerganov/llama.cpp][llama.cpp]] project.
- Download [[id:f5740953-079b-40f4-87d8-b6d1635a8d39][sources]] and build Älyverkko CLI project.
#+begin_src yaml
prompt: "Full system prompt text here"
+ model_alias: mistral # Optional
temperature: 0.8 # Optional
top_p: 0.95 # Optional
top_k: 20 # Optional
#+begin_src yaml
temperature: 0.9
top_p: 0.95
+ model_alias: mistral
prompt: |
<|im_start|>system
User will provide you with task that needs to be solved along with
innovations
#+end_example
-
** Task preparation
:PROPERTIES:
:ID: 4b7900e4-77c1-45e7-9c54-772d0d3892ea
* <p>
* Usage:
* <pre>
- * alyverkko-cli mail
+ * alyverkko-cli process
* </pre>
*/
public class TaskProcessorCommand implements Command {
}
try {
- Task task = buildMailQueryFromTaskFile(file);
+ Task task = buildTaskFromFile(file);
TaskProcess aiTask = new TaskProcess(task);
String aiGeneratedResponse = aiTask.runAiQuery();
}
/**
- * Builds a MailQuery object from the contents of a file.
+ * Builds a Task object from the contents of a file.
+ *
+ * This method implements a three-level hierarchy for model selection:
+ * 1. Explicit model specified in the TOCOMPUTE line (highest priority)
+ * 2. Model alias defined in the skill configuration (if present)
+ * 3. The "default" model (lowest priority)
*
* @param file the file to read.
* @return the constructed Task.
* @throws IOException if reading the file fails.
*/
- private Task buildMailQueryFromTaskFile(File file) throws IOException {
+ private Task buildTaskFromFile(File file) throws IOException {
Task result = new Task();
String inputFileContent = getFileContentsAsString(file);
result.systemPrompt = skill.getPrompt();
result.skill = skill;
- // Set AI model
- String modelAlias = fileProcessingSettings.getOrDefault("model", "default");
+ // Set AI model using hierarchy: TOCOMPUTE > skill config > default
+ String modelAlias = fileProcessingSettings.getOrDefault("model",
+ skill.getModelAlias() != null ? skill.getModelAlias() : "default");
Optional<Model> modelOptional = modelLibrary.findModelByAlias(modelAlias);
if (!modelOptional.isPresent()) {
throw new IllegalArgumentException("Model with alias '" + modelAlias + "' not found.");