Sends a prompt to Ollama and returns the generated text.
Usage
call_ollama(
prompt,
model = "tinyllama",
system = NULL,
temperature = 0.3,
max_tokens = 512,
timeout = 120,
verbose = FALSE
)
Arguments
- prompt
Character string containing the prompt.
- model
Character string specifying the Ollama model (default: "tinyllama").
- system
Character string with system instructions (default: NULL).
- temperature
Numeric value controlling randomness (default: 0.3).
- max_tokens
Maximum number of tokens to generate (default: 512).
- timeout
Timeout in seconds for the request (default: 120, matching the usage signature).
- verbose
Logical; if TRUE, prints progress messages (default: FALSE).
See also
Other ai:
call_gemini_chat(),
call_llm_api(),
call_openai_chat(),
check_ollama(),
describe_image(),
generate_topic_content(),
get_api_embeddings(),
get_best_embeddings(),
get_content_type_prompt(),
get_content_type_user_template(),
get_recommended_ollama_model(),
list_ollama_models(),
run_rag_search()
Examples
# Not run: requires a running Ollama server
response <- call_ollama(
  prompt = "Summarize these keywords: machine learning, neural networks, AI",
  model = "tinyllama"
)
print(response)
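A second sketch exercising the optional arguments documented above (system, temperature, max_tokens, verbose); it assumes an Ollama server is running locally with the tinyllama model pulled, and the system prompt text is illustrative only.

# Not run: requires a running Ollama server with tinyllama available
response <- call_ollama(
  prompt = "List three applications of neural networks.",
  model = "tinyllama",
  system = "You are a concise technical assistant.",  # optional system instructions
  temperature = 0.1,   # lower value for more deterministic output
  max_tokens = 256,
  verbose = TRUE       # print progress messages during the request
)
print(response)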
