
Ollama Shortcuts

All Shortcuts

Model Management

| Command | Function | Description |
|---|---|---|
| `ollama pull <model>` | Download model | Download a model, e.g. `ollama pull llama3.2`. |
| `ollama pull <model>:<tag>` | Pull specific version | Pull a specific model version, e.g. `llama3.2:1b`. |
| `ollama list` | List local models | Show all downloaded models. |
| `ollama rm <model>` | Remove model | Delete a downloaded model to free disk space. |
| `ollama show <model>` | Model info | Show model parameters, template, and system prompt. |
| `ollama cp <src> <dst>` | Copy model | Duplicate a model under a new name. |
| `ollama push <model>` | Publish model | Push a custom model to the Ollama registry. |
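A typical model-management session strings these commands together. This is a minimal sketch assuming Ollama is installed locally; `llama3.2` and `my-llama` are example names, not requirements.

```shell
# Assumes Ollama is installed; llama3.2 is just an example model name.
ollama pull llama3.2          # download the model from the registry
ollama list                   # confirm it appears among local models
ollama show llama3.2          # inspect parameters, template, system prompt
ollama cp llama3.2 my-llama   # duplicate under a working name
ollama rm my-llama            # remove the copy to free disk space
```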

Running & Inference

| Command | Function | Description |
|---|---|---|
| `ollama run <model>` | Run interactive | Start an interactive chat with the model. |
| `ollama run <model> "prompt"` | Single prompt | Run a one-shot prompt and exit. |
| `cat file.txt \| ollama run <model>` | Pipe input | Pipe file content as the model input. |
| `/bye` | Exit chat | Exit the interactive chat session. |
| `/clear` | Clear history | Clear the current conversation history. |
| `/set system <text>` | Set system prompt | Assign a system prompt during chat. |
| `/show info` | Model details | Display current model information mid-session. |
| `/show modelfile` | Show Modelfile | Print the Modelfile for the current model. |
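The run modes above can be sketched as follows; `llama3.2` and `notes.txt` are assumed examples, and the slash commands only work inside an interactive session, so they are shown as comments here.

```shell
# One-shot prompt (assumes llama3.2 has been pulled):
ollama run llama3.2 "Summarize: Ollama runs LLMs locally."

# Pipe a file's contents as the prompt:
cat notes.txt | ollama run llama3.2

# Inside an interactive session started with `ollama run llama3.2`:
#   /set system You are a terse assistant.   (change the system prompt)
#   /show info                               (inspect the loaded model)
#   /clear                                   (reset conversation history)
#   /bye                                     (leave the session)
```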

Server & API

| Command | Function | Description |
|---|---|---|
| `ollama serve` | Start server | Start the Ollama API server on port 11434. |
| `OLLAMA_HOST=0.0.0.0 ollama serve` | Expose server | Allow external connections to the Ollama server. |
| `curl localhost:11434/api/generate` | Generate API | Call the REST API to generate a response. |
| `curl localhost:11434/api/chat` | Chat API | Call the REST API for a chat conversation. |
| `curl localhost:11434/api/tags` | List models API | List all available models via the REST API. |
| `OLLAMA_MODELS=<path>` | Custom model dir | Set a custom path for storing model files. |
| `OLLAMA_NUM_PARALLEL=4` | Parallel requests | Set the number of concurrent inference requests. |
| `OLLAMA_MAX_LOADED_MODELS=2` | Max loaded models | Set the maximum number of models kept in GPU memory. |
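The `/api/generate` and `/api/chat` endpoints take a JSON body. A minimal sketch, assuming `ollama serve` is running on the default port and `llama3.2` has been pulled (`stream: false` returns one complete JSON response instead of a stream):

```shell
# Non-streaming generate request:
curl -s localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# Chat request with a message history:
curl -s localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'

# List local models:
curl -s localhost:11434/api/tags
```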

Modelfile

| Command | Function | Description |
|---|---|---|
| `FROM <model>` | Base model | Specify the base model to build from. |
| `SYSTEM "<text>"` | System prompt | Set a custom system prompt for the model. |
| `PARAMETER temperature 0.7` | Temperature | Set response creativity (0.0–1.0). |
| `PARAMETER num_ctx 4096` | Context length | Set the context window size in tokens. |
| `TEMPLATE "{{ .Prompt }}"` | Prompt template | Define a custom prompt formatting template. |
| `ollama create <name> -f Modelfile` | Build model | Create a custom model from a Modelfile. |
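Putting the directives together, a Modelfile can be written and built in a few lines. A sketch assuming `llama3.2` as the base model; `my-assistant` and the system prompt text are illustrative only.

```shell
# Write a minimal Modelfile (llama3.2 is an example base; adjust as needed)
cat > Modelfile <<'EOF'
FROM llama3.2
SYSTEM "You are a concise technical assistant."
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
EOF

# Build the custom model from the Modelfile, then run it
ollama create my-assistant -f Modelfile
ollama run my-assistant "Hello"
```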