
Chat & Inference

Interact with loaded LLMs to generate text and process batches of prompts.

lms_chat(): Chat completion with LM Studio
lms_chat_batch(): Batch chat completion with LM Studio
lms_chat_native(): Chat completion via the native API
lms_chat_openai(): Chat completion via the OpenAI compatibility API
lms_chat_openresponses(): Chat completion via the OpenResponses API
lms_score_expected(): Calculate expected scores and uncertainty from logprobs
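As a sketch of typical usage (the argument names `prompt`, `prompts`, and `model` are assumptions for illustration, not the package's documented signatures):

```r
# Hypothetical usage sketch; argument names are assumptions.
reply <- lms_chat(
  prompt = "Summarise the plot of Hamlet in one sentence.",
  model  = "qwen2.5-7b-instruct"  # any identifier reported by list_models()
)

# Batch variant: one completion per element of `prompts`.
replies <- lms_chat_batch(
  prompts = c("Define entropy.", "Define enthalpy."),
  model   = "qwen2.5-7b-instruct"
)
```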

Model Management

Discover, download, load, and manage local models.

list_models(): List available models
lms_download(): Download a model via the REST API
lms_download_status(): Get the status of a download job
lms_load(): Load a model via the REST API
lms_unload(): Unload a model from memory via the REST API
lms_unload_all(): Unload all models from memory
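A minimal discover-download-load-unload cycle might look like the following (the model identifier and argument shapes are assumptions; downloads are asynchronous, hence the status check):

```r
# Hypothetical lifecycle sketch; argument names are assumptions.
list_models()                            # discover what is available
job <- lms_download("qwen2.5-7b-instruct")
lms_download_status(job)                 # poll the download job
lms_load("qwen2.5-7b-instruct")          # bring the model into memory
lms_unload("qwen2.5-7b-instruct")        # free that one model
lms_unload_all()                         # or clear everything at once
```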

Local Server Management

Control the LM Studio local REST API server.

lms_server_start(): Start the LM Studio local server
lms_server_stop(): Stop the LM Studio local server
lms_server_status(): Check the status of the LM Studio server
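A typical session brackets its work between start and stop calls; this sketch assumes the three functions run with no required arguments:

```r
# Session bracket sketch; assumes no required arguments.
lms_server_start()
lms_server_status()   # confirm the local REST API is reachable
# ... chat and model-management calls go here ...
lms_server_stop()
```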

Headless Daemon Management

Manage the background llmster daemon required for headless environments (e.g., remote servers, Docker).

lms_daemon_start(): Start the LM Studio headless daemon
lms_daemon_stop(): Stop the LM Studio headless daemon
lms_daemon_status(): Check the global status of LM Studio
with_lms_daemon(): Run code with the LM Studio daemon active
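On a headless host you can either manage the daemon explicitly or use the scoped wrapper; the assumption here is that with_lms_daemon() takes a code block in the withr style, which the index alone does not confirm:

```r
# Scoped form: daemon is active only while the block runs (assumed
# withr-style interface; this is a guess, not the documented signature).
with_lms_daemon({
  lms_load("qwen2.5-7b-instruct")
  lms_chat(prompt = "ping", model = "qwen2.5-7b-instruct")
})

# Explicit form: start, inspect, and stop the daemon yourself.
lms_daemon_start()
lms_daemon_status()
lms_daemon_stop()
```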

Setup & Configuration

Install, locate, and verify the LM Studio CLI installation.

install_lmstudio(): Help the user install or update LM Studio
has_lms(): Check if the LM Studio CLI is installed
check_lms_version(): Check if the installed LM Studio CLI meets the minimum requirement
lms_path(): Get the absolute path to the LMS executable
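A guarded setup check is the usual pattern: verify the CLI exists before anything else, then confirm its version. This sketch assumes has_lms() returns a logical and that the other functions take no required arguments:

```r
# Setup-check sketch; return types and argument shapes are assumptions.
if (!has_lms()) {
  install_lmstudio()   # guides the user through installation
}
check_lms_version()    # verify the CLI meets the minimum requirement
lms_path()             # absolute path to the `lms` executable
```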