The rlmstudio package bridges the gap between R and local Large Language Models by wrapping the LM Studio CLI and its REST API. This vignette covers the GUI workflow, which suits users who prefer a visual interface on desktop platforms such as macOS, Windows, and Linux.

While the R package provides functions to manage the entire lifecycle of a local LLM, the LM Studio desktop application offers an excellent visual search for finding new models and exposes advanced configuration options beyond what the API currently supports. You can seamlessly mix and match: use the GUI to discover and tweak models, and use R to automate your chatting and data processing.

Setup and Installation

This package relies on the LM Studio CLI. If you do not have LM Studio installed or need to update your version, the package provides a convenient setup function.

For desktop users, you can use the "browser" method to open the official download page.

library(rlmstudio)

# Open the download page in your default browser
install_lmstudio(method = "browser")
#> ℹ Opening the LM Studio download page in your default browser...
#> ! Please install or update the software, restart R, and try again.

Step-by-Step Guide

1. Start the Server

You have two options for starting the local server. You can open the LM Studio desktop application, navigate to the Developer or Local Server tab, and click “Start”. Alternatively, you can start it directly from R.

# Start the local server on the default port
lms_server_start()
#> ✔ LM Studio server started successfully on the default port.
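If the server fails to start (for example, because LM Studio is not installed or another process is already using the port), you can wrap the call in `tryCatch()` to fail gracefully instead of halting your script. This is a minimal sketch using only base R and the function shown above; it assumes `lms_server_start()` signals an R error on failure, which may not match the package's actual behavior.

```r
# Try to start the server; print guidance instead of halting on failure.
# ASSUMPTION: lms_server_start() throws an R error when it cannot start.
started <- tryCatch(
  {
    lms_server_start()
    TRUE
  },
  error = function(e) {
    message("Could not start the LM Studio server: ", conditionMessage(e))
    message("Is LM Studio installed? See install_lmstudio(method = \"browser\").")
    FALSE
  }
)
```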

2. Finding and Managing Models

The LM Studio GUI shines when it comes to discovering models. You can use its built-in search bar to browse Hugging Face, filter by compatibility, and select specific quantizations.

However, if you already know the exact identifier of the model you want, you can download it and manage your inventory directly from R.

# Download a model using its identifier
model <- "google/gemma-3-1b"
job_id <- lms_download(model)
#> ✔ Initiating download for model: "google/gemma-3-1b"... [973ms]
#> ✔ Download job started successfully. Job ID: "job_02c8a1f86e"

lms_download_status(job_id)
#> ── Download Job: "job_02c8a1f86e"
#> Status: completed
#> Progress: 100% (0.72 GB / 0.72 GB)
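If you want your script to wait for a download to finish before loading the model, you can poll the job in a loop. This is a hedged sketch: it assumes `lms_download_status()` returns a list-like object with a `status` field taking values such as `"completed"` or `"failed"`, which is an assumption about the package's return value, not a documented contract.

```r
# Poll a download job until it finishes or a timeout is reached.
# ASSUMPTION: lms_download_status() returns a list-like object with a
# $status field ("completed", "failed", or an in-progress value).
wait_for_download <- function(job_id, poll_seconds = 5, timeout_seconds = 600) {
  deadline <- Sys.time() + timeout_seconds
  repeat {
    job <- lms_download_status(job_id)
    if (job$status %in% c("completed", "failed")) {
      return(job)
    }
    if (Sys.time() > deadline) {
      stop("Timed out waiting for download job: ", job_id)
    }
    Sys.sleep(poll_seconds)
  }
}
```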

3. Loading Models

Before you can chat with a model, you must load it into system memory. We include the optional flash_attention = TRUE argument here, which speeds up processing and reduces memory usage on supported hardware.

# Standard load
lms_load(model, flash_attention = TRUE)
#> ✔ Model "google/gemma-3-1b" loaded and verified. [30.8s]

4. Chatting

Interact with the model by sending it text prompts. The lms_chat() function takes a few key arguments to guide the AI’s response:

  • input: This is your main message or question for the model.

  • system_prompt: This is an optional set of background instructions. You use it to tell the AI how to behave, what role to play, or how to format its answers (like asking it to act as an expert R programmer).

Note: The rlmstudio package currently processes each chat request independently. Every time you run the lms_chat() function, you are starting a brand new conversation. The model will not remember previous messages or context from earlier in your R script.

response <- lms_chat(
  model = model,
  input = "Say hello!",
  system_prompt = "Answer in rhymes."
)

cat(response)
#> A friendly face, so bright and new,
#> Hello there, it’s waiting for you!
#>
#> Let's chat and have a joyful spree,
#> Hello there, happy as can be!
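Because each `lms_chat()` call is independent, it is straightforward to map the function over a vector of inputs for batch processing. The sketch below assumes `lms_chat()` returns a length-one character vector, consistent with the `cat(response)` usage above; the prompts and labels are purely illustrative.

```r
# Classify a few short texts by mapping lms_chat() over a character vector.
texts <- c("I loved this movie!", "The service was terrible.")

sentiments <- vapply(
  texts,
  function(x) {
    lms_chat(
      model = model,
      input = x,
      system_prompt = "Reply with a single word: positive or negative."
    )
  },
  FUN.VALUE = character(1)
)
```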

5. Teardown

To free up memory and system resources when you are finished, it is best practice to unload your models and stop the local server. Closing the LM Studio GUI will also perform this cleanup if you forget.

# Unload the model
lms_unload(model)
#> ✔ Model "google/gemma-3-1b" unloaded successfully. [431ms]

# Stop the server
lms_server_stop()
#> ✔ LM Studio server stopped successfully.
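If you wrap this workflow in your own function, `on.exit()` guarantees that the teardown runs even when an error interrupts the chat. A sketch, assuming the setup and teardown functions behave as shown above; the wrapper name is hypothetical.

```r
# Run a single prompt with guaranteed teardown, even on error.
# on.exit() handlers run in the order unload -> stop thanks to after = FALSE.
with_local_llm <- function(model, input, ...) {
  lms_server_start()
  on.exit(lms_server_stop(), add = TRUE)

  lms_load(model)
  on.exit(lms_unload(model), add = TRUE, after = FALSE)

  lms_chat(model = model, input = input, ...)
}
```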