How to Install LM Studio - A Graphical Application for Running Large Language Models (LLMs)

Updated on 29 April, 2025

LM Studio is a graphical desktop application based on llama.cpp that runs Large Language Models (LLMs) locally. It supports open-source models in the GGUF format from platforms like Hugging Face, which you can download directly from the LM Studio interface. After installation, you can run popular models such as Llama, DeepSeek-R1, Mistral, Gemma, Granite, and Phi locally.

This article explains how to install LM Studio on Linux and run Large Language Models (LLMs) locally on your workstation. You will enable API access and use LM Studio to download and run models such as DeepSeek-R1 and Gemma 3 for integration with your existing applications.

Prerequisites

Before you begin, you need to:

  • Have access to a GUI-enabled Linux remote instance with a GPU, or a desktop workstation with an x86 processor that supports AVX2.

    Note
    This article uses Ubuntu 24.04 to demonstrate the installation steps.
  • Have a domain name such as example.com if you want to access LM Studio remotely with TLS encryption.
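
You can confirm AVX2 support before installing. The following check assumes a standard Linux /proc filesystem and prints avx2 once if your processor supports the instruction set:

    console
    $ grep -o -m 1 avx2 /proc/cpuinfo
    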

Install LM Studio

LM Studio is available as a standalone application file you can download and install from the official website. Follow the steps below to download and install LM Studio on your workstation.

  1. Visit the official LM Studio download page.

  2. Select the Linux operating system and the LM Studio version to download.

  3. Click Download LM Studio to download the latest release package for your operating system. If you're using a remote server, copy the download link and use the wget tool to download the AppImage.

    console
    $ wget https://installers.lmstudio.ai/linux/x64/0.3.15-11/LM-Studio-0.3.15-11-x64.AppImage
    
  4. Open a new terminal window.

  5. Install the required dependencies for LM Studio.

    console
    $ sudo apt install libatk1.0-0 libatk-bridge2.0-0 libcups2 libgdk-pixbuf2.0-0 libgtk-3-0 libpango-1.0-0 libcairo2 libxcomposite1 libxdamage1 libasound2t64 libatspi2.0-0
    
  6. Navigate to the directory where you saved the LM Studio AppImage, for example, the Downloads directory.

    console
    $ cd ~/Downloads
    
  7. Add execute privileges to the LM Studio AppImage file.

    console
    $ sudo chmod +x LM-Studio-0.3.15-11-x64.AppImage
    
  8. Extract the LM Studio AppImage contents.

    console
    $ ./LM-Studio-0.3.15-11-x64.AppImage --appimage-extract
    
  9. Navigate to the extracted squashfs-root directory.

    console
    $ cd squashfs-root
    
  10. Change the chrome-sandbox ownership to the root user and group.

    console
    $ sudo chown root:root chrome-sandbox
    
  11. Set the chrome-sandbox binary file permissions to 4755 to enable the setuid bit that the sandbox requires.

    console
    $ sudo chmod 4755 chrome-sandbox
    
  12. Run the lm-studio script to open LM Studio and verify that the installation was successful.

    console
    $ ./lm-studio
    

    Open LM Studio

    Note
    To run the lm-studio binary on a remote server with no GUI, use X11 forwarding by logging in to your server with ssh -X linuxuser@hostname. Without it, the application fails to start because it cannot find a display.
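
    For example, assuming a user named linuxuser and a server at 192.0.2.10 (both hypothetical values):

    console
    $ ssh -X linuxuser@192.0.2.10
    $ cd ~/Downloads/squashfs-root && ./lm-studio
    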

Configure LM Studio as a System Service in Linux

You can start LM Studio by running the lm-studio script on a Linux desktop or lms server start on a Linux server. Configuring LM Studio as a system service lets systemd start the application automatically at boot so it can manage API connections and run LLMs without manual intervention. Follow the steps below to create the new system service.

  1. Press Control + C to close LM Studio in your terminal.

  2. Move the LM Studio squashfs-root directory to /opt and rename it to lm-studio.

    console
    $ sudo mv ~/Downloads/squashfs-root/ /opt/lm-studio
    
  3. Note your current display session to use in the system service.

    console
    $ echo $DISPLAY
    

    Output:

    :1
  4. Create a new lmstudio.service file.

    console
    $ sudo nano /etc/systemd/system/lmstudio.service
    
  5. Enter the following service configurations into the file.

    systemd
    [Unit]
    Description=LM Studio Service
    After=network.target
    
    [Service]
    Type=simple
    ExecStart=/opt/lm-studio/lm-studio --run-as-a-service
    Restart=always
    User=<user>
    Group=<group>
    Environment=DISPLAY=:1
    Environment=XDG_RUNTIME_DIR=/run/user/<uid>
    
    [Install]
    WantedBy=multi-user.target
    

    In the above file:

    • Replace <user> with your user name.
    • Replace <group> with your user group.
    • Set the Environment=DISPLAY= value to your current session's $DISPLAY value, which you noted earlier.
    • Replace <uid> in the Environment=XDG_RUNTIME_DIR= line with your numeric user ID. systemd does not expand shell substitutions such as $(id -u), so enter the number directly; see the example commands below.

    Save the file and close the text editor.

    The system service configuration above runs the LM Studio script in the installation directory with your Linux user profile, allowing you to start LM Studio as a service.
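
    For example, assuming your user name is linuxuser (a hypothetical value), look up the UID and group to substitute:

    console
    $ id -u linuxuser    # numeric UID for the XDG_RUNTIME_DIR path
    $ id -gn linuxuser   # primary group name for the Group= directive
    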

  6. Reload systemd to apply the service configuration.

    console
    $ sudo systemctl daemon-reload
    
  7. Start the LM Studio system service.

    console
    $ sudo systemctl start lmstudio
    

    The LM Studio GUI opens.

  8. View the LM Studio service status and verify it’s running.

    console
    $ sudo systemctl status lmstudio
    

    Output:

    ● lmstudio.service - LM Studio Service
     Loaded: loaded (/etc/systemd/system/lmstudio.service; enabled; preset: enabled)
     Active: active (running) since Thu 2025-04-03 20:38:12 EAT; 397ms ago
      Main PID: 1145694 (lm-studio)
      Tasks: 18 (limit: 37946)
     Memory: 113.7M (peak: 113.7M)
        CPU: 433ms
     CGroup: /system.slice/lmstudio.service
             ├─1145694 /opt/lm-studio/lm-studio --run-as-a-service
             ├─1145697 "/opt/lm-studio/lm-studio --type=zygote --no-zygote-sandbox"
             ├─1145698 /opt/lm-studio/chrome-sandbox /opt/lm-studio/lm-studio --type=zygote
             ├─1145699 "/opt/lm-studio/lm-studio --type=zygote"
             └─1145701 "/opt/lm-studio/lm-studio --type=zygote"
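
    To have systemd start LM Studio automatically at boot, as described in the introduction to this section, enable the service:

    console
    $ sudo systemctl enable lmstudio
    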
  9. To stop LM Studio, run:

    console
    $ sudo systemctl stop lmstudio.service
    

Configure LM Studio

You can configure LM Studio directly through the application interface or with the lms CLI tool (see the sketch that follows). The steps below configure LM Studio to browse and download LLMs on your workstation.
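
A minimal sketch of the CLI route, assuming you have launched LM Studio at least once so that the bundled lms binary exists under ~/.lmstudio:

    console
    $ ~/.lmstudio/bin/lms bootstrap    # add lms to your PATH (first run only)
    $ lms ls                           # list downloaded models
    $ lms server start                 # start the local API server
    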

  1. Open LM Studio if it's not running.

  2. Click Get your first LLM to set up LM Studio.

    Set up LM Studio LLM Page

  3. Click Download to fetch the model files for the default model.

  4. Click Start New Chat to open the LM Studio chat interface.

  5. Click Select a model to load on the top selection bar.

    Select Model

  6. Verify the downloaded model parameters, version, and size. Then, click the model to load it in your chat interface. For example, load the default deepseek-r1-distill-qwen-7b model.

  7. Enter a prompt like Add 3 random numbers divisible by 10.

  8. Check the number of input tokens and press Enter to send the prompt.

  9. Verify the generated result and processing summary in the model output.

    View Model Results

  10. Click Power User or Developer on the bottom navigation bar to switch to advanced configuration options in LM Studio.

  11. Click Discover on the main navigation menu.

    Click Discover

  12. Click Model Search to find new models to download.

  13. Click Runtime to manage the runtime extension packs.

  14. Click Hardware to verify the system architecture and memory.

    Manage LM Studio Hardware

  15. Optionally, change the default Guardrails policy to control how models are loaded based on system memory and performance.

  16. Click App Settings to modify the LM Studio application interface.

    Manage LM Studio App Settings

  17. Click Check for updates to check for newer versions and update LM Studio to the latest version.

Download and Run LLMs in LM Studio

You can browse, download, and run LLMs in LM Studio to interact with models on your workstation. LM Studio includes a default models library linked to Hugging Face that features popular LLMs like DeepSeek and Gemma. After download, you can use the models locally without an internet connection. Follow the steps below to download and run LLMs in LM Studio.
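
Alternatively, if you already know the model you want, you can fetch it from the terminal with the lms CLI bootstrapped earlier; a sketch, using the model identifier that appears later in this guide:

    console
    $ lms get deepseek-r1-distill-qwen-7b
    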

  1. Click Discover on the main navigation menu.

  2. Click Model Search to browse the models library.

  3. Enter a model name in the search bar.

    Search Models

  4. Browse the available models based on:

    • LM Studio Staff Picks: Models quantized and maintained by the LM Studio team.
    • Hugging Face: Open-source models from Hugging Face and the LM Studio community repository.
  5. Select a model from the list.

  6. Click Download to fetch the model files and add the model to LM Studio.

  7. Click Downloads in the bottom left corner to monitor downloads in LM Studio.

  8. Click Models to view all downloaded models available in LM Studio.

    View Available Models

  9. Click LLMs or Text Embedding to view the models locally available in LM Studio.

  10. Click Chat to open the LM Studio chat interface.

  11. Select a model to load in LM Studio.

  12. Type a message in the prompt field and press Enter to send the prompt.

    Chat with LLMs in LM Studio

  13. Verify the model processing time, tokens information, and results in the chat output.

  14. Click New Chat in the open Chats pane to create a new chat.

  15. Click New Folder to organize and archive chats in folders.

Enable API Access in LM Studio

Enabling the LM Studio server lets you run LLMs on your system in headless mode without opening the graphical application. This allows you to run LM Studio on a remote server with a custom port and access downloaded models via its API. Follow the steps below to enable the LM Studio server and run LLMs through OpenAI-compatible endpoints.

  1. Click Developer to access the LM Studio server options.

  2. Click Settings.

    Open LM Studio API Settings

  3. Optionally, replace the default port 1234 with a custom server port.

  4. Click Serve on Local Network to make LM Studio listen for connections on all IP addresses instead of only localhost (127.0.0.1). Keep the option off when using a reverse proxy like Nginx.

  5. Click Just-in-Time Model Loading to automatically load models immediately after an API request.

  6. Click Auto Unload unused JIT loaded models to specify the max Idle TTL to automatically unload inactive models.

  7. Change the server status from Stopped to Running to start the LM Studio API server.

  8. Configure your firewall to allow connections to the LM Studio API server port, as shown in the example below.
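
    For example, if your server uses UFW (as the Nginx section later in this guide assumes), allow the default port, adjusting it if you set a custom one:

    console
    $ sudo ufw allow 1234/tcp
    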

  9. Open a new terminal on your local workstation or use an API testing tool like Postman to test the LM Studio API.

  10. Send an API request to the LM Studio API to test the connection to your server. For example, send an API request to the /v1/models endpoint, specifying your workstation's IP and LM Studio port to list all available models.

    console
    $ curl -X GET http://SERVER-IP:1234/v1/models
    

    Output:

    {
      "data": [
        {
          "id": "deepseek-r1-distill-qwen-7b",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "text-embedding-nomic-embed-text-v1.5",
          "object": "model",
          "owned_by": "organization_owner"
        }
      ],
      "object": "list"
    }
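
    Because the server exposes OpenAI-compatible endpoints, you can also test text generation. A sketch using the /v1/chat/completions endpoint and the model downloaded earlier:

    console
    $ curl http://SERVER-IP:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "deepseek-r1-distill-qwen-7b",
        "messages": [{ "role": "user", "content": "Say hello in one sentence." }],
        "temperature": 0.7,
        "max_tokens": 50
      }'
    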

Configure Nginx as a Reverse Proxy for LM Studio on Linux

Nginx is an open-source web server and reverse proxy that securely forwards connections to backend services. Configuring LM Studio behind Nginx secures connections to your API server and lets you address the API with your own domain, making it easier to integrate LLMs with applications such as websites or RAG pipelines in your projects. Follow the steps below to install Nginx on your Ubuntu Linux server and configure it as a reverse proxy that forwards API requests to the LM Studio server.

  1. Update the APT package index.

    console
    $ sudo apt update
    
  2. Install Nginx.

    console
    $ sudo apt install nginx -y
    
  3. Start the Nginx system service.

    console
    $ sudo systemctl start nginx
    
  4. Create a new lmstudio.conf Nginx server block configuration in the /etc/nginx/sites-available directory.

    console
    $ sudo nano /etc/nginx/sites-available/lmstudio.conf
    
  5. Add the following configurations to the lmstudio.conf file. Replace lmstudio.example.com with your actual domain that points to your server's public IP address.

    nginx
    server {
        listen 80;
        server_name lmstudio.example.com;
    
        location / {
            proxy_pass http://127.0.0.1:1234;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
    

    Save and close the file.

    The Nginx configuration above listens for HTTP connection requests on the lmstudio.example.com domain and forwards all incoming requests to the LM Studio API server on port 1234.

  6. To bring this configuration into effect, link the lmstudio.conf file to the /etc/nginx/sites-enabled directory.

    console
    $ sudo ln -s /etc/nginx/sites-available/lmstudio.conf /etc/nginx/sites-enabled/
    
  7. Test the Nginx configuration for syntax errors.

    console
    $ sudo nginx -t
    

    Output:

    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
  8. Restart Nginx to apply the configuration changes.

    console
    $ sudo systemctl restart nginx
    
  9. Install the Certbot plugin for Nginx to generate SSL certificates.

    console
    $ sudo apt install certbot python3-certbot-nginx -y
    
  10. Allow HTTP connections through the default firewall so that Certbot can verify your domain over HTTP.

    console
    $ sudo ufw allow http
    
  11. Restart UFW to apply the firewall changes.

    console
    $ sudo ufw reload
    
  12. Generate a new SSL certificate for your lmstudio.example.com domain.

    console
    $ sudo certbot --nginx -d lmstudio.example.com -m email@example.com --agree-tos
    
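    Certbot's APT package also installs a systemd timer that renews certificates automatically before they expire. Optionally, confirm that renewal will work with a dry run:

    console
    $ sudo certbot renew --dry-run
    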
  13. Restart Nginx to apply the SSL configurations.

    console
    $ sudo systemctl restart nginx
    
  14. Allow HTTPS connections through the firewall.

    console
    $ sudo ufw allow https
    
  15. Restart UFW.

    console
    $ sudo ufw reload
    
  16. Send a GET request to the /v1/models LM Studio server endpoint using your domain to list all available models.

    console
    $ curl -X GET https://lmstudio.example.com/v1/models
    

    Your output should be similar to the one below.

    {
      "data": [
        {
          "id": "deepseek-r1-distill-qwen-7b",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "text-embedding-nomic-embed-text-v1.5",
          "object": "model",
          "owned_by": "organization_owner"
        }
      ],
      "object": "list"
    }
  17. Get information about a specific model available on the LM Studio server.

    console
    $ curl https://lmstudio.example.com/api/v0/models/<model-name>
    
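    For example, using the model downloaded earlier in this guide:

    console
    $ curl https://lmstudio.example.com/api/v0/models/deepseek-r1-distill-qwen-7b
    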
  18. Send a request to the completions endpoint, specifying a model and prompt to perform text completions.

    console
    $ curl https://lmstudio.example.com/api/v0/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "<model-name>",
        "prompt": "<prompt>",
        "temperature": 0.7,
        "max_tokens": 20,
        "stream": false,
        "stop": "\n"
      }'
    

Conclusion

You have installed LM Studio and run Large Language Models (LLMs) on your workstation. You can download and run open-source models on your desktop using LM Studio, which gives you multiple options for integrating LLMs into your applications. Installing LM Studio on a remote server lets you use its API endpoints to interact with models and incorporate them into your applications. For more information and configuration options, visit the LM Studio documentation.
