How to Install LM Studio - A Graphical Application for Running Large Language Models (LLMs)

Updated on 04 April, 2025

LM Studio is a graphical desktop application based on llama.cpp that runs Large Language Models (LLMs) locally. It supports open-source models from platforms such as Hugging Face, which you can download directly within the LM Studio interface. LM Studio runs GGUF format models on any supported operating system and MLX format models on macOS. After installation, you can run popular models such as Llama, DeepSeek-R1, Mistral, Gemma, Granite, and Phi locally.

This article explains how to install LM Studio on Linux, macOS, or Windows and run Large Language Models (LLMs) locally on your workstation. You will enable API access and use LM Studio to download and run models such as DeepSeek-R1, Qwen 2.5, and Gemma 3 for integration into your existing applications.

Prerequisites

Before you begin, you need to:

  • Have access to a desktop workstation with the following processor requirements, depending on your operating system.

    • Mac: Apple Silicon (M1, M2, M3, or M4).
    • Windows: x64 or ARM64.
    • Linux: x64 with AVX2 support.

Install LM Studio

LM Studio is available as a standalone application file you can download and install from the official website. You can install LM Studio on Linux, Mac, or Windows, depending on your workstation's operating system. Follow the steps below to download and install LM Studio on your workstation.

  1. Visit the official LM Studio download page.

  2. Select your operating system, architecture, and the LM Studio version to download.

  3. Click Download LM Studio to download the latest release package for your operating system.

    Download LM Studio
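
    Alternatively, if you are installing LM Studio on a headless Linux machine, copy the AppImage download link from the page and fetch it with wget. The command below is a minimal sketch; <download-url> is a placeholder for the link you copied, and the file is saved to your Downloads directory so the Linux steps below apply unchanged.

    console
    $ wget -P ~/Downloads <download-url>
    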

Install LM Studio on Linux
  1. Open a new terminal window.

  2. Navigate to the Downloads directory.

    console
    $ cd Downloads
    
  3. Grant execute permissions on the LM Studio AppImage. Adjust the filename to match your downloaded version, such as LM-Studio-0.3.14-5.

    console
    $ chmod +x LM-Studio-0.3.14-5-x64.AppImage
    
  4. Extract the LM Studio AppImage contents.

    console
    $ ./LM-Studio-0.3.14-5-x64.AppImage --appimage-extract
    
  5. Navigate to the squashfs-root directory.

    console
    $ cd squashfs-root
    
  6. Change the chrome-sandbox ownership to the root user and group.

    console
    $ sudo chown root:root chrome-sandbox
    
  7. Set the 4755 (setuid) permission mode on the chrome-sandbox binary file.

    console
    $ sudo chmod 4755 chrome-sandbox
    
  8. Run the lm-studio binary to open LM Studio and verify that the installation is successful.

    console
    $ ./lm-studio
    

    Open LM Studio

Configure LM Studio as a System Service in Linux

You can start LM Studio by running the lm-studio binary on a Linux desktop or lms server start on a Linux server. Configuring LM Studio as a system service starts the application automatically at boot and adds a system tray icon you can use to manage it. Follow the steps below to create a new system service that starts LM Studio at boot, manages API connections, and runs LLMs.
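
If your Linux server is headless, you can also start only the API server from the terminal with the lms CLI instead of the graphical application. The command below is a minimal sketch and assumes the lms CLI that ships with LM Studio is available on your PATH.

    console
    $ lms server start
    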

  1. Press Control + C to close LM Studio in your terminal.

  2. Move the LM Studio squashfs-root directory to /opt and rename it to lm-studio.

    console
    $ sudo mv ~/Downloads/squashfs-root/ /opt/lm-studio
    
  3. Check your current display session to use in the system service.

    console
    $ echo $DISPLAY
    

    Output:

    :1
  4. Create a new lmstudio.service file.

    console
    $ sudo nano /etc/systemd/system/lmstudio.service
    
  5. Enter the following service configurations into the file. Replace the :1 display session with the value from the previous step, <user> and <group> with your Linux username and group, and <uid> with your user's numeric ID.

    systemd
    [Unit]
    Description=LM Studio Service
    After=network.target
    
    [Service]
    Type=simple
    ExecStart=/opt/lm-studio/lm-studio --run-as-a-service
    Restart=always
    User=<user>
    Group=<group>
    Environment=DISPLAY=:1
    Environment=XDG_RUNTIME_DIR=/run/user/<uid>
    
    [Install]
    WantedBy=multi-user.target
    

    Save the file and close the text editor.

    The system service configuration above runs the LM Studio binary from the installation directory under your Linux user profile, allowing systemd to start and manage LM Studio as a service.
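
    Use the following commands to find the values for the <user>, <group>, and <uid> placeholders.

    console
    $ whoami    # Linux username for User=
    $ id -gn    # primary group for Group=
    $ id -u     # numeric user ID for the XDG_RUNTIME_DIR path
    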

  6. Reload systemd to apply the service configuration.

    console
    $ sudo systemctl daemon-reload
    
  7. Enable the LM Studio service to start at boot.

    console
    $ sudo systemctl enable lmstudio
    
  8. Start the LM Studio system service.

    console
    $ sudo systemctl start lmstudio
    
  9. View the LM Studio service status and verify that it’s running.

    console
    $ sudo systemctl status lmstudio
    

    Output:

    ● lmstudio.service - LM Studio Service
     Loaded: loaded (/etc/systemd/system/lmstudio.service; enabled; preset: enabled)
     Active: active (running) since Thu 2025-04-03 20:38:12 EAT; 397ms ago
      Main PID: 1145694 (lm-studio)
      Tasks: 18 (limit: 37946)
     Memory: 113.7M (peak: 113.7M)
        CPU: 433ms
     CGroup: /system.slice/lmstudio.service
             ├─1145694 /opt/lm-studio/lm-studio --run-as-a-service
             ├─1145697 "/opt/lm-studio/lm-studio --type=zygote --no-zygote-sandbox"
             ├─1145698 /opt/lm-studio/chrome-sandbox /opt/lm-studio/lm-studio --type=zygote
             ├─1145699 "/opt/lm-studio/lm-studio --type=zygote"
             └─1145701 "/opt/lm-studio/lm-studio --type=zygote"
  10. Verify that LM Studio opens with a new system tray icon.

  11. Right-click the LM Studio tray icon to manage the application.

    LM Studio Tray Icon
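
  12. Optionally, follow the LM Studio service logs to troubleshoot startup issues.

    console
    $ sudo journalctl -u lmstudio -f
    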

Configure LM Studio

You can configure LM Studio directly through the application interface or by using the lms CLI tool. Follow the steps below to configure LM Studio to browse and download LLMs on your workstation.
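
If you prefer working in a terminal, the lms CLI exposes similar information. The commands below are a minimal sketch and assume the lms CLI that ships with LM Studio is on your PATH; run them after you download a model in the steps below.

    console
    $ lms ls    # list models downloaded to your workstation
    $ lms ps    # list models currently loaded into memory
    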

  1. Open LM Studio if it's not running.

  2. Click Get your first LLM to set up LM Studio.

    Set up LM Studio LLM Page

  3. Verify the default selected model and click Download to fetch the model files.

  4. Click Start New Chat to open the LM Studio chat interface.

  5. Click Select a model to load on the top selection bar.

    Select Model

  6. Verify the downloaded model parameters, version, and size. Then, click the model to load it in your chat interface. For example, load the default deepseek-r1-distill-qwen-7b model.

  7. Enter a prompt like Add 3 random numbers divisible by 10.

  8. Check the number of input tokens and press Enter to send the prompt.

  9. Verify the generated result and processing summary in the model output.

    View Model Results

  10. Click Power User or Developer on the bottom navigation bar to enable advanced configuration options in LM Studio.

  11. Click Discover on the main navigation menu.

    Click Discover

  12. Click Model Search to search for new models to download.

  13. Click Runtime to manage the runtime extension packs.

  14. Click Hardware to verify the system architecture and memory.

    Manage LM Studio Hardware

  15. Change the default Guardrails policy to control how models are loaded based on your system's memory and performance.

  16. Click App Settings to modify the LM Studio application interface.

    Manage LM Studio App Settings

  17. Click Check for updates to check for newer versions and update LM Studio to the latest version.

Download and Run LLMs in LM Studio

You can browse, download, and run LLMs in LM Studio to interact with models on your workstation. LM Studio includes a default model library linked to Hugging Face that features popular LLMs like DeepSeek and Gemma. After downloading, you can use the models locally without an internet connection. Follow the steps below to download and run LLMs in LM Studio.
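
You can also download and load models from the terminal with the lms CLI as a complement to the steps below. The commands are a minimal sketch: lms load is available in current releases, while the lms get download subcommand is an assumption based on recent LM Studio versions and may not exist in older ones. Replace <model-name> with the model you want to use.

    console
    $ lms get <model-name>     # download a model (assumed available in newer LM Studio versions)
    $ lms load <model-name>    # load a downloaded model into memory
    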

  1. Click Discover on the main navigation menu.

  2. Click Model Search to browse the models library.

  3. Enter a model name in the search bar.

    Search Models

  4. Browse the available models based on:

    • LM Studio Staff Picks: Models curated and quantized by the LM Studio team.
    • Hugging Face: Open-source models from Hugging Face and the LM Studio community repository.
  5. Select a model from the list.

  6. Click Download to fetch the model files and add the model to LM Studio.

  7. Click Downloads in the bottom left corner to monitor downloads in LM Studio.

  8. Click Models to view all downloaded models available in LM Studio.

    View Available Models

  9. Click LLMs or Text Embedding to view the models locally available in LM Studio.

  10. Click Chat to open the LM Studio chat interface.

  11. Select a model to load in LM Studio.

  12. Type a message in the prompt field and press Enter to send the prompt.

    Chat with LLMs in LM Studio

  13. Verify the model processing time, tokens information, and results in the chat output.

  14. Click New Chat in the open Chats pane to create a new chat.

  15. Click New Folder to organize and archive chats in folders.

Enable API Access in LM Studio

Enabling the LM Studio server lets you run LLMs on your system in headless mode without opening the graphical application. This allows you to run LM Studio on a remote server with a custom port and access downloaded models via API. Follow the steps below to enable the LM Studio server and run LLMs with OpenAI-compatible endpoints.

  1. Click Developer to access the LM Studio server options.

  2. Click Settings.

    Open LM Studio API Settings

  3. Replace 1234 with a custom port to set as the server port.

  4. Click Serve on Local Network to make LM Studio listen for connections on all IP addresses instead of only the localhost address 127.0.0.1. Keep the option off when using a reverse proxy like Nginx on the same machine.

  5. Click Just-in-Time Model Loading to automatically load a model when an API request references it.

  6. Click Auto Unload unused JIT loaded models to specify the max Idle TTL to automatically unload inactive models.

  7. Change the server status from Stopped to Running to start the LM Studio API server.

  8. Allow connections to the LM Studio API server port through the firewall on your workstation, depending on your operating system.

  9. Open a new terminal on your local workstation or install an API platform like Postman.

  10. Send an API request to the LM Studio API to test the connection to your server. For example, send an API request to the /v1/models endpoint, specifying your workstation's IP and LM Studio port to list all available models.

    console
    $ curl -X GET http://SERVER-IP:1234/v1/models
    

    Output:

    {
      "data": [
        {
          "id": "deepseek-r1-distill-qwen-7b",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "text-embedding-nomic-embed-text-v1.5",
          "object": "model",
          "owned_by": "organization_owner"
        }
      ],
      "object": "list"
    }
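
  11. Send a chat request to the OpenAI-compatible /v1/chat/completions endpoint to test text generation. The request below is a minimal sketch that assumes the deepseek-r1-distill-qwen-7b model listed above is downloaded; replace the model ID, prompt, and parameter values with your own.

    console
    $ curl http://SERVER-IP:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "deepseek-r1-distill-qwen-7b",
        "messages": [
          { "role": "system", "content": "You are a helpful assistant." },
          { "role": "user", "content": "Add 3 random numbers divisible by 10." }
        ],
        "temperature": 0.7,
        "max_tokens": 100,
        "stream": false
      }'
    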

Configure Nginx as a Reverse Proxy for LM Studio on Linux

Nginx is an open-source web server and reverse proxy that securely forwards connections to backend services. Configuring LM Studio with Nginx secures connections to your API server and lets you send API requests using your own domain, so you can integrate LLMs with applications such as websites or use them for RAG processing in your projects. Follow the steps below to install Nginx on a Linux server running Ubuntu and configure it as a reverse proxy that forwards API requests to the LM Studio server.

  1. Update the APT package index.

    console
    $ sudo apt update
    
  2. Install Nginx.

    console
    $ sudo apt install nginx -y
    
  3. Start the Nginx system service.

    console
    $ sudo systemctl start nginx
    
  4. Create a new lmstudio.conf Nginx server block configuration in the /etc/nginx/sites-available directory.

    console
    $ sudo nano /etc/nginx/sites-available/lmstudio.conf
    
  5. Add the following configurations to the lmstudio.conf file. Replace lmstudio.example.com with your actual domain that's pointed to your server's public IP address.

    nginx
    server {
        listen 80;
        server_name lmstudio.example.com;
    
        location / {
            proxy_pass http://127.0.0.1:1234;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
    

    Save and close the file.

    The Nginx configuration above listens for HTTP connection requests using the lmstudio.example.com domain and forwards all incoming requests to the LM Studio API server on port 1234.

  6. Link the lmstudio.conf file to the /etc/nginx/sites-enabled directory to activate it.

    console
    $ sudo ln -s /etc/nginx/sites-available/lmstudio.conf /etc/nginx/sites-enabled/
    
  7. Test the Nginx configuration for syntax errors.

    console
    $ sudo nginx -t
    

    Output:

    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
  8. Restart Nginx to apply the configuration changes.

    console
    $ sudo systemctl restart nginx
    
  9. Install the Certbot plugin for Nginx to generate SSL certificates.

    console
    $ sudo apt install certbot python3-certbot-nginx -y
    
  10. Allow HTTP connections through the default UFW firewall so Certbot can complete the SSL certificate verification.

    console
    $ sudo ufw allow http
    
  11. Reload UFW to apply the firewall changes.

    console
    $ sudo ufw reload
    
  12. Generate a new SSL certificate for your lmstudio.example.com domain.

    console
    $ sudo certbot --nginx -d lmstudio.example.com -m email@example.com --agree-tos
    
  13. Restart Nginx to apply the SSL configurations.

    console
    $ sudo systemctl restart nginx
    
  14. Allow HTTPS connections through the firewall.

    console
    $ sudo ufw allow 'Nginx Full'
    
  15. Reload UFW to apply the firewall changes.

    console
    $ sudo ufw reload
    
  16. Send a GET request to the /v1/models LM Studio server endpoint using your domain to list all available models.

    console
    $ curl -X GET https://lmstudio.example.com/v1/models
    

    Your output should be similar to the one below.

    {
      "data": [
        {
          "id": "deepseek-r1-distill-qwen-7b",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "text-embedding-nomic-embed-text-v1.5",
          "object": "model",
          "owned_by": "organization_owner"
        }
      ],
      "object": "list"
    }
  17. Get information about a specific model available on the LM Studio server.

    console
    $ curl https://lmstudio.example.com/api/v0/models/<model-name>
    
  18. Send a request to the completions endpoint, specifying a model and prompt to perform text completions.

    console
    $ curl https://lmstudio.example.com/api/v0/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "<model-name>",
        "prompt": "<prompt>",
        "temperature": 0.7,
        "max_tokens": 20,
        "stream": false,
        "stop": "\n"
      }'
    
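
  19. Generate embeddings with the OpenAI-compatible /v1/embeddings endpoint using your domain. The request below is a minimal sketch that uses the text-embedding-nomic-embed-text-v1.5 model listed earlier; replace the model ID and input text with your own.

    console
    $ curl https://lmstudio.example.com/v1/embeddings \
      -H "Content-Type: application/json" \
      -d '{
        "model": "text-embedding-nomic-embed-text-v1.5",
        "input": "LM Studio runs large language models locally."
      }'
    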

Conclusion

You have installed LM Studio and run Large Language Models (LLMs) on your workstation. You can download and run open-source models on your desktop workstation using LM Studio, giving you multiple options to integrate LLMs into your applications. Installing LM Studio on a remote server lets you use API endpoints to interact with models and integrate them into your applications. Visit the LM Studio documentation for more information and application options.
