How to Use Vultr's Elasticsearch Marketplace Application

Updated on 11 December, 2025
Learn how to deploy, configure, and optimize Vultr's Elasticsearch Marketplace Application for powerful search capabilities in your applications and websites.

Elasticsearch is a distributed, open-source search and analytics engine built on Apache Lucene. It delivers real-time indexing, full-text search, aggregations, and vector-based retrieval at scale. The Vultr Marketplace provides a pre-configured Elasticsearch instance, enabling quick deployment and setup on a Vultr server.

This guide explains how to deploy and use Vultr's Elasticsearch Marketplace Application. You will deploy an instance, configure security, verify health, index and search data, set up snapshots to Vultr Object Storage, and add basic monitoring and integrations.

Deploy Vultr's Elasticsearch Marketplace Application

  1. Log in to your Vultr Customer Portal and click the Deploy Server button.

  2. Select your preferred server type.

  3. Choose a server location.

  4. Select a server plan with at least 4GB RAM and 2 CPU cores for production workloads.

  5. Click the Configure button to proceed.

  6. Under Marketplace Apps, search for Elasticsearch and select it as the Marketplace Application.

  7. Select the Limited Login option from the Additional Features section to create a limited user with sudo access.

  8. Review your configurations and click the Deploy Now button to start deployment.

    Note
    It may take up to 10 minutes for your server to finish installing Elasticsearch.
  9. After the instance shows the status of Running, navigate to the Server Overview page and copy the SSH connection details.

Initial Setup and Configuration

After deployment, complete essential baseline tasks to make the instance reachable and secure. You will map a friendly domain, confirm service health, lock down network access, and enable built-in security before exposing the HTTP API to the internet.

  1. Create a DNS A record, such as elastic.example.com, that points to your server's IP address.

  2. Connect to your Vultr server instance over SSH using the connection details from the Server Information page.

Verify Elasticsearch Installation

  1. Check the Elasticsearch service status.

    console
    $ sudo systemctl status elasticsearch
    

    The service should show as active (running).

  2. Verify the installed Elasticsearch version.

    console
    $ sudo /usr/share/elasticsearch/bin/elasticsearch -V
    

    Output:

    Version: 8.19.5, Build: deb/d6dd0417f05cd69706f4f103c69bbb8b7688db9c/2025-10-03T16:35:50.165700789Z, JVM: 25

Configure Firewall Security

Secure your server by configuring the firewall to allow only necessary traffic.

  1. Allow SSH connections.

    console
    $ sudo ufw allow OpenSSH
    
  2. Allow HTTP and HTTPS traffic for Nginx and Certbot.

    console
    $ sudo ufw allow 80/tcp
    $ sudo ufw allow 443/tcp
    
  3. Enable the firewall.

    console
    $ sudo ufw enable
    
  4. Verify firewall status.

    console
    $ sudo ufw status
    

    If the status output shows an allow rule for port 9200 (Elasticsearch's HTTP API), leave it in place for now. You will remove it after enabling SSL through the Nginx reverse proxy.

Configure Reverse Proxy with Nginx

Set up Nginx as a reverse proxy to serve Elasticsearch over the standard HTTPS port with basic authentication, securing public access.

  1. Install the Nginx web server package.

    console
    $ sudo apt install nginx -y
    
  2. Create a password file for basic authentication.

    console
    $ sudo apt install apache2-utils -y
    $ sudo htpasswd -c /etc/nginx/.htpasswd admin
    

    Enter a secure password when prompted. This creates the admin user for accessing Elasticsearch.

  3. Create an Nginx virtual host configuration for Elasticsearch.

    console
    $ sudo nano /etc/nginx/sites-available/elasticsearch
    
    ini
    server {
        listen 80;
        server_name elastic.example.com;
    
        auth_basic "Elasticsearch Authentication";
        auth_basic_user_file /etc/nginx/.htpasswd;
    
        location / {
            proxy_pass http://localhost:9200;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_read_timeout 300;
            proxy_send_timeout 300;
        }
    }
    

    Replace elastic.example.com with your domain name.

    Save and close the file.

  4. Enable the Elasticsearch server block.

    console
    $ sudo ln -s /etc/nginx/sites-available/elasticsearch /etc/nginx/sites-enabled/
    
  5. Test the Nginx configuration syntax.

    console
    $ sudo nginx -t
    

    Output:

    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
  6. Reload Nginx to apply the changes.

    console
    $ sudo systemctl reload nginx
    

Secure Elasticsearch with Public SSL/TLS

Enable HTTPS access through the Nginx reverse proxy using Let's Encrypt certificates.

  1. Install Certbot and the Nginx plugin.

    console
    $ sudo apt install certbot python3-certbot-nginx -y
    
  2. Request an SSL certificate for your domain.

    console
    $ sudo certbot --nginx -d elastic.example.com
    

    Follow the prompts and select the option to redirect HTTP traffic to HTTPS when asked.

  3. Verify SSL certificate auto-renewal.

    console
    $ sudo certbot renew --dry-run
    
  4. Access Elasticsearch securely at https://elastic.example.com.

  5. Test the public endpoint with basic authentication:

    console
    $ curl -X GET "https://elastic.example.com/" -u admin:YOUR_PASSWORD
    

    You can now use this public HTTPS endpoint for external access, while server-side commands can continue using http://localhost:9200.

  6. Close direct port 9200 access now that traffic goes through the Nginx reverse proxy.

    console
    $ sudo ufw delete allow 9200/tcp
    

Configure OS-Level Tuning

Elasticsearch requires specific operating system settings for optimal performance and stability.

  1. Increase the virtual memory map count.

    console
    $ sudo sysctl -w vm.max_map_count=262144
    
  2. Disable swap to prevent performance degradation.

    console
    $ sudo swapoff -a
    

    This disables swap only until the next reboot. To keep swap off permanently, also comment out any swap entries in /etc/fstab.
  3. Persist the vm.max_map_count setting across reboots.

    console
    $ sudo nano /etc/sysctl.d/99-elasticsearch.conf
    

    Add the following lines:

    ini
    vm.max_map_count=262144
    

    Save and close the file.

  4. Configure the JVM heap size. Set it to approximately 50% of available RAM, and keep it below 32GB so the JVM can use compressed object pointers.

    console
    $ sudo nano /etc/elasticsearch/jvm.options.d/heap.options
    

    Add the following lines (adjust based on your server's RAM):

    ini
    -Xms2g
    -Xmx2g
    

    For a server with 8GB RAM, use -Xms4g and -Xmx4g. Save and close the file.

  5. Restart Elasticsearch to apply all changes.

    console
    $ sudo systemctl restart elasticsearch
    
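The 50% rule from step 4 can be computed directly on the server. A small sketch, assuming a Linux host with /proc/meminfo; the 31GB cap reflects the usual guidance to stay below the compressed-pointers threshold:

```shell
# Suggest a heap size of roughly half the system RAM (rounded down),
# capped at 31 GB so the JVM keeps using compressed object pointers.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
if [ "$half_gb" -gt 31 ]; then half_gb=31; fi
if [ "$half_gb" -lt 1 ]; then half_gb=1; fi
# Print the two lines to copy into heap.options
printf -- '-Xms%sg\n-Xmx%sg\n' "$half_gb" "$half_gb"
```

Copy the two printed lines into /etc/elasticsearch/jvm.options.d/heap.options.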

Explore Elasticsearch Features

Elasticsearch provides powerful search and analytics capabilities through its REST API. Verify your cluster is healthy before indexing data.

  1. Check cluster health status.

    console
    $ curl -X GET "http://localhost:9200/_cluster/health?pretty"
    

    A green status indicates all shards are allocated. A yellow status is normal for single-node setups, because replica shards have no second node to be placed on.

  2. List all indices.

    console
    $ curl -X GET "http://localhost:9200/_cat/indices?v"
    

Index and Search Data

Learn the core Elasticsearch workflow by creating an index, adding documents, and performing searches with filters and aggregations.

  1. Create a new index with mapping for products.

    console
    $ curl -X PUT "http://localhost:9200/products" \
      -H 'Content-Type: application/json' -d'
    {
      "mappings": {
        "properties": {
          "name": { "type": "text" },
          "description": { "type": "text" },
          "price": { "type": "float" },
          "category": { "type": "keyword" }
        }
      }
    }'
    
  2. Add a sample product document.

    console
    $ curl -X POST "http://localhost:9200/products/_doc/1" \
      -H 'Content-Type: application/json' -d'
    {
      "name": "Laptop Pro 15",
      "description": "High-performance laptop with 16GB RAM",
      "price": 1299.99,
      "category": "Electronics"
    }'
    
  3. Search with filters and aggregations.

    console
    $ curl -X GET "http://localhost:9200/products/_search?pretty" \
      -H 'Content-Type: application/json' -d'
    {
      "query": {
        "bool": {
          "must": [{ "match": { "description": "laptop" }}],
          "filter": [{ "range": { "price": { "gte": 1000 }}}]
        }
      },
      "aggs": {
        "avg_price": { "avg": { "field": "price" }}
      }
    }'
    
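When indexing more than a handful of documents, the _bulk endpoint is far faster than one request per document. A sketch of a bulk payload in NDJSON format (one action line, then one source line per document); the sample documents and the local validation step are illustrative:

```shell
# Two more documents for the products index, in bulk (NDJSON) format.
bulk_body='{"index":{"_index":"products","_id":"2"}}
{"name":"Wireless Mouse","price":24.99,"category":"Electronics"}
{"index":{"_index":"products","_id":"3"}}
{"name":"Mechanical Keyboard","price":49.99,"category":"Electronics"}
'
# Sanity-check locally: every non-empty line must be valid JSON on its own.
printf '%s' "$bulk_body" | python3 -c 'import sys,json; [json.loads(l) for l in sys.stdin if l.strip()]' && echo "payload ok"
# Send it (the bulk API requires a trailing newline and the NDJSON content type):
# curl -X POST "http://localhost:9200/_bulk" -H 'Content-Type: application/x-ndjson' --data-binary "$bulk_body"
```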

Integrations and Monitoring

Extend Elasticsearch's capabilities by connecting visualization dashboards, log shippers, and alerting systems. This section demonstrates setting up Kibana for interactive exploration and highlights other ecosystem tools for building a complete observability stack.

Set Up Kibana for Visualization

Kibana is the official visualization platform for Elasticsearch, providing dashboards, charts, and exploration tools.

  1. Install Kibana.

    console
    $ sudo apt install -y kibana
    
  2. Configure Kibana to connect to Elasticsearch.

    console
    $ sudo nano /etc/kibana/kibana.yml
    

    Update the configuration:

    yaml
    server.host: "0.0.0.0"
    elasticsearch.hosts: ["http://localhost:9200"]
    

    Save and close the file.

  3. Start and enable Kibana.

    console
    $ sudo systemctl start kibana
    $ sudo systemctl enable kibana
    
  4. Open port 5601 in the firewall (sudo ufw allow 5601/tcp) to access Kibana at http://elastic.example.com:5601, or configure an Nginx reverse proxy with SSL, similar to Elasticsearch, for production use.

Additional Integration Options

Elasticsearch supports integrations with various external tools:

  • Logstash: Data processing pipeline for ingesting, transforming, and enriching data before indexing.
  • Beats: Lightweight data shippers for logs, metrics, and uptime monitoring (Filebeat, Metricbeat, Heartbeat).
  • APM (Application Performance Monitoring): Monitor application performance and errors.
  • Machine Learning: Detect anomalies and patterns in your data.
  • Alerting: Set up alerts based on queries and conditions.

Configure Snapshots to Vultr Object Storage

Protect your data by configuring automated backups to Vultr Object Storage, which is S3-compatible. Use the Elasticsearch keystore to securely store credentials instead of plaintext in configuration files.

  1. Create a bucket in Vultr Object Storage. Note your bucket name and access credentials.

  2. Add your Vultr Object Storage credentials to the Elasticsearch keystore.

    console
    $ sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add s3.client.default.access_key
    

    Enter your access key when prompted.

    console
    $ sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add s3.client.default.secret_key
    

    Enter your secret key when prompted.

  3. Restart Elasticsearch to load the keystore changes.

    console
    $ sudo systemctl restart elasticsearch
    
  4. Register the S3 snapshot repository.

    console
    $ curl -X PUT "http://localhost:9200/_snapshot/vultr_s3" \
      -H 'Content-Type: application/json' -d'
    {
      "type": "s3",
      "settings": {
        "bucket": "YOUR_BUCKET_NAME",
        "endpoint": "ewr1.vultrobjects.com",
        "protocol": "https",
        "path_style_access": true
      }
    }'
    

    Replace YOUR_BUCKET_NAME with your bucket name and ewr1.vultrobjects.com with your region's endpoint (e.g., sjc1.vultrobjects.com for San Jose).

  5. Verify the repository and create your first snapshot.

    console
    $ curl -X GET "http://localhost:9200/_snapshot/vultr_s3/_all?pretty"
    
    $ curl -X PUT "http://localhost:9200/_snapshot/vultr_s3/snap-$(date +%F)?wait_for_completion=false&pretty"
    
  6. List all snapshots.

    console
    $ curl -X GET "http://localhost:9200/_snapshot/vultr_s3/_all?pretty"
    
  7. Restore a snapshot. Close or delete any indices you are restoring first, because Elasticsearch cannot restore over an open index. Replace SNAPSHOT_NAME with a name from the snapshot list.

    console
    $ curl -X POST "http://localhost:9200/_snapshot/vultr_s3/SNAPSHOT_NAME/_restore?pretty"
    
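Rather than creating snapshots by hand, Elasticsearch's snapshot lifecycle management (SLM) API can schedule them automatically. A hedged sketch registering a nightly policy against the vultr_s3 repository; the policy name, schedule, and retention values are examples to adjust:

```shell
# Nightly snapshot at 01:30 UTC; keep at least 5 and at most 30 snapshots,
# expiring each after 30 days. Snapshot names use date math, e.g. nightly-snap-2025.12.11.
slm_policy='{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "vultr_s3",
  "config": { "indices": ["*"], "include_global_state": true },
  "retention": { "expire_after": "30d", "min_count": 5, "max_count": 30 }
}'
# Validate the JSON locally before sending it to the cluster.
printf '%s' "$slm_policy" | python3 -m json.tool > /dev/null && echo "policy ok"
# Register the policy:
# curl -X PUT "http://localhost:9200/_slm/policy/nightly-snapshots" -H 'Content-Type: application/json' -d "$slm_policy"
```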

Best Practices and Performance Tuning

Use these recommendations to keep your single-node deployment reliable and responsive. You will configure retention with Index Lifecycle Management (ILM), standardize defaults with index templates, monitor JVM memory usage, and enable diagnostics like slow query logs. Adjust values to match your data size and workload profile.

Index Management and Performance

  1. Set up index lifecycle policies to automatically delete old indices (example: 30 days).

    console
    $ curl -X PUT "http://localhost:9200/_ilm/policy/retention_policy" \
      -H 'Content-Type: application/json' -d'
    {
      "policy": {
        "phases": {
          "delete": {
            "min_age": "30d",
            "actions": { "delete": {} }
          }
        }
      }
    }'
    
  2. Monitor JVM heap usage and cluster performance.

    console
    $ curl -X GET "http://localhost:9200/_nodes/stats/jvm?pretty"
    
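An ILM policy only takes effect on indices that reference it. One way to wire the retention_policy above to new indices is a composable index template; a sketch where the template name and index pattern are examples:

```shell
# Any new index whose name matches logs-* picks up retention_policy at creation time.
ilm_template='{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": { "index.lifecycle.name": "retention_policy" }
  }
}'
# Validate the JSON locally before sending.
printf '%s' "$ilm_template" | python3 -m json.tool > /dev/null && echo "template ok"
# Register the template:
# curl -X PUT "http://localhost:9200/_index_template/logs_retention" -H 'Content-Type: application/json' -d "$ilm_template"
```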

Troubleshooting

This section covers common issues and diagnostic commands to help resolve problems with your Elasticsearch instance.

  1. Verify Elasticsearch service is running and view recent logs.

    console
    $ sudo systemctl status elasticsearch
    $ sudo journalctl -u elasticsearch -e
    
  2. Verify cluster health.

    console
    $ curl -X GET "http://localhost:9200/_cluster/health?pretty"
    

    A green status indicates all shards are allocated, yellow means some replica shards are unallocated (normal for single-node setups), and red means at least one primary shard is unassigned, so some data is unavailable.

  3. Diagnose unallocated shards.

    console
    $ curl -X GET "http://localhost:9200/_cluster/allocation/explain?pretty"
    

Use Cases

Elasticsearch excels in various real-world scenarios:

  • Full-Text Search: Power website search, e-commerce product catalogs, and documentation portals with fast, relevant results.
  • Centralized Log Management: Collect, parse, and analyze logs from multiple sources for troubleshooting and security monitoring.
  • Operational Analytics: Track metrics, KPIs, and business data with real-time aggregations and visualizations.
  • Application Performance Monitoring: Monitor application health, trace requests, and identify bottlenecks.
  • Security Analytics: Detect threats, analyze intrusion patterns, and monitor security events across your infrastructure.

Conclusion

In this guide, you deployed Vultr's Elasticsearch Marketplace Application and configured it for production use. You secured the instance with firewall rules, basic authentication, and TLS encryption, applied OS-level tuning for performance and stability, and explored core search capabilities including indexing, querying, and aggregations. You also set up snapshots to Vultr Object Storage for data protection and integrated monitoring tools like Kibana. With these configurations in place, you can build scalable search applications, centralize log management, and perform real-time analytics on your data.
