How to Use Vultr Object Storage with PHP
Introduction
Vultr Object Storage provides scalable cloud storage that gives applications flexibility and worldwide access. Vultr Object Storage is compatible with a subset of the S3 API; see our compatibility matrix for details. This guide walks through building a website on Nginx that allows users to upload images directly to Vultr Object Storage using PHP and JavaScript.
Prerequisites
- Deploy a new Vultr Ubuntu 20.04 (x64) cloud server
- Update the server according to the Ubuntu best practices guide
Install Support Libraries
This install uses two extra repositories, maintained by one of the Ubuntu developers, to ensure the latest versions of PHP and Nginx are available:
# sudo add-apt-repository -y ppa:ondrej/php
# sudo add-apt-repository -y ppa:ondrej/nginx-mainline
After adding the repositories, update apt and install PHP, unzip, Nginx, and Composer:
# sudo apt update
# sudo apt install -y -q php8.0-{cli,fpm,mysql,gd,soap,mbstring,bcmath,common,xml,curl,imagick}
# sudo apt install -y -q unzip
# sudo apt install -y -q nginx
# sudo curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
Add an Upload User
To add extra security, add an upload user to the system. This user runs the PHP FastCGI Process Manager (PHP-FPM) and owns the website and log files.
# sudo useradd upload
# sudo usermod -a -G upload www-data
# sudo mkdir /var/log/upload
# sudo chown -R upload:upload /var/log/upload
# sudo mkdir /var/www/upload
# sudo chown -R upload:upload /var/www/upload
# sudo chmod 770 /var/www/upload
# sudo mkdir /opt/php
# sudo chown -R upload:upload /opt/php
Configure PHP and the PHP-FPM Pool
To support file uploads, change the php.ini settings that the PHP-FPM process uses. Edit /etc/php/8.0/fpm/php.ini and change upload_max_filesize = 2M to upload_max_filesize = 10M. Next, change post_max_size = 8M to post_max_size = 10M. These two values should match; together they set the largest allowed upload size. Save the file and exit.
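If you want to confirm later that the pool picked up these limits, a small throwaway script works well. This is a hypothetical helper; once the site is live at the end of this guide, save it as /var/www/upload/check-limits.php, browse to it, then delete it:
<?php
// Hypothetical sanity check: print the upload limits the running pool uses.
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . "\n";
echo 'post_max_size: ' . ini_get('post_max_size') . "\n";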
Make a copy of the default www.conf, which defines the PHP-FPM pool:
# sudo cp /etc/php/8.0/fpm/pool.d/www.conf /etc/php/8.0/fpm/pool.d/upload.conf
Now rename the original file, essentially disabling it:
# sudo mv /etc/php/8.0/fpm/pool.d/www.conf /etc/php/8.0/fpm/pool.d/www.dist
The extension .dist is commonly used to denote the default file shipped with the distribution. Furthermore, naming it something other than .conf ensures the interpreter doesn't read the file.
Now open /etc/php/8.0/fpm/pool.d/upload.conf and make the following changes:
- Change the line [www] to [upload]
- Change user = www-data to user = upload
- Change group = www-data to group = upload
- Change listen = /run/php/php8.0-fpm.sock to listen = /run/php/php8.0-fpm-upload.sock
- Change pm = dynamic to pm = ondemand
- Change pm.max_children = 5 to pm.max_children = 22 (NOTE: This value depends on the total RAM, reserved memory, buffers, and PHP-FPM process size; a rough calculation is sketched after the restart step below.)
- Change pm.start_servers = 2 to pm.start_servers = 5
- Change pm.min_spare_servers = 2 to pm.min_spare_servers = 5
- Change pm.max_spare_servers = 3 to pm.max_spare_servers = 16
- Change ;pm.process_idle_timeout = 10s; to pm.process_idle_timeout = 10s; (remove the semicolon at the front to uncomment the line)
Save the file and exit. Restart PHP-FPM by running sudo service php8.0-fpm restart. PHP-FPM now uses the new values.
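The NOTE above about pm.max_children comes down to simple arithmetic: divide the RAM you can spare for PHP by the average size of a PHP-FPM worker. Here is a minimal PHP sketch of that calculation; all values are assumptions you should replace with your own measurements:
<?php
// Rough pm.max_children estimate: RAM available to PHP divided by the
// average PHP-FPM worker size. All values are assumptions; measure your
// own worker footprint (for example with ps) before settling on a number.
$totalRamMb  = 2048; // total server RAM
$reservedMb  = 512;  // reserved for the OS, Nginx, and other services
$avgWorkerMb = 64;   // typical resident size of one PHP-FPM worker
echo floor(($totalRamMb - $reservedMb) / $avgWorkerMb) . "\n"; // prints 24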
Configure Nginx
Create Snippets
To help secure Nginx, add a snippets.d directory containing additional configuration files that the web server includes.
# sudo mkdir /etc/nginx/snippets.d
After creating the directory, create the support files with the following content:
# nano /etc/nginx/snippets.d/deny-git.conf
location ~ /\.git {
    deny all;
}
# nano /etc/nginx/snippets.d/deny-composer.conf
location ~ /vendor/\.cache {
    deny all;
}
location ~ /(composer\.json|composer\.lock) {
    deny all;
}
# nano /etc/nginx/snippets.d/deny-htaccess.conf
location ~ /\.ht {
    deny all;
}
# nano /etc/nginx/snippets.d/deny-env.conf
location ~ /\.env {
    deny all;
}
# nano /etc/nginx/snippets.d/deny-license-readme.conf
location ~ /(LICENSE\.md|README\.md) {
    deny all;
}
# nano /etc/nginx/snippets.d/add-headers.conf
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
Create Nginx Sites
To secure Nginx, create two new sites. Using an editor, create /etc/nginx/sites-available/no-default with the following:
server {
    listen 80 default_server deferred;
    listen [::]:80 default_server deferred;
    server_name _;

    # Return 444 (No Response)
    return 444;
}
This prevents the server from responding to requests that don't match the fully qualified domain name.
Create another file, /etc/nginx/sites-available/upload, with the following (make sure to change example.com and www.example.com to your domain in the server_name directive and in the access_log and error_log directives):
server {
    server_name www.example.com example.com;
    root /var/www/upload;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.0-fpm-upload.sock;
    }

    error_page 404 /;
    include snippets.d/*.conf;
    access_log /var/log/nginx/ssl.example.access.log combined;
    error_log /var/log/nginx/ssl.example.error.log;
}
Link the New Sites
After creating the configuration files, unlink the default site and link the two new sites:
# sudo rm /etc/nginx/sites-enabled/default
# sudo ln -s /etc/nginx/sites-available/no-default /etc/nginx/sites-enabled/no-default
# sudo ln -s /etc/nginx/sites-available/upload /etc/nginx/sites-enabled/upload
Restart Nginx and Test
After saving the supplemental files and making the site configuration changes, check the Nginx configuration by running:
# sudo nginx -t
If there are no errors, Nginx returns:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
At this point, restart Nginx by running sudo service nginx restart.
Create Object Storage
Now that the web server is ready, create Vultr Object Storage by logging in to my.vultr.com and navigating to Products -> Objects. Add Object Storage and give it a label. After creation, take note of the Hostname, the Secret Key, and the Access Key.
Create a Configuration File
For the remaining tasks, change to the upload user by running su - upload.
When changing to this user, you might receive a warning. This is OK.
su: warning: cannot change directory to /home/upload: No such file or directory
Using the credentials from Object Storage, create /opt/php/config.php with the following contents, changing the values to suit your preferences:
<?php
# This is the Hostname value in the Vultr Portal
define('S3_HOST_NAME', 'ewr1.vultrobjects.com');
# This is the Secret Key in the Vultr Portal
define('VULTR_SECRET_KEY', 'ABCDEFghijklMNOPqrstHhMv2bRrUQCDzOEz7VBX');
# This is the Access Key in the Vultr Portal
define('VULTR_ACCESS_KEY', '123456789L58XDO5VQDQ');
# Log File location
define('LOG_FILE', '/var/log/upload/upload.log');
# Log File Tag
define('LOG_TAG', 'upload');
# Bucket Name
define('BUCKET_NAME', 'upload-bucket');
# Bucket Default ACL
# This should be either private or public-read
define('BUCKET_ACL', 'private');
# If this is true it will save a copy of the file locally as well as in S3
define('SAVE_LOCAL', false);
# Local Directory (web root + this directory)
define('LOCAL_DIR', 'uploads');
After changing the values to accommodate your needs, save and close the file.
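Optionally, verify the file parses and defines everything the portal expects. This hypothetical check script (for example /opt/php/check-config.php, run with php /opt/php/check-config.php) uses only the constants defined above:
<?php
// Hypothetical sanity check: confirm config.php defines every constant
// the upload portal references.
include_once '/opt/php/config.php';
$required = ['S3_HOST_NAME', 'VULTR_SECRET_KEY', 'VULTR_ACCESS_KEY',
    'LOG_FILE', 'LOG_TAG', 'BUCKET_NAME', 'BUCKET_ACL', 'SAVE_LOCAL', 'LOCAL_DIR'];
foreach ($required as $constant) {
    echo $constant . ': ' . (defined($constant) ? 'ok' : 'MISSING') . "\n";
}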
Configure the Web Portal
The following steps are still run as the upload user.
Update Composer Packages
Use Composer to get the core S3 API files. To download them, change to the website directory and run the following:
$ cd /var/www/upload
$ composer require aws/aws-sdk-php
$ composer require monolog/monolog
This installs the AWS SDK for PHP and the Monolog logging library, which are used in the following steps.
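With the SDK installed, you can optionally confirm that the credentials in /opt/php/config.php work before building the portal. This is a minimal, hypothetical test script (run from /var/www/upload as the upload user with php test-connection.php) that lists any existing buckets:
<?php
// Hypothetical connectivity test: list buckets using the config.php credentials.
include_once 'vendor/autoload.php';
include_once '/opt/php/config.php';

$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'us-east-1',
    'endpoint' => 'https://' . S3_HOST_NAME,
    'credentials' => [
        'key' => VULTR_ACCESS_KEY,
        'secret' => VULTR_SECRET_KEY,
    ]
]);

foreach (($s3->listBuckets()['Buckets'] ?? []) as $bucket) {
    echo $bucket['Name'] . "\n";
}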
Create the Default Structure
Ensure you're in the website directory:
$ cd /var/www/upload
Create a directory structure to support the uploads:
$ mkdir css
$ mkdir functions
$ mkdir js
Create the Cascading Style Sheet
To support the upload, add a style sheet. Create /var/www/upload/css/main.css with the following:
#drop-area {
border: 2px dashed #ccc;
border-radius: 20px;
width: 480px;
font-family: sans-serif;
margin: 100px auto;
padding: 20px;
}
#drop-area.highlight {
border-color: purple;
}
p {
margin-top: 0;
}
.my-form {
margin-bottom: 10px;
}
#gallery {
margin-top: 10px;
}
#gallery img {
width: 150px;
margin-bottom: 10px;
margin-right: 10px;
vertical-align: middle;
}
.button {
display: inline-block;
padding: 10px;
background: #ccc;
cursor: pointer;
border-radius: 5px;
border: 1px solid #ccc;
}
.button:hover {
background: #ddd;
}
#fileElem {
display: none;
}
Save the file and exit.
Create the Supporting JavaScript
To complete the upload task, JavaScript provides the front-end logic that sends files to the server. Create /var/www/upload/js/main.js with the following:
let dropArea = document.getElementById('drop-area')
let filesDone = 0
let filesToDo = 0
let progressBar = document.getElementById('progress-bar')
;['dragenter', 'dragover', 'dragleave', 'drop'].forEach(eventName => {
dropArea.addEventListener(eventName, preventDefaults, false)
})
function preventDefaults(e) {
e.preventDefault()
e.stopPropagation()
}
;['dragenter', 'dragover'].forEach(eventName => {
dropArea.addEventListener(eventName, highlight, false)
})
;['dragleave', 'drop'].forEach(eventName => {
dropArea.addEventListener(eventName, unhighlight, false)
})
function highlight(e) {
dropArea.classList.add('highlight')
}
function unhighlight(e) {
dropArea.classList.remove('highlight')
}
dropArea.addEventListener('drop', handleDrop, false)
function handleDrop(e) {
let dt = e.dataTransfer
let files = dt.files
handleFiles(files)
}
function handleFiles(files) {
files = [...files]
initializeProgress(files.length)
files.forEach(uploadFile)
}
function uploadFile(file) {
let url = '/php.upload.php'
let formData = new FormData()
formData.append('file', file)
fetch(url, {
method: 'POST',
body: formData
})
.then(handleErrors)
.then(function(response) {
if (!response.ok) {
throw Error(response.statusText);
}
return response;
}).then(function(response) {
previewFile(file);
progressDone();
}).catch(function(error) {
console.log(error);
});
}
function handleErrors(response) {
if (!response.ok) {
alert("The file did NOT get uploaded.");
}
return response;
}
function previewFile(file) {
let reader = new FileReader()
reader.readAsDataURL(file)
reader.onloadend = function() {
let img = document.createElement('img')
img.src = reader.result
document.getElementById('gallery').appendChild(img)
}
}
function initializeProgress(numfiles) {
progressBar.value = 0
filesDone = 0
filesToDo = numfiles
}
function progressDone() {
filesDone++
progressBar.value = filesDone / filesToDo * 100
}
Save the file and exit.
Create the PHP Functions
Create the supporting PHP functions file /var/www/upload/functions/main.php with the following content:
<?php
function generatev4GUID()
{
if (function_exists('com_create_guid') === true) {
return trim(com_create_guid(), '{}');
}
// Generate 16 random bytes, then set the UUID version and variant bits
$data = openssl_random_pseudo_bytes(16);
$data[6] = chr(ord($data[6]) & 0x0f | 0x40); // version 4
$data[8] = chr(ord($data[8]) & 0x3f | 0x80); // RFC 4122 variant
return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($data), 4));
}
function putS3Object($s3, $log, $bucket, $key, $file)
{
try {
$result = $s3->putObject([
'Bucket' => $bucket,
'Key' => $key,
'SourceFile' => $file,
'ACL' => BUCKET_ACL
]);
} catch (Aws\S3\Exception\S3Exception $e) {
$log->error($e->getAwsErrorMessage() . " " . __FILE__ . " " . __LINE__);
header('HTTP/1.1 500 Internal Server Error');
exit;
}
}
function createS3Bucket($s3, $log, $bucketName)
{
$log->info("Creating bucket $bucketName");
try {
$result = $s3->createBucket([
'Bucket' => $bucketName,
]);
} catch (Aws\S3\Exception\S3Exception $e) {
$log->error($e->getAwsErrorMessage() . " " . __FILE__ . " " . __LINE__);
header('HTTP/1.1 500 Internal Server Error');
exit;
}
}
function loadS3Buckets($s3, $log)
{
$log->debug("Loading Bucket Names");
$buckets = $s3->listBuckets();
foreach ($buckets['Buckets'] as $bucket) {
$GLOBALS['bucketArray'][] = $bucket['Name'];
}
}
Save the file and exit.
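For illustration only, this is how the next step combines generatev4GUID() with a file extension to build a collision-resistant object key:
<?php
// Illustration: build an object key the same way php.upload.php does below.
include_once 'functions/main.php';
$key = generatev4GUID() . '.png';
echo $key . "\n"; // e.g. 9b2f1c3a-7d4e-4a1b-8c5d-2e6f0a9b3c7d.png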
Create the Main Process File
Create /var/www/upload/php.upload.php, the file that receives the upload data from JavaScript (sent by the uploadFile(file) function in main.js above). Add the following to this file:
<?php
include_once "vendor/autoload.php";
include_once "/opt/php/config.php";
include_once "functions/main.php";
$log = new Monolog\Logger(LOG_TAG);
$log->pushHandler(new Monolog\Handler\StreamHandler(LOG_FILE, Monolog\Logger::INFO));
$log->info("File Uploader called from " . $_SERVER['REMOTE_ADDR']);
$validExtensions = [
"jpg",
"gif",
"jpeg",
"png"
];
$s3 = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'endpoint' => 'https://' . S3_HOST_NAME,
'credentials' => [
'key' => VULTR_ACCESS_KEY,
'secret' => VULTR_SECRET_KEY,
]
]);
$bucketArray = [];
loadS3Buckets($s3, $log);
if (!in_array(BUCKET_NAME, $bucketArray)) {
$log->info(BUCKET_NAME . " didn't exist. Creating...");
createS3Bucket($s3, $log, BUCKET_NAME);
$s3->waitUntil('BucketExists', ['Bucket' => BUCKET_NAME]);
}
$fdata = $_FILES['file'];
$tmp_name = $fdata['tmp_name'];
$real_name = $fdata['name'];
$error = $fdata['error'];
$size = $fdata['size'];
$ftype = $fdata['type'];
$ext = strtolower(pathinfo($real_name, PATHINFO_EXTENSION));
if (!in_array($ext, $validExtensions)) {
$log->error($real_name . " was attempted to be uploaded but had an illegal extension.");
header('HTTP/1.1 500 Internal Server Error');
exit;
}
$fileGuid = generatev4GUID();
$nfn = $fileGuid . "." . $ext;
$log->info($nfn);
if ($error == 0) {
if (SAVE_LOCAL) {
$upload_file = $_SERVER['DOCUMENT_ROOT'] . "/" . LOCAL_DIR . "/" . $nfn;
if (!is_dir($_SERVER['DOCUMENT_ROOT'] . "/" . LOCAL_DIR)) {
mkdir($_SERVER['DOCUMENT_ROOT'] . "/" . LOCAL_DIR, 0755, true);
}
move_uploaded_file($tmp_name, $upload_file);
putS3Object($s3, $log, BUCKET_NAME, $nfn, $upload_file);
$log->info($real_name . " was saved locally as " . $upload_file . " and uploaded to the bucket as " . $nfn);
} else {
putS3Object($s3, $log, BUCKET_NAME, $nfn, $tmp_name);
unlink($tmp_name);
$log->info($real_name . " was uploaded as " . $nfn);
}
} else {
$log->error($real_name . " was attempted to be uploaded");
$log->error("Something went wrong along the way");
header('HTTP/1.1 500 Internal Server Error');
exit;
}
Save the file and exit.
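Before testing in a browser, you can optionally exercise the endpoint from the command line. This hypothetical smoke test posts a local image the same way main.js does; swap in your domain and the path to an existing image:
<?php
// Hypothetical smoke test: POST a local image to the upload endpoint,
// mimicking the multipart request that main.js sends.
$ch = curl_init('http://example.com/php.upload.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, [
    'file' => new CURLFile('/tmp/test.png', 'image/png', 'test.png'),
]);
curl_exec($ch);
echo curl_getinfo($ch, CURLINFO_RESPONSE_CODE) . "\n"; // 200 on success
curl_close($ch);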
Create the HTML Portal
The last file that ties it all together is the HTML form that references the files above. Create /var/www/upload/index.php with the following content:
<!DOCTYPE html>
<html>
<head>
<title>
Vultr Object Storage Image Uploader
</title>
<link rel="stylesheet" type="text/css" href="/css/main.css">
</head>
<body>
<div id="drop-area">
<form class="my-form">
<p>Upload multiple files using the file dialog or by dragging and dropping images into this dashed region</p>
<input type="file" id="fileElem" multiple accept="image/*" onchange="handleFiles(this.files)">
<label class="button" for="fileElem">Select files manually</label>
</form>
<progress id="progress-bar" max=100 value=0></progress>
<div id="gallery"></div>
</div>
<script src="/js/main.js"></script>
</body>
</html>
Save the file and exit.
Configuring Settings
Leaving the default settings creates a log file at /var/log/upload/upload.log that records every upload. By default, the files are not stored on the local drive; they are uploaded to Object Storage and are only accessible by logging in. This behavior is configured in /opt/php/config.php, which also controls the Object Storage permissions. Setting BUCKET_ACL to public-read makes the files publicly available. The public location is https://S3_HOST_NAME/BUCKET_NAME/FILENAME.ext
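If you keep BUCKET_ACL set to private, you can still grant temporary access to individual objects with pre-signed URLs, a standard feature of the AWS SDK for PHP (check the compatibility matrix to confirm support on Vultr's endpoint). A minimal sketch, where the object key is a placeholder:
<?php
// Sketch: generate a pre-signed URL for a private object, valid for 20 minutes.
include_once 'vendor/autoload.php';
include_once '/opt/php/config.php';

$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'us-east-1',
    'endpoint' => 'https://' . S3_HOST_NAME,
    'credentials' => ['key' => VULTR_ACCESS_KEY, 'secret' => VULTR_SECRET_KEY]
]);

$command = $s3->getCommand('GetObject', [
    'Bucket' => BUCKET_NAME,
    'Key' => 'example-object-key.png', // placeholder object key
]);
$request = $s3->createPresignedRequest($command, '+20 minutes');
echo (string) $request->getUri() . "\n";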
Securing Nginx with LetsEncrypt
After configuring the web server, secure it with a LetsEncrypt TLS certificate.
Conclusion
Vultr Object Storage is a great storage tool, and combined with PHP and a simple web server, it makes a robust storage and delivery solution. Technologies like these make files more available and accessible for people around the world.