Welcome! First of all, we want to thank you for purchasing our Amazon S3 File Uploader template.
We really do appreciate every sale. If you like our work, please don't forget to rate it; ratings help us develop new and better items.
In the following sections we explain how to set it up and use it as easily as possible. If you have any questions that you feel should have been covered in this document, you can contact us through our profile page at codecanyon.net/user/berkinedesign and we'll get back to you as soon as possible. Thank you!
For questions on basic HTML, JavaScript, or CSS editing, please give your question a quick Google search or visit W3Schools, as template issues get top priority. You will need some knowledge of HTML, JavaScript, PHP, and CSS to edit this file uploader.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applications for companies all around the world.
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.
To get started you need to have the following things enabled:
The AWS PHP SDK already comes included with the app; in case you want to install it yourself, follow the instructions below:
Please visit the following link for up-to-date instructions on installing the AWS PHP SDK, straight from the Amazon Web Services documentation:
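If you do install the SDK yourself, the usual route is Composer (assuming Composer is available on your machine); the command below pulls in the SDK and generates the vendor/autoload.php file that 'start.php' requires:

```sh
# Installs the AWS SDK for PHP (v3) into ./vendor and generates vendor/autoload.php
composer require aws/aws-sdk-php
```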
In order to run the Amazon S3 File Uploader correctly, you need to update the following lines accordingly:
Create an AWS IAM user with S3 upload object policies attached, then download and save the user's Access Key ID and Secret Access Key
Create an Amazon S3 bucket for uploading objects (files)
Edit the 'config.php' file accordingly:
This is a special INI-formatted file stored under your home directory (~/.aws/credentials):
[default]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
return [
's3' => [
/* 'accessKey' => '', */ # IAM User Access Key (in case you want to hard-code it directly - NOT RECOMMENDED)
/* 'secretAccessKey' => '', */ # IAM User Secret Access Key (in case you want to hard-code it directly - NOT RECOMMENDED)
'profile' => 'default', # AWS credentials profile (by default the profile name is "default" in ~/.aws/credentials)
/* 'region' => 'eu-west-1', */ # AWS Region (select a data center as needed; 'eu-west-1' is the Ireland Region - all AWS Regions: https://docs.aws.amazon.com/general/latest/gr/rande.html) - *****UPDATE REGION NAME ACCORDINGLY*****
/* 'version' => 'latest', */
/* 'bucketName' => 'XXXXXXXXXX' */ # Amazon S3 Bucket Name (must be unique) - *****UPDATE BUCKET NAME ACCORDINGLY*****
]
];
Provide the AWS IAM Access Key and Secret Access Key hard-coded in the code (NOT RECOMMENDED):
return [
's3' => [
'accessKey' => '', # IAM User Access Key (in case you want to hard-code it directly - NOT RECOMMENDED)
'secretAccessKey' => '', # IAM User Secret Access Key (in case you want to hard-code it directly - NOT RECOMMENDED)
/* 'profile' => 'default', */ # AWS credentials profile (by default the profile name is "default" in ~/.aws/credentials)
/* 'region' => 'eu-west-1', */ # AWS Region (select a data center as needed; 'eu-west-1' is the Ireland Region - all AWS Regions: https://docs.aws.amazon.com/general/latest/gr/rande.html) - *****UPDATE REGION NAME ACCORDINGLY*****
/* 'version' => 'latest', */
/* 'bucketName' => 'XXXXXXXXXX' */ # Amazon S3 Bucket Name (must be unique) - *****UPDATE BUCKET NAME ACCORDINGLY*****
]
];
Provide the Amazon S3 bucket name on which the created IAM user has 'putObject' and 'getObject' permissions:
return [
's3' => [
/* 'accessKey' => '', */ # IAM User Access Key (in case you want to hard-code it directly)
/* 'secretAccessKey' => '', */ # IAM User Secret Access Key (in case you want to hard-code it directly)
/* 'profile' => 'default', */ # AWS credentials profile
/* 'region' => 'eu-west-1', */ # AWS Region (selected data center)
/* 'version' => 'latest', */
'bucketName' => 'XXXXXXX' # Amazon S3 Bucket Name (must be unique)
]
];
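For reference, the 'putObject' and 'getObject' permissions mentioned above can be granted to the IAM user with a minimal policy along these lines (the bucket name in the Resource ARN is a placeholder; adjust it to your bucket):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}
```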
'start.php' initializes the Amazon S3 client constructor and imports the 'config.php' file:
/*
* Initializing Amazon S3 client constructor and pass S3 parameters
*/
# AWS PHP SDK - Required to run AWS APIs
require 'vendor/autoload.php'; # Make sure path to "vendor" folder (AWS PHP SDK v3) is shown properly - ***** VERY IMPORTANT TO SHOW THE RIGHT PATH (in this example, AWS PHP SDK v3 is located in the root directory of the app) *****
# Amazon S3 configurations
$config = require('config.php');
/*
* Initializing Amazon s3 client constructor and pass S3 parameters
*/
$s3 = new Aws\S3\S3Client([
/* 'credentials' => [ # WARNING: We don't recommend hardcoding any of your security keys in the code
'key' => $config['s3']['accessKey'], # In case you still want to hard code your access & secret access keys in your code
'secret' => $config['s3']['secretAccessKey'], # Uncomment 'credentials' and comment out 'profile'
], */
'profile' => $config['s3']['profile'],
'region' => $config['s3']['region'],
'version' => $config['s3']['version']
]);
This is the main PHP file for uploading single and multiple files:
# Upload to Amazon S3 Bucket
$result = $s3->putObject([
'Bucket' => $config['s3']['bucketName'], # Bucket Name
'Key' => "codecanyon-s3/{$file_name}", # S3 Object Name
'SourceFile' => $file_tmp_name, # S3 Object Content
'ServerSideEncryption' => 'AES256', # Server Side Encryption (optional)
'StorageClass' => 'STANDARD', # S3 Storage Class - Can be one of the following: STANDARD|REDUCED_REDUNDANCY|GLACIER|STANDARD_IA|ONEZONE_IA|INTELLIGENT_TIERING|DEEP_ARCHIVE
'ACL' => 'public-read', # S3 Object Access Control List - Can be one of the following: private|public-read|public-read-write|authenticated-read|aws-exec-read|bucket-owner-read|bucket-owner-full-control
'ContentDisposition' => 'attachment' # Allows you to download the file without opening it in the browser
]);
# Status of the S3 upload (returns 200 if uploaded successfully)
$status_code = $result['@metadata']['statusCode'];
# If successfully uploaded to S3 Bucket
if ($status_code === 200) {
# Public URL of the uploaded S3 object
$public_url = $result['@metadata']['effectiveUri'];
# Store as a session variable (needed for attaching to the email)
$_SESSION['file-link'] = $public_url;
# Return Results
$response_array['public-url'] = $public_url;
$response_array['message'] = 'File Successfully Uploaded.';
$response_array['status'] = 'success';
header('Content-type: application/json');
echo json_encode($response_array, JSON_UNESCAPED_SLASHES);
} else {
$response_array['message'] = 'There was an Error during File Upload.';
$response_array['status'] = 'error';
header('Content-type: application/json');
echo json_encode($response_array, JSON_UNESCAPED_SLASHES);
}
# Status of the S3 upload (returns 200 if uploaded successfully)
$status_code = $result['@metadata']['statusCode'];
# If successfully uploaded to S3 Bucket
if ($status_code === 200) {
# Return uploaded S3 object
$s3_object = $s3->getCommand('GetObject', [
'Bucket' => $config['s3']['bucketName'],
'Key' => "codecanyon-s3/{$file_name}"
]);
# Create S3 Object Presigned Private URL with expiration time
$request = $s3->createPresignedRequest($s3_object, strtotime($_POST['timer'] . ' minutes'));
# Return URL of the S3 Object with Presigned Private URL
$presigned_url = (string)$request->getUri();
$private_url = $presigned_url;
# Time notification until which private link is valid
$time = date('Y-m-d H:i:s', strtotime($_POST['timer'] . ' minutes'));
# Time notification message for the user (note: the "GMT+1" label below is hard-coded; adjust it to your server's timezone)
$private_url_duration = "*Private link valid for " . $_POST['timer'] . " minutes until " . $time . " GMT+1";
# Return Results
$response_array['private-url'] = $private_url;
$response_array['private-url-duration'] = $private_url_duration;
$response_array['message'] = 'File Successfully Uploaded.';
$response_array['status'] = 'success';
header('Content-type: application/json');
echo json_encode($response_array, JSON_UNESCAPED_SLASHES);
# Store as session variables (needed for attaching to the email)
$_SESSION['file-link'] = $private_url;
$_SESSION['url-time'] = $private_url_duration;
} else {
$response_array['message'] = 'File was Not Uploaded Successfully.';
$response_array['status'] = 'error';
header('Content-type: application/json');
echo json_encode($response_array, JSON_UNESCAPED_SLASHES);
}
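The expiry passed to createPresignedRequest() above is an absolute Unix timestamp produced by strtotime(). A small self-contained sketch of that arithmetic (the 15-minute value stands in for the form's 'timer' field):

```php
<?php
# strtotime('N minutes', $base) returns $base + N * 60 seconds; this absolute
# timestamp is what createPresignedRequest() accepts as its expiry argument.
$timer = 15;                                   # minutes, as posted by the form
$now = time();
$expires_at = strtotime($timer . ' minutes', $now);
echo ($expires_at - $now) . "\n";              # 900 seconds
echo date('Y-m-d H:i:s', $expires_at) . "\n";  # expiry shown to the user
```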
$result = $s3->createMultipartUpload([
'Bucket' => $config['s3']['bucketName'], # Name of the bucket to which the object is being uploaded
'Key' => $file_name,
'StorageClass' => 'STANDARD', # S3 Storage Class - Can be one of the following: STANDARD|REDUCED_REDUNDANCY|GLACIER|STANDARD_IA|ONEZONE_IA|INTELLIGENT_TIERING|DEEP_ARCHIVE
'ACL' => 'public-read', # S3 Object Access Control List - Can be one of the following: private|public-read|public-read-write|authenticated-read|aws-exec-read|bucket-owner-read|bucket-owner-full-control
'ContentDisposition' => 'attachment' # Allows you to download the file without opening it in the browser
]);
$uploadId = $result['UploadId'];
# If "Public S3 File Link" radio button was selected:
# ===================================================
if ($_POST['radiobutton-multipart'] === 'public-link') {
try {
$file = fopen($file_tmp_name, 'rb');
$partNumber = 1;
# Upload the file in parts.
while (!feof($file)) {
$result = $s3->uploadPart([
'Bucket' => $config['s3']['bucketName'], # (string, required) Name of the bucket to which the object is being uploaded
'Key' => $file_name, # (string, required) Key to use for the object being uploaded
'UploadId' => $uploadId,
'PartNumber' => $partNumber,
'Body' => fread($file, 5 * 1024 * 1024), # Part content, read in 5 MB chunks - each part except the last must be between 5 MB and 5 GB, inclusive
]);
$parts['Parts'][$partNumber] = [
'PartNumber' => $partNumber,
'ETag' => $result['ETag'],
];
$partNumber++;
}
fclose($file);
# Complete the multipart upload.
$result = $s3->completeMultipartUpload([
'Bucket' => $config['s3']['bucketName'],
'Key' => $file_name,
'UploadId' => $uploadId,
'MultipartUpload' => $parts,
]);
# Status of the S3 upload (returns 200 if uploaded successfully)
$status_code = $result['@metadata']['statusCode'];
# If successfully uploaded to S3 Bucket
if ($status_code === 200) {
# Public URL of the uploaded S3 object
$public_url = $result['Location'];
# Store as a session variable (needed for attaching to the email)
$_SESSION['file-link'] = $public_url;
$response_array['public-url'] = $public_url;
$response_array['message'] = 'File Successfully Uploaded.';
$response_array['status'] = 'success';
header('Content-type: application/json');
echo json_encode($response_array, JSON_UNESCAPED_SLASHES);
} else {
$response_array['message'] = 'There was an Error during File Upload.';
$response_array['status'] = 'error';
header('Content-type: application/json');
echo json_encode($response_array, JSON_UNESCAPED_SLASHES);
}
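Since uploadPart() above reads the source file in 5 MB chunks (the minimum S3 part size for every part except the last, which may be smaller), the number of parts for a given file is easy to predict; a quick sketch:

```php
<?php
# S3 multipart constraints: parts of 5 MB-5 GB (the last part may be smaller),
# at most 10,000 parts per upload.
$part_size = 5 * 1024 * 1024;            # 5 MB, matching the fread() size above
$file_size = 23 * 1024 * 1024;           # example: a 23 MB file
$part_count = (int) ceil($file_size / $part_size);
echo $part_count . "\n";                 # 5 parts: four full parts + one 3 MB part
```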
You can use any mail-sending application you prefer; this is just an example using the built-in PHP mail() function:
Emails are sent via the standard built-in PHP mail() function. If emails are not being delivered, make sure the sendmail and mailx packages are installed and configured accordingly.
The form on the main page has only the Email field as required; the name and message content may be empty.
File links will be included automatically after a file is uploaded.
# Check if Send Email Button was clicked
if (isset($_POST['email-submit'])) {
# Uploaded Amazon S3 Object URL
if(isset($_SESSION['file-link'])) {
$message_file_link = $_SESSION['file-link'];
} else {
$message_file_link = "File has not been uploaded.";
}
# Uploaded Amazon S3 Object URL Time (Public link will be valid for 1 hour)
if(isset($_SESSION['url-time'])) {
$message_file_link_timer = $_SESSION['url-time'];
} else {
$message_file_link_timer = 'The file will be available for 1 hour until ' . date('Y-m-d H:i:s', strtotime('1 hour'));
}
# Email Message Content Body with S3 Object Link
$message_content = XXXXXXXXX;
# Check if Email was included in the Form field
if (isset($_POST['form-email'])) {
$email_to = '';
$email_from = 'XXXXXXXXXXXXXXXXXXXX'; # *****INCLUDE YOUR EMAIL ADDRESS*****
$general_email_expression = '/^[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$/';
# Check if email structure is valid
if(preg_match($general_email_expression, $_POST['form-email']) == 0) {
$email_error = 'The email address you entered is not valid. Make sure your email is correct.';
$message_box = "error";
} else {
function clean_string($string) {
$bad = array("content-type","bcc:","to:","cc:","href");
return str_replace($bad,"",$string);
}
# Assign Receiver's email address
$filtered_email = filter_input(INPUT_POST, 'form-email', FILTER_SANITIZE_STRING); # Note: FILTER_SANITIZE_STRING is deprecated as of PHP 8.1
$email_to = clean_string($filtered_email);
# Assign Receiver's name (optional)
if (isset($_POST['form-name'])) {
$email_subject = 'File Link shared by ' . filter_input(INPUT_POST, 'form-name', FILTER_SANITIZE_STRING);
$email_sender_name = filter_input(INPUT_POST, 'form-name', FILTER_SANITIZE_STRING);
} else {
$email_subject = 'File Link uploaded at XXXXXXXXXXXX'; # YOU CAN INCLUDE YOUR SITE NAME
$email_sender_name = '';
}
# User Submitted Message (optional)
if (isset($_POST['form-content'])) {
$email_user_message = filter_input(INPUT_POST, 'form-content', FILTER_SANITIZE_STRING);
} else {
$email_user_message = '';
}
# Custom Email Body
$email_body = "Hello " . $email_sender_name .
",\n\nA file has been uploaded by you or for you via our portal. \n\n" .
"Sender message: " . $email_user_message . "\n"
. $message_content . "\n"
. $message_file_link_timer .
"\n\nThank you for using our service!";
# Email Sender (update accordingly)
$email_header = 'From: ' . $email_from . "\r\n".
'Reply-To: ' . $email_from . "\r\n" .
'X-Mailer: PHP/' . phpversion();
# Google reCaptcha v2 Check
if(isset($_POST['g-recaptcha-response']) && !empty($_POST['g-recaptcha-response'])) {
# Your Secret Key
$secret = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'; # ***** INCLUDE YOUR reCAPTCHA SECRET KEY *****
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
$responseData = json_decode($verifyResponse);
# If reCaptcha returned success
if($responseData->success) {
# Send Email
if (mail($email_to, $email_subject, $email_body, $email_header)) {
$email_success = "Email has been sent successfully.";
$message_box_success = "success";
$_SESSION = array(); # Clean session variables
} else {
$email_error = "There was an error, please try again.";
$message_box = "error";
}
# If reCAPTCHA verification was not successful
}
} # End Google reCaptcha v2 Check
}
} # End Check if Email was included in the Form field
}
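A note on the validation above: the hand-written regular expression caps the top-level domain at four characters, so addresses with longer TLDs (for example .technology) would be rejected. If that matters for your users, PHP's built-in validator is an alternative; a hedged sketch:

```php
<?php
# filter_var() with FILTER_VALIDATE_EMAIL performs RFC-style address validation
# and does not impose the 2-4 character TLD limit of the manual regex.
$candidates = ['user@example.com', 'user@example.technology', 'not-an-email'];
foreach ($candidates as $address) {
    $ok = filter_var($address, FILTER_VALIDATE_EMAIL) !== false;
    echo $address . ' => ' . ($ok ? 'valid' : 'invalid') . "\n";
}
```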
Before you can upload data to Amazon S3, you must create a bucket in one of the AWS Regions to store your data in. After you create a bucket, you can upload an unlimited number of data objects to the bucket.
A bucket is owned by the AWS account that created it. By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit increase. For information about how to increase your bucket limit, see AWS Service Limits in the AWS General Reference.
Buckets have configuration properties, including their geographical region, who has access to the objects in the bucket, and other metadata.
To create an S3 bucket
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket.
On the Name and region page, type a name for your bucket and choose the AWS Region where you want the bucket to reside. Complete the fields on this page as follows:
For Bucket name, type a unique DNS-compliant name for your new bucket. Follow these naming guidelines:
The name must be unique across all existing bucket names in Amazon S3.
The name must not contain uppercase characters.
The name must start with a lowercase letter or number.
The name must be between 3 and 63 characters long.
After you create the bucket you cannot change the name, so choose wisely.
Choose a bucket name that reflects the objects in the bucket because the bucket name is visible in the URL that points to the objects that you're going to put in your bucket.
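If you create buckets programmatically, the naming guidelines above can be pre-checked before calling the API. A sketch covering the rules listed here (global uniqueness can only be confirmed by S3 itself, and this does not implement every S3 naming rule):

```php
<?php
# Checks: 3-63 characters, lowercase letters/digits/hyphens/periods only,
# starting with a lowercase letter or digit.
function looks_like_valid_bucket_name(string $name): bool {
    return (bool) preg_match('/^[a-z0-9][a-z0-9.-]{2,62}$/', $name);
}
var_dump(looks_like_valid_bucket_name('my-upload-bucket'));  # true
var_dump(looks_like_valid_bucket_name('My_Bucket'));         # false (uppercase and underscore)
```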
For Region, choose the AWS Region where you want the bucket to reside. Choose a Region close to you to minimize latency and costs, or to address regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
(Optional) If you have already set up a bucket that has the same settings that you want to use for the new bucket that you want to create, you can set it up quickly by choosing Copy settings from an existing bucket, and then choosing the bucket whose settings you want to copy.
The settings for the following bucket properties are copied: versioning, tags, and logging.
Do one of the following:
If you copied settings from another bucket, choose Create. You're done, so skip the following steps.
If not, choose Next.
On the Configure options page, you can configure the following properties and Amazon CloudWatch metrics for the bucket. Or, you can configure these properties and CloudWatch metrics later, after you create the bucket.
Versioning
Select Keep all versions of an object in the same bucket to enable object versioning for the bucket.
Server access logging
Select Log requests for access to your bucket to enable server access logging on the bucket. Server access logging provides detailed records for the requests that are made to your bucket.
Tags
You can use cost allocation bucket tags to annotate billing for your use of a bucket. Each tag is a key-value pair that represents a label that you assign to a bucket.
To add a tag, enter a Key and a Value. Choose Add another to add another tag.
Object-level logging
Select Record object-level API activity by using CloudTrail for an additional cost to enable object-level logging with CloudTrail.
Default encryption
Select Automatically encrypt objects when they are stored in S3 to enable default encryption for the bucket. You can enable default encryption for a bucket so that all objects are encrypted when they are stored in the bucket.
Object lock
Select Permanently allow objects in this bucket to be locked if you want to be able to lock objects in the bucket. Object lock requires that you enable versioning on the bucket.
CloudWatch request metrics
Select Monitor requests in your bucket for an additional cost to configure CloudWatch request metrics for the bucket.
Choose Next.
On the Set permissions page, you manage the permissions that are set on the bucket that you are creating.
Under Block public access (bucket settings), we recommend that you do not change the default settings that are listed under Block all public access. You can change the permissions after you create the bucket.
Warning
We highly recommend that you keep the default access settings for blocking public access to the bucket that you are creating. Public access means that anyone in the world can access the objects in the bucket.
If you intend to use the bucket to store Amazon S3 server access logs, in the Manage system permissions list, choose Grant Amazon S3 Log Delivery group write access to this bucket.
When you're done configuring permissions on the bucket, choose Next.
On the Review page, verify the settings. If you want to change something, choose Edit. If your current settings are correct, choose Create bucket.
You can use the AWS Management Console to create IAM users.
To create one or more IAM users (console)
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Users and then choose Add user.
Type the user name for the new user. This is the sign-in name for AWS. If you want to add more than one user at the same time, choose Add another user for each additional user and type their user names. You can add up to 10 users at one time.
Note
User names can be a combination of up to 64 letters, digits, and these characters: plus (+), equal (=), comma (,), period (.), at sign (@), and hyphen (-). Names must be unique within an account. They are not distinguished by case. For example, you cannot create two users named TESTUSER and testuser.
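The naming rules in the note above translate into a simple pattern check; a sketch (uniqueness within the account, which AWS treats case-insensitively, can only be verified by IAM itself):

```php
<?php
# IAM user names: 1-64 characters drawn from letters, digits, and + = , . @ -
function is_valid_iam_user_name(string $name): bool {
    return (bool) preg_match('/^[A-Za-z0-9+=,.@-]{1,64}$/', $name);
}
var_dump(is_valid_iam_user_name('deploy-user@example'));  # true
var_dump(is_valid_iam_user_name('bad name'));             # false (contains a space)
```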
Select the type of access this set of users will have. You can select programmatic access, access to the AWS Management Console, or both.
Select Programmatic access if the users require access to the API, AWS CLI, or Tools for Windows PowerShell. This creates an access key for each new user. You can view or download the access keys when you get to the Final page.
Select AWS Management Console access if the users require access to the AWS Management Console. This creates a password for each new user.
For Console password, choose one of the following:
Autogenerated password. Each user gets a randomly generated password that meets the account password policy in effect (if any). You can view or download the passwords when you get to the Final page.
Custom password. Each user is assigned the password that you type in the box.
Choose Next: Permissions.
On the Set permissions page, specify how you want to assign permissions to this set of new users. Choose one of the following three options:
Add user to group. Choose this option if you want to assign the users to one or more groups that already have permissions policies. IAM displays a list of the groups in your account, along with their attached policies. You can select one or more existing groups, or choose Create group to create a new group.
Copy permissions from existing user. Choose this option to copy all of the group memberships, attached managed policies, embedded inline policies, and any existing permissions boundaries from an existing user to the new users. IAM displays a list of the users in your account. Select the one whose permissions most closely match the needs of your new users.
Attach existing policies to user directly. Choose this option to see a list of the AWS managed and customer managed policies in your account. Select the policies that you want to attach to the new users or choose Create policy to open a new browser tab and create a new policy from scratch. After you create the policy, close that tab and return to your original tab to add the policy to the new user. As a best practice, we recommend that you instead attach your policies to a group and then make users members of the appropriate groups.
(Optional) Set a permissions boundary. This is an advanced feature.
Open the Set permissions boundary section and choose Use a permissions boundary to control the maximum user permissions. IAM displays a list of the AWS managed and customer managed policies in your account. Select the policy to use for the permissions boundary or choose Create policy to open a new browser tab and create a new policy from scratch.
Choose Next: Tags.
(Optional) Add metadata to the user by attaching tags as key-value pairs.
Choose Next: Review to see all of the choices you made up to this point. When you are ready to proceed, choose Create user.
To view the users' access keys (access key IDs and secret access keys), choose Show next to each password and access key that you want to see. To save the access keys, choose Download .csv and then save the file to a safe location.
Important
This is your only opportunity to view or download the secret access keys, and you must provide this information to your users before they can use the AWS API. Save the user's new access key ID and secret access key in a safe and secure place. You will not have access to the secret keys again after this step.
Provide each user with his or her credentials. On the final page you can choose Send email next to each user. Your local mail client opens with a draft that you can customize and send. The email template includes the following details for each user:
Support includes bug fixing and general problem solving for the features explained on the template's official sales page.
Once again, thank you so much for purchasing this Amazon S3 core uploader. As I said at the beginning, I'd be glad to help you if you have any questions relating to this template. No guarantees, but I'll do my best to assist. If you have a more general question relating to the templates on CodeCanyon, you might consider visiting the forums and asking your question in the "Item Discussion" section.
Regards,
Berkine Design