How to Build an Amazon S3 Upload Tool with Presigned URLs

Uploading files directly from clients (browsers, mobile apps) to Amazon S3, without routing the data through your backend, improves scalability, reduces server load and bandwidth costs, and simplifies your architecture. Presigned URLs let your server generate time-limited, permission-scoped URLs that clients use to upload files directly to S3. This article walks through designing, implementing, and securing an S3 upload tool using presigned URLs, with examples for backend generation, frontend upload, multipart uploads, and best practices.
Overview: What are presigned URLs?
A presigned URL is a URL that carries a signature derived from AWS credentials, allowing whoever holds it to perform a specific S3 operation (GET, PUT, or POST) on a given object for a limited time. Your backend signs the URL using its AWS credentials; the client uses it to upload directly to S3. Key benefits:
- Direct client-to-S3 uploads reduce backend bandwidth and CPU.
- Fine-grained control — you can constrain the object key, content type, and expiration (and, with presigned POST policies, a maximum file size).
- Time-limited access — presigned URLs expire after a short window.
Architecture and flow
- Client requests an upload token (presigned URL) from your backend, providing metadata (file name, content type, size).
- Backend performs authorization and generates a presigned URL for an S3 PUT or POST.
- Backend returns the presigned URL and any required fields (for POST).
- Client uploads directly to S3 using the URL.
- (Optional) Client notifies backend that upload completed or S3 triggers a notification (SNS, SQS, Lambda) to process the uploaded object.
Components:
- Backend service (auth, signing, policy enforcement)
- AWS S3 bucket with appropriate CORS and lifecycle policies
- Client (web, mobile, or CLI)
- Optional processing (Lambda, Step Functions) and notifications
Choosing PUT vs POST vs Multipart
- PUT: Simple single-request uploads. Good for small-to-medium files. You generate a presigned PUT URL and the client uploads with it.
- POST (form upload): Allows policy-based restrictions on fields (content-type, size) and is widely used in browsers. Uses multipart/form-data and may be better for structured constraints.
- Multipart Upload with presigned URLs: Required for large files (hundreds of MBs to many GBs). You create a multipart upload on the server (or via SDK), generate presigned URLs for each part, upload parts in parallel from the client, then complete the multipart upload.
For many applications, start with PUT or POST and add multipart support for large files.
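The decision above can be sketched as a small helper. The 100 MB multipart threshold and 8 MB default part size here are illustrative assumptions; the 5 MB minimum part size and 10,000-part ceiling are S3 limits:

```python
import math

MIN_PART_SIZE = 5 * 1024 * 1024          # S3 minimum part size (every part except the last)
MULTIPART_THRESHOLD = 100 * 1024 * 1024  # assumption: switch to multipart above ~100 MB
MAX_PARTS = 10_000                       # S3 limit on parts per multipart upload

def plan_upload(file_size, part_size=8 * 1024 * 1024):
    """Return ("put", 1) for a single presigned PUT, or ("multipart", n_parts)."""
    if file_size <= MULTIPART_THRESHOLD:
        return ("put", 1)
    # Grow the part size if needed so every part meets the minimum
    # and the total stays within the part-count limit.
    part_size = max(part_size, MIN_PART_SIZE, math.ceil(file_size / MAX_PARTS))
    return ("multipart", math.ceil(file_size / part_size))
```

The thresholds are policy choices, not S3 requirements; tune them to your clients' typical file sizes and network quality.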
Security considerations
- Keep presigned URL expiration short (e.g., 1–15 minutes) depending on client network characteristics.
- Validate filename, content-type, and file size server-side before generating the URL.
- Use server-side authorization to ensure only authorized users request URLs.
- Apply S3 bucket policies to limit actions and allowed origins (CORS).
- Consider encryption: enable SSE-S3 or SSE-KMS for server-side encryption or instruct clients to use client-side encryption when needed.
- Use least-privilege IAM roles for the backend: allow only s3:PutObject, s3:AbortMultipartUpload, and s3:ListMultipartUploadParts on the specific bucket/prefix (creating, uploading parts to, and completing a multipart upload are all authorized by s3:PutObject).
- Scan or post-process uploads (virus/malware scanning) before making objects publicly accessible.
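As a sketch of the server-side validation step, here is a minimal Python check run before presigning. The allowed content types, size cap, and key layout are hypothetical; adapt them to your application:

```python
import os
import re
import uuid

ALLOWED_CONTENT_TYPES = {"image/png", "image/jpeg", "application/pdf"}  # assumption
MAX_BYTES = 50 * 1024 * 1024  # assumption: 50 MB cap

def validate_upload_request(filename, content_type, size):
    """Validate client-supplied metadata before presigning; return a safe S3 key."""
    if content_type not in ALLOWED_CONTENT_TYPES:
        raise ValueError(f"content type not allowed: {content_type}")
    if not 0 < size <= MAX_BYTES:
        raise ValueError(f"size out of range: {size}")
    # Never trust the client's path: keep only the base name, strip odd
    # characters, and prefix a random UUID to avoid collisions and key guessing.
    base = os.path.basename(filename)
    base = re.sub(r"[^A-Za-z0-9._-]", "_", base)[:128]
    return f"user-uploads/{uuid.uuid4()}/{base}"
```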
Setup: S3 bucket and IAM role
- Create an S3 bucket (e.g., my-app-uploads).
- Configure CORS to allow your web clients to PUT or POST:
```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```
- Create an IAM policy for presign generation with least privilege:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::my-app-uploads/*"
    }
  ]
}
```
Note that there are no separate IAM actions for creating, uploading parts to, or completing a multipart upload; all three are covered by s3:PutObject.
Attach this policy to the role or user your backend uses.
Backend examples: Generating presigned URLs
Below are concise examples for generating presigned PUT URLs and multipart presigned URLs. The Node.js examples use AWS SDK for JavaScript v3; the Python example uses boto3. Adjust for your environment.
Node.js (AWS SDK v3) — presigned PUT:
```javascript
// npm: @aws-sdk/client-s3, @aws-sdk/s3-request-presigner
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

export async function generatePresignedPutUrl(bucket, key, expiresSeconds = 300) {
  const cmd = new PutObjectCommand({ Bucket: bucket, Key: key });
  return await getSignedUrl(s3, cmd, { expiresIn: expiresSeconds });
}
```
Python (boto3) — presigned PUT:
```python
import boto3

s3 = boto3.client('s3', region_name='us-east-1')

def generate_presigned_put(bucket, key, expires_in=300):
    return s3.generate_presigned_url(
        ClientMethod='put_object',
        Params={'Bucket': bucket, 'Key': key},
        ExpiresIn=expires_in,
    )
```
Node.js — multipart upload presigned URLs (v3):
```javascript
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand, // used when the client reports all ETags back
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

export async function createMultipart(bucket, key) {
  const createResp = await s3.send(
    new CreateMultipartUploadCommand({ Bucket: bucket, Key: key })
  );
  return createResp.UploadId;
}

export async function getPresignedPartUrl(bucket, key, uploadId, partNumber, expiresSeconds = 3600) {
  const cmd = new UploadPartCommand({
    Bucket: bucket,
    Key: key,
    UploadId: uploadId,
    PartNumber: partNumber,
  });
  return await getSignedUrl(s3, cmd, { expiresIn: expiresSeconds });
}
```
The server should return the uploadId and an array of presigned part URLs for the client to upload parts directly.
Frontend examples
Browser — PUT using fetch:
```javascript
async function uploadFileWithPresignedUrl(file, presignedUrl) {
  const res = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!res.ok) throw new Error('Upload failed');
  return res;
}
```
Browser — multipart upload flow (simplified):
- Request uploadId and presigned part URLs from backend.
- For each part, call fetch(presignedPartUrl, { method: 'PUT', body: partBlob }).
- After all parts succeed, send the list of ETags and part numbers to backend to call CompleteMultipartUpload.
Important: For multipart, collect the ETag from each part response header and preserve the part numbers.
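When the client reports back, the backend assembles the Parts argument for CompleteMultipartUpload from the collected ETags. A minimal sketch (the {part_number: etag} input shape is an assumption about how your client reports results):

```python
def build_complete_parts(etags_by_part):
    """Given {part_number: etag} collected from part upload responses,
    build the Parts list expected by CompleteMultipartUpload,
    sorted by ascending part number as S3 requires."""
    return [
        {"PartNumber": n, "ETag": etags_by_part[n]}
        for n in sorted(etags_by_part)
    ]
```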
Handling large uploads, retries, and progress
- Use chunking and multipart uploads for large files: S3's minimum part size is 5 MB (except the last part), a single PUT is capped at 5 GB, and AWS recommends multipart above roughly 100 MB.
- Upload parts in parallel to speed up transfer.
- Implement exponential backoff and retries for transient network failures.
- For mobile/unstable networks, support resuming by listing uploaded parts (ListParts) and uploading missing ones.
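A minimal sketch of the retry and resume logic in Python (the backoff parameters are illustrative, and upload_fn stands in for whatever performs the actual part PUT):

```python
import random
import time

def upload_with_retry(upload_fn, max_attempts=5, base_delay=0.5):
    """Retry a part upload with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return upload_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

def missing_parts(total_parts, uploaded_part_numbers):
    """Given part numbers already on S3 (from ListParts), return the
    part numbers that still need uploading, for resume support."""
    return sorted(set(range(1, total_parts + 1)) - set(uploaded_part_numbers))
```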
Processing and post-upload workflows
- Use S3 Event Notifications to trigger Lambda, SQS, or SNS to process files (e.g., thumbnails, virus scan, metadata extraction).
- For processing pipelines, consider using S3 Object Lambda or Step Functions for complex workflows.
- Tag objects or store metadata in your database linking uploaded S3 keys to user records.
Cost considerations
- You save backend outbound bandwidth costs by uploading directly to S3.
- Multipart uploads add request costs per part — balance part size against concurrency and request cost.
- Lifecycle rules: move infrequently accessed items to cheaper storage classes (IA, Glacier) and expire old objects automatically.
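To reason about the part-size tradeoff, count the billed requests per multipart upload: one CreateMultipartUpload, one UploadPart per part, and one CompleteMultipartUpload. For example, a 1 GiB file costs 130 requests with 8 MiB parts but only 18 with 64 MiB parts:

```python
import math

def multipart_request_count(file_size, part_size):
    """Billed requests for one multipart upload: CreateMultipartUpload
    + one UploadPart per part + CompleteMultipartUpload."""
    return math.ceil(file_size / part_size) + 2
```

Larger parts mean fewer requests, but each retry after a failure re-sends more data, so flaky networks favor smaller parts.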
Common pitfalls and troubleshooting
- CORS errors: ensure S3 CORS allows your origin and methods.
- Signature mismatch: ensure server and client clocks are in sync and use correct region/bucket/key.
- Missing Content-Type or wrong header: when presigning with specific headers, the client must include them exactly.
- Permissions: check IAM policy and bucket policy if access denied.
- Multipart: forgetting to complete or abort an upload leaves its parts stored, and billed; set a lifecycle rule to abort incomplete multipart uploads after N days.
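Such a lifecycle rule can be expressed as the JSON accepted by PutBucketLifecycleConfiguration; the 7-day window and rule ID here are illustrative:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```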
Example end-to-end flow (concise)
- Client authenticates to your backend.
- Client requests a presigned URL for key "user-uploads/{userId}/{filename}".
- Backend validates request, enforces naming/size rules, generates presigned PUT or POST URL, returns to client.
- Client uploads directly to S3 using the presigned URL.
- S3 triggers a Lambda to process the new object; Lambda updates the application database.
Summary / Best practices
- Use presigned URLs for direct-to-S3 uploads to reduce backend load.
- Prefer short expirations, server-side validation, and least-privilege IAM.
- Use multipart presigned uploads for large files and implement retries and resume logic.
- Configure CORS, encryption, lifecycle rules, and S3 notifications for a robust pipeline.
This gives you a complete roadmap — from IAM and CORS setup, through backend presigning code, to frontend upload logic and multipart handling — to build a scalable, secure Amazon S3 upload tool using presigned URLs.