How to Set Up a Private File Vault for Your SaaS App
A step-by-step architecture guide for building secure, multi-tenant file storage in your SaaS app — with folder isolation, access control middleware, signed URLs, and audit logging.
The Problem: Your SaaS App Needs User File Uploads
Every SaaS application eventually reaches the moment: a customer needs to upload a file. Maybe it's an invoice in your accounting tool, a contract in your legal platform, or a medical record in your health-tech app. What starts as a simple "add an upload button" ticket quickly becomes a question of architecture, security, and compliance.
The stakes are higher than they look. User-uploaded files are often the most sensitive data in your entire application. A database breach is bad, but a file storage breach can expose signed contracts, financial statements, or protected health information — the kind of data that triggers regulatory consequences and front-page news.
This guide walks through the architecture of a proper multi-tenant file vault — from the naive approach that most teams start with to a production-grade design with folder isolation, access control, signed URL generation, and audit logging. The code examples are in Node.js, but the patterns apply to any backend stack.
The Naive Approach and Why It Fails
Most teams start with the simplest thing that works: a single S3 bucket, public read access (or security-by-obscurity with long random filenames), and a flat folder structure.
```
my-app-uploads/
  a3f9b2c1-invoice.pdf
  d7e4a1b8-contract.docx
  f1c3d9e2-medical-record.pdf
```

The assumption is that nobody will guess the random filename, so the file is effectively private. This is security by obscurity, and it fails in predictable ways:
- URL leakage. The file URL ends up in browser history, server logs, analytics tools, Slack messages, and support tickets. Anyone who sees the URL has permanent access.
- No access revocation. If a user's account is deleted or their permissions change, the file is still publicly accessible at the same URL forever.
- No tenant isolation. All users' files live in the same flat namespace. A bug in your application code — a missing `WHERE user_id = ?` clause — can expose one tenant's files to another.
- No audit trail. You have no record of who accessed which file and when. When a compliance auditor asks for access logs, you have nothing.
- Enumeration risk. If your bucket has listing enabled (a common misconfiguration), an attacker can discover every file in your application with a single API call.
This approach is fine for a hackathon project or a prototype. It is not fine for anything that handles real user data. The good news is that building a proper file vault is not significantly more complex — it just requires deliberate architecture choices upfront.
Architecture: Multi-Tenant File Isolation
The foundation of a secure file vault is tenant isolation at the storage layer. Each tenant's files should live in their own namespace, making cross-tenant access structurally impossible rather than relying on application-level checks alone.
Folder-per-tenant structure
```
my-app-vault/
  tenant-a1b2c3/
    invoices/
      2026-01-invoice.pdf
    contracts/
      nda-acme-corp.docx
  tenant-d4e5f6/
    invoices/
      2026-01-invoice.pdf
    medical/
      patient-record-7890.pdf
```

Each tenant gets a top-level prefix (folder) in the bucket. Within that prefix, you can organize by document type, project, or any structure that makes sense for your domain.
Naming conventions that prevent collisions
Use your internal tenant ID (UUID or database primary key) as the folder name — not the tenant's display name, domain, or any user-controlled string. This prevents collisions and path traversal issues:
```javascript
// Good: deterministic, collision-free
const goodPrefix = `tenants/${tenant.id}/`;

// Bad: user-controlled, could collide or contain path traversal
const badPrefix = `tenants/${tenant.companyName}/`;
```

Private bucket, no public access
The bucket itself should have all public access blocked. No public ACLs, no public bucket policies, no static website hosting. Every file access goes through your application, which generates signed URLs after verifying authorization.
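For reference, S3's Block Public Access feature has four settings, and a vault bucket should enable all of them. A sketch of the configuration object you would pass to the SDK's PutPublicAccessBlock operation (the bucket name is a placeholder):

```javascript
// All four S3 Block Public Access settings, enabled. This is the shape
// of the request you would send via PutPublicAccessBlock; the bucket
// name is a placeholder for illustration.
const publicAccessBlock = {
  Bucket: "my-app-vault",
  PublicAccessBlockConfiguration: {
    BlockPublicAcls: true,       // reject requests that set public ACLs
    IgnorePublicAcls: true,      // ignore any public ACLs already present
    BlockPublicPolicy: true,     // reject bucket policies granting public access
    RestrictPublicBuckets: true, // restrict access even if a public policy slips through
  },
};
```

Enabling the same four settings at the account level gives you a safety net if someone creates a bucket outside your provisioning process.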
This pattern maps directly to how managed private file storage services work — files are stored in private buckets, organized by project or tenant, and accessed exclusively through authenticated API calls that return time-limited signed URLs.
Access Control Layer: Ownership Verification Middleware
The access control layer is the most critical component. It sits between your API and the storage layer, ensuring that every file operation is authorized before it executes.
Here's a practical implementation in Express.js:
```javascript
// middleware/fileAccess.js
import { db } from "../db.js";

/**
 * Middleware that verifies the requesting user owns the file
 * (or has been granted access via a share).
 */
export function requireFileAccess(allowedRoles = ["owner"]) {
  return async (req, res, next) => {
    const { fileId } = req.params;
    const userId = req.auth.userId;
    const tenantId = req.auth.tenantId;

    try {
      // Fetch the file record and any share granted to this user
      const file = await db.query(
        `SELECT f.id, f.tenant_id, f.folder_id, f.s3_key, f.created_by,
                fs.user_id AS shared_with, fs.role AS share_role
           FROM files f
           LEFT JOIN file_shares fs
             ON fs.file_id = f.id AND fs.user_id = $2
          WHERE f.id = $1`,
        [fileId, userId]
      );

      if (!file.rows[0]) {
        return res.status(404).json({ error: "File not found" });
      }

      const record = file.rows[0];

      // Check 1: tenant isolation — never cross tenant boundaries
      if (record.tenant_id !== tenantId) {
        return res.status(404).json({ error: "File not found" });
      }

      // Check 2: user-level ownership or share-based access
      const isOwner = record.created_by === userId;
      const hasShare = record.shared_with === userId
        && allowedRoles.includes(record.share_role);

      if (!isOwner && !hasShare) {
        return res.status(403).json({ error: "Access denied" });
      }

      // Attach file metadata for downstream handlers
      req.file = record;
      next();
    } catch (err) {
      // Fail closed: an unexpected error denies access rather than granting it
      next(err);
    }
  };
}
```

Key design decisions in this middleware:
- Tenant check returns 404, not 403. If a user from tenant A tries to access tenant B's file, the response is "file not found" — not "access denied." A 403 response leaks information (the file exists), while a 404 reveals nothing.
- Database-level verification. The middleware queries the database for every request rather than relying on cached permissions. This ensures access revocation takes effect immediately.
- Share-based access. The `file_shares` table allows granting access to specific users without exposing the file publicly. Roles (e.g., "viewer," "editor") control what operations are permitted.
- Fail-closed. If the database query fails or returns unexpected data, the middleware denies access by default.
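Because the authorization decision is pure logic, it can be factored out and unit-tested without Express or a database. A minimal sketch under that assumption; the function name and record shape are illustrative, not taken from the middleware above:

```javascript
// Pure access decision mirroring the middleware's checks. Returns the
// HTTP status the middleware would respond with, or 200 when access is
// allowed. All names here are illustrative.
function decideAccess(record, { userId, tenantId }, allowedRoles = ["owner"]) {
  if (!record) return 404;                        // no such file
  if (record.tenant_id !== tenantId) return 404;  // tenant isolation: hide existence
  const isOwner = record.created_by === userId;
  const hasShare = record.shared_with === userId
    && allowedRoles.includes(record.share_role);
  return isOwner || hasShare ? 200 : 403;         // deny unless explicitly allowed
}
```

Keeping the decision pure makes it easy to cover the cross-tenant, revoked-share, and unknown-file cases with table-driven tests.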
Wire it into your routes:
```javascript
// routes/files.js
import { Router } from "express";
import { requireFileAccess } from "../middleware/fileAccess.js";
import { generateSignedUrl } from "../services/storage.js";

const router = Router();

router.get("/files/:fileId/download",
  requireFileAccess(["owner", "viewer", "editor"]),
  async (req, res) => {
    const signedUrl = await generateSignedUrl(req.file.s3_key);
    res.json({ url: signedUrl, expiresIn: 600 });
  }
);

export default router;
```

If you're using a managed file upload API, the service typically handles tenant isolation and access control at the platform level — but understanding how this middleware works helps you design the right permission model regardless of where files are stored.
Signed URL Generation: Short-Lived Access Tokens for Files
Once access is verified, you need a way to let the user actually download or upload the file without proxying every byte through your server. This is where signed URLs come in.
Download URLs
Generate a short-lived signed URL that grants read access to a specific S3 object:
```javascript
// services/storage.js
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: process.env.AWS_REGION });

export async function generateSignedUrl(s3Key, expiresIn = 600) {
  const command = new GetObjectCommand({
    Bucket: process.env.PRIVATE_BUCKET,
    Key: s3Key,
  });
  return getSignedUrl(s3, command, { expiresIn });
}
```

Upload URLs for direct-to-storage uploads
For file uploads, generate a presigned PUT URL so the client uploads directly to S3, bypassing your server entirely:
```javascript
// services/storage.js (continued from above)
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { randomUUID } from "crypto";
import path from "path";

export async function generateUploadUrl(tenantId, filename, contentType) {
  // Sanitize: use a UUID for the key, keep only the file extension
  const ext = path.extname(filename).replace(/[^a-zA-Z0-9.]/g, "");
  const s3Key = `tenants/${tenantId}/uploads/${randomUUID()}${ext}`;

  const command = new PutObjectCommand({
    Bucket: process.env.PRIVATE_BUCKET,
    Key: s3Key,
    ContentType: contentType,
  });

  const url = await getSignedUrl(s3, command, { expiresIn: 900 });
  return { url, s3Key };
}
```

Important safeguards:
- Short expiration. 10 minutes for downloads, 15 minutes for uploads. The shorter the better — a leaked signed URL is only useful until it expires.
- Content-Type enforcement. Set `ContentType` on upload URLs to prevent users from uploading executable files disguised as PDFs.
- Post-upload validation. After the client confirms the upload is complete, scan the file server-side (virus scanning, file type verification) before marking it as available.
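One way to sketch the file type verification step is to compare the stored object's leading bytes (its magic number) against the declared content type. The signatures below are the standard ones for PDF, PNG, and JPEG; the helper name and map are illustrative:

```javascript
// Post-upload validation sketch: verify that a file's magic bytes match
// its declared content type before marking it available. Unknown types
// fail closed. Extend the signature map as your app needs.
const MAGIC = {
  "application/pdf": [0x25, 0x50, 0x44, 0x46], // "%PDF"
  "image/png":       [0x89, 0x50, 0x4e, 0x47], // "\x89PNG"
  "image/jpeg":      [0xff, 0xd8, 0xff],
};

function matchesDeclaredType(buffer, contentType) {
  const sig = MAGIC[contentType];
  if (!sig) return false; // unknown type: deny by default
  return sig.every((byte, i) => buffer[i] === byte);
}
```

In the upload-confirmation handler you would read the first few bytes of the stored object and reject the file on mismatch; virus scanning remains a separate, additional step.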
If managing S3 signing keys, bucket policies, and expiration logic feels like overhead, managed services like files.link handle signed URL generation automatically — private folders return CloudFront-signed URLs with a 10-minute expiration, and uploads go through an authenticated API endpoint with built-in validation.
Audit Logging: Who Accessed What and When
Audit logging is not optional for a file vault — it's a compliance requirement in most regulated industries and a debugging lifesaver in all of them.
Every file operation should produce an immutable log entry:
```javascript
// services/auditLog.js
import { db } from "../db.js";

export async function logFileAccess(event) {
  await db.query(
    `INSERT INTO file_audit_log
       (tenant_id, user_id, file_id, action, ip_address, user_agent, metadata, created_at)
     VALUES ($1, $2, $3, $4, $5, $6, $7, NOW())`,
    [
      event.tenantId,
      event.userId,
      event.fileId,
      event.action, // 'download', 'upload', 'delete', 'share', 'view'
      event.ipAddress,
      event.userAgent,
      JSON.stringify(event.metadata || {}),
    ]
  );
}
```

Integrate it into your route handlers so logging happens automatically:
```javascript
router.get("/files/:fileId/download",
  requireFileAccess(["owner", "viewer", "editor"]),
  async (req, res) => {
    const signedUrl = await generateSignedUrl(req.file.s3_key);

    await logFileAccess({
      tenantId: req.auth.tenantId,
      userId: req.auth.userId,
      fileId: req.params.fileId,
      action: "download",
      ipAddress: req.ip,
      userAgent: req.headers["user-agent"],
    });

    res.json({ url: signedUrl, expiresIn: 600 });
  }
);
```
);What to log:
- Downloads — who accessed the file, when, from what IP
- Uploads — who uploaded, file size, content type
- Deletions — who deleted, with the original file metadata preserved
- Share events — who shared with whom, what role was granted
- Permission changes — who modified access, what changed
What NOT to log:
- The signed URL itself (treat it as a secret)
- The file contents (obvious, but worth stating)
Store audit logs in a separate table or service with append-only permissions. The application user that writes logs should not have permission to update or delete log entries.
Compliance Considerations: GDPR, HIPAA, and Encryption
If your SaaS handles files in regulated industries, the architecture above is a starting point — but compliance adds specific requirements.
Encryption at rest
S3 supports server-side encryption with three key management options: SSE-S3 (Amazon-managed keys), SSE-KMS (AWS Key Management Service), and SSE-C (customer-provided keys). For most SaaS applications, SSE-S3 or SSE-KMS is sufficient. If your compliance framework requires you to control the encryption keys directly, use SSE-KMS with a customer-managed KMS key.
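As a concrete sketch, these are the per-object parameters for an SSE-KMS upload; `ServerSideEncryption` and `SSEKMSKeyId` are the standard S3 parameter names, while the bucket, key, and KMS key ARN are placeholders:

```javascript
// Per-object SSE-KMS parameters for an S3 PutObject call (sketch).
// The bucket, object key, and KMS key ARN below are placeholders.
const putParams = {
  Bucket: "my-app-vault",
  Key: "tenants/tenant-a1b2c3/invoices/2026-01-invoice.pdf",
  ServerSideEncryption: "aws:kms",  // use "AES256" for SSE-S3 and omit the key ID
  SSEKMSKeyId: "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
};
```

In practice, setting a bucket-level default encryption configuration is simpler than passing these parameters on every request.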
Encryption in transit
Always enforce HTTPS. For S3, this means setting a bucket policy that denies any request where `aws:SecureTransport` is false. URLs presigned with the AWS SDK point at HTTPS endpoints by default.
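A sketch of that bucket policy, expressed here as the JavaScript object you would serialize into the policy document (the bucket name is a placeholder):

```javascript
// Bucket policy that denies any request made over plain HTTP (sketch).
// "aws:SecureTransport" is the standard IAM condition key for TLS;
// the bucket name is a placeholder.
const denyInsecureTransport = {
  Version: "2012-10-17",
  Statement: [{
    Sid: "DenyInsecureTransport",
    Effect: "Deny",
    Principal: "*",
    Action: "s3:*",
    Resource: [
      "arn:aws:s3:::my-app-vault",    // the bucket itself
      "arn:aws:s3:::my-app-vault/*",  // every object in it
    ],
    Condition: { Bool: { "aws:SecureTransport": "false" } },
  }],
};
```

You would apply it with `JSON.stringify(denyInsecureTransport)` as the policy body in a PutBucketPolicy call.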
GDPR: Right to erasure
Article 17 of the GDPR gives users the right to have their data deleted. For a file vault, this means you need a reliable way to delete all of a user's files and their associated metadata. The folder-per-tenant architecture makes this straightforward: delete every object under the tenant's S3 prefix, then remove the tenant's database records.
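A sketch of the erasure loop, with the storage client injected so the logic is testable without AWS; `list` and `remove` are assumed wrappers around paginated listing and batch deletion (with the AWS SDK, ListObjectsV2Command and DeleteObjectsCommand respectively):

```javascript
// Erase every object under a tenant's prefix (sketch for GDPR Art. 17).
// `list(prefix, token)` returns one page as { keys, nextToken };
// `remove(keys)` deletes a batch (S3 caps DeleteObjects at 1000 keys).
async function eraseTenantPrefix(prefix, { list, remove }) {
  let token;
  let deleted = 0;
  do {
    const page = await list(prefix, token);
    if (page.keys.length > 0) {
      await remove(page.keys);
      deleted += page.keys.length;
    }
    token = page.nextToken;
  } while (token);
  return deleted; // the caller separately deletes the tenant's database rows
}
```

Note that on a versioned bucket you must also delete each object's versions, and that lifecycle rules or backups may retain copies you need to account for.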
HIPAA: Access controls and audit trails
HIPAA requires access controls, audit logging, and encryption — all covered by the architecture in this guide. You will also need a Business Associate Agreement (BAA) with your storage provider. AWS offers BAAs for S3; if you use a managed service, verify they offer one as well.
Data residency
Some regulations (particularly in the EU and healthcare) require data to be stored in specific geographic regions. When choosing S3 regions or a managed document storage provider, verify that the storage region aligns with your compliance requirements.
DIY vs Managed: Choosing Your Approach
| Dimension | DIY (S3 + Custom Code) | Managed Service |
|---|---|---|
| Setup time | Days to weeks | Hours |
| Tenant isolation | Your responsibility (folder prefixes + IAM policies) | Built-in (project/folder-level isolation) |
| Access control | Custom middleware (your code, your bugs) | Platform-level, API key scoped per project |
| Signed URLs | Manual key management, expiration logic, CloudFront key pairs | Automatic (private folders return signed URLs) |
| Audit logging | Build it yourself (S3 access logs + custom application logs) | Included or via API |
| Encryption at rest | Configure per bucket (SSE-S3, SSE-KMS, or SSE-C) | Handled by provider |
| CDN delivery | Separate CloudFront setup and configuration | Included (450+ edge locations) |
| Compliance (BAA, SOC 2) | Your responsibility to configure and document | Provider's certification (verify coverage) |
| Flexibility | Full control over every detail | Constrained to provider's feature set |
| Ongoing maintenance | Key rotation, monitoring, incident response — all on you | Provider's responsibility |
Conclusion: Build the Vault Your Users Deserve
A private file vault is not a feature you tack on after launch — it's a foundational architecture decision that affects security, compliance, and user trust from day one.
The key components:
- Tenant isolation at the storage layer — folder-per-tenant with private bucket policies, not security by obscurity
- Access control middleware — database-verified ownership checks on every request, fail-closed by default
- Signed URLs — short-lived, per-file access tokens that keep bytes off your server and out of your logs
- Audit logging — immutable records of every file operation for compliance and debugging
- Encryption everywhere — at rest and in transit, with key management appropriate to your compliance requirements
If you want to build this yourself, the code examples in this guide give you a solid starting point. The patterns are well-established and the AWS SDKs are mature.
If you'd rather skip the infrastructure work and focus on your application, files.link provides multi-tenant file storage with built-in private file access, automatic signed URL generation, and a file upload API that handles the plumbing — so you can ship the features your users are actually paying for.
Either way, don't ship the naive approach. Your users' files deserve a vault, not a bucket with a long URL.