Why large uploads fail
- PHP upload and POST limits are too low.
- Execution timeouts stop long-running requests.
- The server has to relay the full file instead of orchestrating parts.
- There is no resume state when a connection drops.
- The storage backend is reachable, but its CORS configuration does not allow the browser to send upload requests to it directly.
- The site's Content-Security-Policy blocks direct-to-storage uploads when its connect-src rules do not include the storage origin.
A better model: multipart direct-to-storage uploads
Instead of making PHP assemble the entire file, the app creates an upload session, reserves quota, signs or prepares the part uploads, tracks part completion, and finalizes the upload when all parts are present. The actual file bytes can move directly from the client to the storage backend.
This architecture is a better fit for multi-gigabyte files, resumed transfers, and object-storage workflows.
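The orchestration above (create a session, sign parts, track completion, finalize, resume after a drop) can be sketched as a minimal session tracker. This is an illustration of the pattern, not the fyuhls API; names like `UploadSession` and the 64 MiB part size are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class UploadSession:
    """Tracks a multipart upload: the server signs parts and records completion;
    the file bytes themselves go straight to the storage backend."""
    total_size: int
    part_size: int = 64 * 1024 * 1024  # assumed 64 MiB parts keep each request small
    completed: set = field(default_factory=set)

    @property
    def part_count(self) -> int:
        # Ceiling division: the last part may be smaller than part_size.
        return -(-self.total_size // self.part_size)

    def mark_complete(self, part_number: int) -> None:
        if not 1 <= part_number <= self.part_count:
            raise ValueError(f"part {part_number} out of range")
        self.completed.add(part_number)

    def missing_parts(self) -> list:
        # After a dropped connection, the client asks for this list and resumes
        # instead of re-sending bytes it already transferred.
        return [n for n in range(1, self.part_count + 1) if n not in self.completed]

    def can_finalize(self) -> bool:
        return not self.missing_parts()


# A 5 GiB upload split into 64 MiB parts:
session = UploadSession(total_size=5 * 1024**3)
session.mark_complete(1)
session.mark_complete(2)
print(session.part_count)           # 80
print(session.can_finalize())       # False
print(session.missing_parts()[:3])  # [3, 4, 5]
```

The key design point is that the session state (which parts are done) lives server-side, so resume works from any client that can present the session id.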
fyuhls also exposes a managed-upload API shortcut so desktop tools can request signed parts and resume without recreating the full multipart negotiation on their own.
Reasonable PHP baseline for bigger uploads
The fyuhls documentation recommends a practical baseline for 2GB-plus deployments:
upload_max_filesize = 256M
post_max_size = 300M
max_execution_time = 3600
memory_limit = 512M
Those values do not cap the final file at 256 MB. In a multipart model, each part travels in its own request, so the limits govern individual part and orchestration requests rather than the assembled file's size.
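To make the distinction concrete, here is the arithmetic under an assumed 64 MiB part size (the part size is a deployment choice, not a value taken from the fyuhls documentation):

```python
MiB = 1024 ** 2
GiB = 1024 ** 3

upload_max_filesize = 256 * MiB  # php.ini cap on any single request body
part_size = 64 * MiB             # each multipart part is its own small request
final_file = 8 * GiB             # the assembled object can far exceed 256M

# Every individual part stays well under the PHP request limit...
assert part_size < upload_max_filesize

# ...while the finished file is limited only by the storage backend.
parts_needed = -(-final_file // part_size)  # ceiling division
print(parts_needed)  # 128
```

An 8 GiB file clears a 256M PHP limit comfortably because no single request ever carries more than one 64 MiB part.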
Where fyuhls fits
fyuhls includes the multipart upload path, API endpoints for session and part handling, storage backend support, and operational cleanup through its heartbeat cron runner. That makes it useful for self-hosted file hosting sites that need more than a single browser form.
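fyuhls handles cleanup through its heartbeat cron runner; the general shape of such a job is expiring upload sessions that stopped reporting progress. A generic sketch, with the 24-hour TTL and the function name as assumptions rather than fyuhls behavior:

```python
import time

SESSION_TTL = 24 * 3600  # assumption: abandoned sessions expire after a day


def expire_stale_sessions(sessions: dict, now=None) -> list:
    """Remove upload sessions whose last heartbeat is older than the TTL.

    Returns the purged session ids so quota reservations and orphaned
    storage parts can be released as well."""
    now = time.time() if now is None else now
    stale = [sid for sid, last_seen in sessions.items()
             if now - last_seen > SESSION_TTL]
    for sid in stale:
        del sessions[sid]
    return stale


# One session last seen 100,000 s ago, one seen just now:
sessions = {"a1": 0.0, "b2": 100_000.0}
purged = expire_stale_sessions(sessions, now=100_000.0)
print(purged)    # ['a1']
print(sessions)  # {'b2': 100000.0}
```

Running this on a schedule keeps half-finished uploads from holding quota indefinitely.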
If you want to integrate uploads from tools outside the web UI, the API reference covers token-based uploads and managed upload flows.