When uploading a file bigger than 4G, it gets stuck at 4G, i.e. 4294967296 bytes. Obviously this is the same good old 4G file size limit we have here in ApisCP. This time, however, I’m unable to solve it. I know there are SO MANY config locations for this 4G limit, but these are the ones I’ve changed so far. Please see the command output below.
I need help figuring out which config I’ve missed so that the httpd web server / PHP can store a file bigger than 4G. I’ve attempted to raise the limit to 20G without success.
What’s the HTTP response code? What application is accepting the upload? Does it always fail at the 4 GB boundary?
You’d need to alter the value in:
limits.conf via system.process-limits: this affects the maximum file size that can be created within the vfs; it can be tested within the account using dd if=/dev/zero of=~/file bs=1G count=5
Apache may be overridden using LimitFSIZE; its primary purpose is to keep logfiles from running away
Lastly, PHP mounts a tmpfs for /tmp. If uploads under 4 GB but over 1 GB are failing, that is a consequence of using a private /tmp mount. It may be adjusted in the policy file.
Resource limits in php-fpm.service or php-fpm-siteXX.service do not propagate to the individual pools, which control the physical startup of PHP-FPM. Those services are named php-fpm-siteXX-POOL-NAME.service. Policy overrides will propagate to these services, however (see the quick check sketched below).
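A quick way to see this (a rough sketch; the unit names below are placeholders for the actual site and pool) is to compare what systemd reports for each unit:

systemctl show -p LimitFSIZE php-fpm-siteXX.service   # umbrella unit, its limit does not reach the workers
systemctl show -p LimitFSIZE php-fpm-siteXX-POOL-NAME.service   # unit that actually launches the pool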
The method is a POST upload to a PHP application.
The total file size is 10G.
It always fails at exactly 4G / 4294967296 bytes. Watching the terminal, I can see it storing the file in the uploads folder under /var/www, and it stops at the above limit.
Running dd if=/dev/zero of=~/file bs=1G count=5 works fine.
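One cross-check (a sketch; the pool match below is a guess, substitute the real pool name) is to read the running FPM worker’s own limit from /proc, since dd only exercises the shell’s ulimit:

pgrep -af 'php-fpm: pool'   # find a worker PID for the site's pool
grep 'Max file size' /proc/<PID>/limits   # effective RLIMIT_FSIZE of that worker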
Understood. I’ve added
[Service]
LimitFSIZE=20G
to the php-fpm-siteXX-POOL-NAME.service now as well.
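If it matters, the change only takes effect once systemd reloads the unit definition and the pool is restarted; a sketch, using the same placeholder unit name:

systemctl daemon-reload
systemctl restart php-fpm-siteXX-POOL-NAME.service
systemctl show -p LimitFSIZE php-fpm-siteXX-POOL-NAME.service   # confirm the new value is in effect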
Disable PrivateTmp, since that mount is memory-backed and won’t suit a file of this size. Also increase the LimitFSIZE value that’s set when the PHP-FPM process spins up:
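A minimal sketch of what both settings could look like as a systemd drop-in for the pool unit (the path and unit name are placeholders; per the earlier note, a policy override is the propagating way to carry these in ApisCP):

# /etc/systemd/system/php-fpm-siteXX-POOL-NAME.service.d/override.conf (example path)
[Service]
PrivateTmp=false
LimitFSIZE=20G

With PrivateTmp off, uploads spool to a disk-backed /tmp rather than a memory-backed tmpfs, so a 10G temporary file no longer has to fit in RAM.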