
Large file upload size issues

Lately I’ve been working on a project to help share my VirtualBox Vagrant boxes privately. It is a very simple clone of the vagrant-cloud API that interacts with Packer and Vagrant. As part of this process, Packer uploads the artifacts it creates, and this became a problem in my dev environment. Nginx and Apache differ in how file uploads are configured, both in their defaults and in general.

My dev environment uses Nginx + PHP-FPM, and this is where the file upload issues began. I was receiving “413 Request Entity Too Large” responses. I thought this was going to be easy: based on my years of PHP experience, I figured I would just need to increase post_max_size and upload_max_filesize. So I increased both to 7G, but I was still receiving the 413 error. After further investigation, it turns out that Nginx has a default client_max_body_size of 1M. I increased that to 7G as well (I was testing ISO uploads), and uploads worked without any issues.
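For reference, the change boiled down to something like this (the 7G value matches my ISO testing; size it to your own artifacts):

```nginx
# nginx.conf — valid in http, server, or location context.
# Nginx rejects any request body larger than this with a 413
# before the request ever reaches PHP-FPM. Default is 1m.
client_max_body_size 7G;
```

And the corresponding php.ini settings, which only matter for uploads that actually flow through PHP’s form-upload handling:

```ini
; php.ini
post_max_size = 7G
upload_max_filesize = 7G
```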

Fast forward a week… I deployed Phagrancy to one of my Apache + PHP-FPM servers, and I had no upload issues. This surprised me, until I realized that Packer uses PUT requests to upload the build artifacts, bypassing PHP’s upload_max_filesize and post_max_size settings (those govern POST bodies and $_FILES uploads; a PUT body is read straight from php://input). Apache, for its part, has LimitRequestBody, but it defaults to 0, meaning unlimited size.
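To illustrate why the PHP limits never came into play (the host and path below are made up for the example), a Packer-style artifact upload is just a raw PUT with the file bytes as the body, not a multipart form post:

```http
PUT /api/v1/box/acme/base/version/1.0.0/provider/virtualbox/upload HTTP/1.1
Host: phagrancy.example.com
Content-Type: application/octet-stream
Content-Length: 7516192768

<raw bytes of the .box artifact>
```

Since there is no multipart form data, PHP’s file-upload machinery (and its size limits) is never involved; only the web server’s own body-size limit stands between the client and the application.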

So, going from one extreme to the other, I think it’s fair to say the following:
* Raise Nginx’s client_max_body_size high enough for your largest artifacts.
* Set Apache’s LimitRequestBody explicitly (DoS attacks are still common, and an unlimited request body could easily exhaust your Apache instance).
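For the Apache side, a minimal sketch (the 8 GiB cap is an example value, not a recommendation; pick a limit just above your largest expected artifact):

```apache
# Valid in server config, virtual host, <Directory>, or .htaccess context.
# LimitRequestBody takes a byte count; 0 (the default) means unlimited.
# Requests with larger bodies are rejected with a 413.
LimitRequestBody 8589934592
```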
