Fast(er) Content Delivery, 17 prime tips
Whenever I am trying to figure out additional ways to improve my already solved problem of high-speed content delivery, I procrastinate by reading benchmarks of Nginx, Cherokee, and Linux TCP performance tweaks. In this delicate dalliance, I find myself reading lots of blog posts by morons who seem to think that “their solution” to web content delivery is the starship that destroys Star Destroyers. They’ll say pompous things like “oh, we got up to 500 requests per second”, along with “to learn how I did it, feel free to contact me and hire me!”. To which I say, “I deliver 100,000 requests per second BITCHES!” (with a dual i7; 50,000/s with a 3 GHz Core 2 Duo). I don’t have some giant elastic cloud computing nightmare, or a CDN, or a million servers.
(To learn how I did it, please feel free to hire me)
The real problem is with the internet itself. The internet gives everyone a pedestal for their half-thought-out and barely informed ideas, and expects everyone else to digest and applaud them. The number of crap posts I have to read every day has finely tuned my online BS filter; I only wish I could flag an author and have Firefox store/retrieve that setting and never show me anything by the offender ever again. I mean, ever.
Anyway, focusing just on the server’s capabilities, here are my tips for people who want to get past the 10–20K requests/sec range:
- Get a Girlfriend.
Oh wait, maybe that was advice for me. I got confused. Let me try again.
Do (in no particular order):
- Use Nginx or Cherokee or G-WAN as the front-end webserver, and put Apache, PHP, and the rest of your dynamic content behind it (config sketch after this list).
- Let Nginx/Cherokee deliver your static content.
- Make as much content static as you can (i.e. make PHP/ASP/etc. unnecessary), and let Nginx deliver it.
- Install eAccelerator for PHP (php.ini sketch after this list).
- Install an SSD. Seriously, if you want this number of requests/second, the SSD more than makes its cost back.
- Get the spinning disks out of the delivery chain: put static content and the static cache of dynamic content on a solid-state drive.
- Understand that traffic spikes lead to traffic growth. If your server cannot handle the spikes, growth will plateau at your average service capacity.
- Broadcast server logs to a remote logger (syslog sketch after this list): local logging keeps pulling the disk head away from the page-service areas of the disk, adding seek latency to every non-cached read.
- Use lazy database connections (PHP sketch after this list): don’t connect to the DB until a fully formed query is ready to go. You’ll be surprised how often some random include file forces a slow DB connection on every page access, even when no query is ever run.
- Use partial page caching (sketch after this list): don’t query the DB or run calculations if you don’t have to! Store parts of results in the cache.
- Did I already say “Use an SSD”? Yeah, cache everything, and cache it on the SSD.
- Tweak your server’s TCP settings so it accepts new sockets and releases old ones faster (sysctl sketch after this list).
- Load balancers are fast, but DNS round robin is faster. A load balancer is a single point of failure that you pay out the butt for; DNS round robin is included with many global DNS services (zone-file sketch after this list).
- Since you just learned to cache everything, feel free to push your caches out to additional delivery nodes (as many as you like) to spread out the traffic if it grows too fast (rsync sketch after this list).
- If your DB data is big, put it on the SSD as well, and tweak your database config for your server’s actual resources (my.cnf sketch after this list). Now that you aren’t running Apache in front, you should have plenty of memory left.
- Store your cache as gzip-compressed data and deliver it gzip-compressed (Nginx sketch after this list). People talk about the bandwidth savings of compression, but from a delivery standpoint the win is fewer packets before the connection can be closed. The faster a connection closes, the faster it can be reused, and the more of them you can handle.
- If you have a lot of small files, small cache files, etc., use a faster filesystem, like reiser4. Back up often. Turn off atime on your mounts in /etc/fstab (fstab sketch after this list).
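Here’s a minimal sketch of the front-end/back-end split and static-first serving from the tips above. Everything in it is an assumption for illustration: the Apache backend on port 8080, the /var/www/static root, the `backend` upstream name.

```nginx
# nginx.conf fragment -- hypothetical ports and paths
http {
    upstream backend {
        server 127.0.0.1:8080;    # Apache+PHP listening only on localhost
    }

    server {
        listen 80;
        root /var/www/static;     # static files plus the static cache of dynamic pages

        location / {
            # Serve straight off disk when the file exists; only misses
            # ever reach Apache.
            try_files $uri $uri/index.html @dynamic;
        }

        location @dynamic {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```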
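The eAccelerator setup is just a few php.ini lines; the shared-memory size and cache directory below are guesses, so size yours to your box:

```ini
; php.ini fragment -- eAccelerator opcode cache (values are assumptions, tune them)
extension = "eaccelerator.so"
eaccelerator.enable    = "1"
eaccelerator.shm_size  = "64"                        ; MB of shared memory
eaccelerator.cache_dir = "/var/cache/eaccelerator"   ; disk spill-over (put it on the SSD)
```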
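For remote logging, classic syslog forwarding does the job; the local6 facility and the loghost name are assumptions:

```
# /etc/rsyslog.conf fragment -- ship the web logs to another box over UDP
local6.*    @loghost.example.com:514

# Apache side: write the access log through logger(1) instead of to local disk
# CustomLog "|/usr/bin/logger -t www -p local6.info" combined
```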
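A lazy-connection sketch in PHP, using mysqli purely for illustration (the credentials and the class name are hypothetical):

```php
<?php
// Nothing touches MySQL until the first query() call, so pages that
// serve entirely from cache never pay the connection cost.
class LazyDb
{
    private $conn = null;

    private function handle()
    {
        if ($this->conn === null) {
            $this->conn = new mysqli('127.0.0.1', 'user', 'pass', 'mydb');
        }
        return $this->conn;
    }

    public function query($sql)
    {
        return $this->handle()->query($sql);
    }
}

$db = new LazyDb();   // safe to include on every page: no query, no connection
```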
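And a partial-page-cache sketch to go with it; the cache path, the 300-second TTL, and render_top_stories() are all made up for the example:

```php
<?php
// File-based fragment cache: store expensive chunks (query results,
// rendered sidebars) and skip the DB while they're fresh.
function cache_fragment($key, $ttl, $generate)
{
    $file = '/var/cache/site/' . md5($key) . '.frag';
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);       // hit: no DB, no recompute
    }
    $data = $generate();                        // miss: do the expensive work once
    file_put_contents($file, $data, LOCK_EX);
    return $data;
}

// Only rebuild the "top stories" box every 5 minutes:
echo cache_fragment('top_stories', 300, function () use ($db) {
    return render_top_stories($db->query('SELECT ...'));
});
```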
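The TCP tweaks live in /etc/sysctl.conf; these particular values are starting points I’d try, not gospel:

```
# /etc/sysctl.conf fragment -- open new sockets and release old ones faster
net.ipv4.tcp_fin_timeout = 15               # free up closed sockets sooner
net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for new connections
net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports
net.core.somaxconn = 4096                   # deeper accept() backlog
net.core.netdev_max_backlog = 4096          # deeper NIC receive queue
# apply with: sysctl -p
```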
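DNS round robin is just repeated A records for the same name; the addresses below are documentation IPs:

```
; zone file fragment -- resolvers rotate through these, spreading the load
www    IN A    203.0.113.10
www    IN A    203.0.113.11
www    IN A    203.0.113.12
```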
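Pushing the cache to extra delivery nodes can be as dumb as an rsync in cron (hostnames and paths hypothetical); each node runs the same Nginx config over the same cache tree:

```
rsync -az --delete /var/cache/site/ node2.example.com:/var/cache/site/
rsync -az --delete /var/cache/site/ node3.example.com:/var/cache/site/
```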
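A MySQL tuning sketch; the sizes assume a box with a few GB of RAM freed up by not running Apache in front, so measure before trusting any of them:

```ini
# my.cnf fragment -- hypothetical sizes for a DB-on-SSD box
[mysqld]
datadir                 = /ssd/mysql    # data files on the SSD
innodb_buffer_pool_size = 2G            # keep the hot InnoDB set in RAM
key_buffer_size         = 256M          # MyISAM index cache
query_cache_size        = 64M           # helps read-heavy loads
```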
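For the pre-compressed cache, Nginx’s gzip_static module will send a neighboring .gz file as-is instead of compressing on every request (it has to be compiled in with --with-http_gzip_static_module):

```nginx
location / {
    gzip_static on;     # if foo.html.gz sits next to foo.html, send the .gz
    gzip_vary   on;     # Vary: Accept-Encoding, so caches don't mix them up
}
```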
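And the fstab side of it; the device names, mount points, and filesystem choice here are whatever your box actually has:

```
# /etc/fstab fragment -- noatime stops every read from turning into a write
/dev/sdb1   /var/www/static   reiser4   noatime   0  2
/dev/sdb2   /var/cache/site   reiser4   noatime   0  2
```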
Don’t (as in less than an order of magnitude speedup, and lots of wasted time):
- Concern yourself with choosing between FastCGI and Apache mod_php; besides, Apache decodes and hands PHP a pile of server variables that are a headache to remember to decode yourself in FastCGI mode.
- Concern yourself with Varnish. Not only are the developers rude, but in our benchmarks Nginx+Varnish was slower than Nginx alone on static files, meaning the kernel is already doing a perfectly fine job of file and descriptor caching. Never mind all the header/compression variability that has to be taken into account.
Random things to understand:
- A faster requests/sec capability on the server can show up as fewer logged requests per second. When connections stall and data doesn’t come back fast enough, browsers retry the request on top of an already overloaded connection queue: the retry gets logged, but no data is returned. After we increased capacity, we saw an apparent (and immediate) 20% drop in requests.
- Dynamic data often ALL seems important, but ask yourself: “will the user die if they get an old version of the page?” If the answer is no, the page can be cached. You want something like 90–95% of all requests to hit the static cache. Let cache expiry happen with cron jobs and the like (cron sketch after this list).
- Benchmark your test setup piecewise (ApacheBench sketch after this list): test raw socket/request handling with a 1-byte file, then with cache hits, then with dynamic pages. Make it a game to see how often you can hit the cache.
- Site speed & crawlers. Many admins treat crawlers as a necessary evil: they block them, or don’t bother optimizing performance for them. This is wrong. Crawlers do generate huge traffic that affects normal users, but they are also how your pages get indexed and found. Optimize your server to feed crawlers as much data as they can handle without it affecting your users.
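Cache expiry by cron can be a one-liner; the path, the .frag extension, and the 10-minute TTL are assumptions carried over from the fragment-cache sketch above:

```
# crontab fragment -- age out cached fragments instead of checking per request
*/5 * * * *   find /var/cache/site -name '*.frag' -mmin +10 -delete
```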
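A piecewise benchmark run with ApacheBench might look like this (hostname and paths hypothetical); compare the three numbers to see which layer is your bottleneck:

```
ab -n 100000 -c 500 http://server/1byte.txt          # raw socket/accept handling
ab -n 100000 -c 500 http://server/cached-page.html   # static cache path
ab -n 10000  -c 100 http://server/dynamic.php        # full dynamic path
```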