Intro and Useful Points - (I'll backfill this intro as soon as I get back to my notes...)
Practicals - The problem: scalable static image serving - e.g. an in-house Akamai. We looked at a vendor solution and its pros and cons. Next we did a build-your-own, looking at each of the pieces and scaling the solution bigger and smaller. We talked through configuring each of the pieces, including a cool look at how to decide which image server is geographically closest to the user: run multiple DNS servers, one near each image-serving cluster. Things should converge... but they don't - routes change too quickly for this to work. It does work if you use a DNS shared IP (anycast): all three clusters claim to serve the same IP, so you always reach the closest server, with no convergence time. You can use a similar technique (if you own enough of the right, big, colocated parts) to make your system DDoS-resistant.
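(Not from the talk, just to make the multiple-DNS-servers idea concrete: a minimal Python sketch that asks each regional authoritative nameserver the same question and shows that each answers with its own local cluster's address. The hostname, the RFC 5737 example IPs, and the dnspython dependency are my own assumptions.)

    # Each image cluster runs its own authoritative DNS server that answers
    # "images.example.com" with the address of its *local* cluster.
    # Hostname and server addresses are hypothetical (RFC 5737 example ranges).
    import dns.resolver          # assumption: dnspython 2.x (pip install dnspython)

    REGIONAL_NAMESERVERS = {
        "us-east": "192.0.2.53",
        "eu-west": "198.51.100.53",
        "ap-east": "203.0.113.53",
    }

    for region, ns_ip in REGIONAL_NAMESERVERS.items():
        resolver = dns.resolver.Resolver(configure=False)   # ignore /etc/resolv.conf
        resolver.nameservers = [ns_ip]                      # ask only this region's server
        answer = resolver.resolve("images.example.com", "A")
        print(region, "->", [rr.address for rr in answer])

With anycast there is nothing to converge: every cluster advertises the same IP and routing itself delivers the user to the nearest one.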
Logging - Again, we started with a hypothetical configuration and defined goals: multiple servers, real-time log analysis and reaction. Next he covered the (dis)advantages of distributed logging and passive logging (sniffing), leading us to multicast logging. (JMS is essentially a perfect multicast logger at the app level.) You can add multiple subscribers and loggers on the fly, which enables active and passive monitors, special-purpose analyzers, and write-to-disk tasks.
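(My own illustration, not Theo's code - the talk used Spread and mod_log_spread rather than raw sockets. A minimal sketch of the fan-out idea: each web server fires log lines at a multicast group, and any number of subscribers - live monitors, analyzers, write-to-disk tasks - can join or leave on the fly. The group address and port are arbitrary, and in practice the two halves run as separate processes.)

    import socket
    import struct

    MCAST_GROUP, MCAST_PORT = "239.192.0.1", 5140    # arbitrary group/port for this sketch

    # Publisher side (run on each web server): push every log line to the group.
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
    send_sock.sendto(b"10.0.0.7 GET /article/42 200", (MCAST_GROUP, MCAST_PORT))

    # Subscriber side (run as many as you like: live monitor, analyzer,
    # spreadlogd-style write-to-disk task). Each one simply joins the group.
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    recv_sock.bind(("", MCAST_PORT))
    membership = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    while True:
        line, _addr = recv_sock.recvfrom(65535)
        print(line.decode())                         # or parse it, alert, write to disk, ...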
Caching Architectures - Caching can benefit application performance (and perceived performance) greatly. Theo covered the layered cache, integrated cache (in the app), data cache (in the data store), and write-through cache. As before, he created an example case with a sample system configuration and goals for the exercise. In our sample bloggy-newsy app, when an article request comes in, we check to see if that page already exists on the server; if not, we fetch it from the DB and create the page locally, so it exists for all future requests - much like we already do at Duke with patient discharge dates.
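(A toy sketch of that build-on-first-miss flow, assuming Python; fetch_article(), render_page(), and the cache directory are hypothetical stand-ins for the app's real DB query, template, and docroot.)

    import os

    CACHE_DIR = "/var/tmp/newsy-cache"        # hypothetical cache location

    def fetch_article(article_id):
        # Stand-in for the real database query.
        return {"title": "Article %d" % article_id, "body": "..."}

    def render_page(row):
        # Stand-in for the real template.
        return "<h1>%s</h1><p>%s</p>" % (row["title"], row["body"])

    def serve_article(article_id):
        cached = os.path.join(CACHE_DIR, "article-%d.html" % article_id)
        if os.path.exists(cached):             # cache hit: no DB work at all
            with open(cached) as f:
                return f.read()
        html = render_page(fetch_article(article_id))   # cache miss: build it once
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(cached, "w") as f:           # materialize for every future request
            f.write(html)
        return html

    print(serve_article(42))    # first call builds the page, later calls just read the file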
Tiered architectures - Theo started with background on tiered design, then evolved it into a worthy rant against traditional tiering (expensive both in people and in dollars, and hard to predict need and scale up/down). His alternative: a healthy replication system that allows any machine (web server) to answer any request. Scaling then just requires adding (or removing) like servers as needed. This sounds like the kind of thing that Mongrel enables. Need more capability? Just stack it on like Lego bricks.
Database Replication (part of Tiering) - It's a hard thing to deal with and do well. This is not warm- (or cold-) failover - this is true DB replication (clustering). Multimaster replication is a long way off; master-slave is ready to use now. He also mentioned multi-vendor database replication - using MySQL to cache and crunch 100 distributed instances, all dumping their data to a single Oracle backend.
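(Not from the talk, but to show what working master-slave replication buys you on the application side: writes go to the master, reads spread across the slaves. Hostnames and credentials are made up, and it assumes a MySQL driver such as pymysql.)

    import random
    import pymysql    # assumption: pip install pymysql (any DB-API driver works the same way)

    MASTER = {"host": "db-master.example.com", "user": "app", "password": "secret", "database": "newsy"}
    SLAVES = [
        {"host": "db-slave1.example.com", "user": "app", "password": "secret", "database": "newsy"},
        {"host": "db-slave2.example.com", "user": "app", "password": "secret", "database": "newsy"},
    ]

    def write_connection():
        # All inserts/updates must hit the master so the slaves can replay them.
        return pymysql.connect(**MASTER)

    def read_connection():
        # Reads can go to any slave; random choice is the crudest possible balancing.
        # (Beware replication lag: a just-written row may not be on the slave yet.)
        return pymysql.connect(**random.choice(SLAVES))

    # Usage: writes through the master...
    conn = write_connection()
    cur = conn.cursor()
    cur.execute("INSERT INTO articles (title) VALUES (%s)", ("Hello",))
    conn.commit()
    conn.close()

    # ...reads from a replica.
    conn = read_connection()
    cur = conn.cursor()
    cur.execute("SELECT id, title FROM articles ORDER BY id DESC LIMIT 10")
    print(cur.fetchall())
    conn.close()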
The Right Tool for the Job - a.k.a. How to Do It Wrong. For our previous sample newsy-bloggy app, the customer wants "which 30 users last loaded this page?" and "which pages did a user load in the last 30 minutes?". He talked through a MySQL implementation of this, how poorly it scales, and how ludicrous the implementation would be. Next we created a custom app (a skiplist structure in C) that hooks into the logging stream (from above) and crunches it on the fly.
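(Theo's version was a C skiplist hooked straight into the log stream; purely to make the two queries concrete, here's the same bookkeeping sketched with ordinary Python dicts and deques - not his implementation.)

    import time
    from collections import defaultdict, deque

    WINDOW = 30 * 60                                            # 30 minutes, in seconds

    last_users_by_page = defaultdict(lambda: deque(maxlen=30))  # page -> last 30 user ids (may repeat)
    recent_hits_by_user = defaultdict(deque)                    # user -> deque of (timestamp, page)

    def on_log_line(user_id, page, now=None):
        """Feed one parsed record from the live log stream (e.g. the multicast subscriber above)."""
        now = time.time() if now is None else now
        last_users_by_page[page].append(user_id)
        hits = recent_hits_by_user[user_id]
        hits.append((now, page))
        while hits and hits[0][0] < now - WINDOW:               # expire anything older than 30 minutes
            hits.popleft()

    def last_30_users(page):
        return list(last_users_by_page[page])

    def pages_in_last_30_minutes(user_id, now=None):
        now = time.time() if now is None else now
        return [p for ts, p in recent_hits_by_user[user_id] if ts >= now - WINDOW]

    # Example feed:
    on_log_line("alice", "/article/42")
    print(last_30_users("/article/42"))                         # -> ['alice']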
Q&A Session
Q: File systems?
A: There's GFS (at a cost)... Lustre is good. None are perfect; avoid them all if you can.
Q: Does Spread have a performance cost?
A: No, negligible.
Q: I have huge sessions that resist distribution.
A: Go optimize them: make the session smaller or compress it. Another option is to subdivide the session among different URLs, and only fetch the big/expensive pieces when you need them.
Q: Massive databases, how do you scale them?
A: The most obvious way is to federate (subdivide by geography, age, etc.) across different databases.
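(A one-screen sketch of that federation answer, with a made-up split by region: the routing function picks the database, and each backend holds only its slice.)

    # Federation ("sharding") by geography: each region's users live in their own
    # database, and the application routes each request to the right one.
    # Shard map and connection settings are hypothetical.
    SHARDS = {
        "us": {"host": "db-us.example.com", "database": "newsy_us"},
        "eu": {"host": "db-eu.example.com", "database": "newsy_eu"},
        "ap": {"host": "db-ap.example.com", "database": "newsy_ap"},
    }

    def shard_for(user):
        """Pick the home database for a user record by its region attribute."""
        return SHARDS[user["region"]]

    print(shard_for({"id": 42, "region": "eu"}))   # -> the eu database settings
    # Splitting by age or by a hash of the user id works the same way: only the
    # routing function changes, not the per-shard databases.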
Technologies from this talk to look more at:
- spread - a group messaging service
- whackamole - provides load-balancing and failover
- mod_log_spread - an Apache logging module that publishes access logs to a Spread group
- spreadlogd - a daemon that subscribes to a Spread group and writes the logged messages to disk
- Splash - distributed sessions for clusters of web servers
Tags: OSCON06