Okay, I've just finished greatly simplifying the CSS for my blog. I've even commented the CSS so I'll remember what controls what. It seems to look a bit nicer on the browsers I've tried. But my TiBook is currently loaned out, so I'd appreciate feedback from Mac (and other) users. Tell me if you like it or not. If not, why? What would you change? And please tell me which browser you use.
I should note that it's completely unreadable in Netscape 4. But I no longer care. If you're still using that old, steaming pile of crap, not being able to read my weblog is the least of your problems. Just disable style sheets and it'll be just fine. Really. I tried.
Hm. It looks like Simon wants to integrate his mail, news, and RSS/weblog reading into a single application. In his case, that application is gnus, the killer Mail/News reader for Emacs. What he's saying really hits home for me. I recently wrote about my frustrations with RSS aggregators too.
I have to agree with Simon. Having an RSS back-end for gnus would be excellent. However, I'd like to go a step further and propose something for those GUI-inclined folks (not necessarily me, since I use mutt to read mail): Mozilla needs RSS aggregation capabilities.
Think about it. All the pieces are there. Mozilla is infinitely scriptable. It certainly knows how to fetch content via HTTP. It knows how to parse XML. Building a UI with Mozilla isn't terribly difficult.
Is anyone working on this? If not, why not? This seems like a killer app. Put Mail, News, and Weblogs on equal footing in the only Open Source cross-platform platform (that's not a typo) we have.
I ran a quick Google search to see who's working with RSS and Mozilla. I found a few interesting items:
And there's some RDF info on mozilla.org too.
But as far as I can tell, nobody has done this. Hmm. Someone really should.
Starting about a year ago, I've been getting catalogs every few months from The Territory Ahead. I had never heard of them before. They appear to be an L.L. Bean for people who really have cash to burn. The stuff is expensive.
I'll probably never buy from them, and I'm really not sure how or why they got my name. But for some reason I flip through the catalog each time it shows up. Then I throw it away.
Maybe L.L. Bean sold my info. I'm not sure.
Anyway, I thought I'd share. Maybe I'm just recovering from blog-withdrawal or something.
Though we've had a subscriptions page up for a while on the PHP Journal web site, there were two problems with it.
Those are now fixed. If you subscribed before, please head over to the PHP Journal site and re-subscribe. If you haven't yet... Well, what are you waiting for?
If you like reading about Mac OS X topics and looking at beautiful women, look no further than the inluminent weblog. Then again, if that's not your style, don't go there. :-)
It looks like Phil Windley has an iBook to play with. I see that he's identified some of the same strengths and weaknesses that I noted. (Apparently we have similar ThinkPads and would really like a high-res LCD on a Mac notebook.)
It'll be interesting to hear how well it integrates with the infrastructure in Utah's government.
In other news, I'm TiBook-less for a while. Jeffrey Friedl and his wife have mine. They're thinking of getting an iBook and wanted to play with something similar for a couple weeks before taking the plunge. I'm pretty sure they'll end up doing it, considering that he just bought a WAP11 wireless access point so that they could experience wireless computing. :-)
So what have I been using at home? My ThinkPad T23 with XP? No, my 3-year-old ThinkPad 600E w/Debian Linux. Hey, I've got xterms, Emacs, and Galeon. What else do I really need?
Wow, it's been a busy week. I was totally swamped for several days dealing with the remember.yahoo.com MySQL servers and related stuff. And then I used a day or two to recover (sleep, shower, etc).
Anyway, I made some interesting discoveries along the way. The most surprising one had to do with thread caching on Linux when you have a busy MySQL server--busy in a particular way, mind you.
You see, we had a single master server which all the web servers could connect to (using PHP) whenever someone made a change. That includes creating a tile (several hundred thousand tiles were created), approving a tile, marking one as "cool", and so on. All told, the master was quite busy.
Because there were between 20 and 45 front-end web servers during that time, and each could have had up to 70 Apache processes that might have needed to connect, we faced a problem: in the worst case, the master needed to handle up to 3,150 connections (that's 45 x 70). And most of the PHP code used mysql_pconnect() to hold persistent connections.
Rather than worry about how to support that many simultaneous connections, I figured I could rely on the wait_timeout setting, which tells MySQL to close any connection that has been idle for too long. But I didn't realize the extent of the problem until I started getting reports from the web servers that the master was refusing connections. Why? Because I had set the maximum number of connections to a reasonable value in the master's my.cnf file:
set-variable = max_connections=180
set-variable = max_user_connections=140
And at that time, the wait_timeout was set to 600 seconds (10 minutes). Clearly that was a problem. There were a lot of idle clients holding connections and blocking out new clients from connecting and getting real work done.
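One quick way to see those idle clients for yourself (a general MySQL technique, not something we did at the time) is SHOW PROCESSLIST, which lists every current connection along with its state and how long it has been sitting there. Connections in the "Sleep" state with a large Time value are the ones hogging slots:

SHOW PROCESSLIST;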
What to do?
We could have stopped using mysql_pconnect(), but as you'll see, that wouldn't have solved the underlying problem.
I needed to adjust the settings, but I wasn't sure what values to use. And I really didn't want to keep stopping and starting the master. That would just suck. Then I remembered that we were running MySQL 4.0.4. It has a new feature that allows you to change most of the server settings on the fly, without a restart! Read about it in the on-line manual.
All I needed to do was execute a few variations on this command:
SET GLOBAL wait_timeout=60;
(with different values in place of "60") to try and strike a balance between letting new clients in and kicking out already-connected users too quickly.
Ultimately, I settled on a timeout of 15 seconds.
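One thing worth knowing about SET GLOBAL: it only affects connections made after the change, since each client gets its own session copy of variables like wait_timeout when it connects. To double-check which values are actually in effect, you can ask the server directly:

SHOW VARIABLES LIKE 'wait_timeout';
SELECT @@global.wait_timeout, @@session.wait_timeout;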
But that had an interesting and unanticipated side-effect. It meant that the Linux server was having to create new threads (MySQL is a multi-threaded server) at a very high rate. That sucks up a measurable amount of CPU time.
How much CPU time? By the time I got around to looking at the output of SHOW STATUS and seeing this:
| Threads_cached    | 0      |
| Threads_created   | 270194 |
| Threads_connected | 46     |
| Threads_running   | 28     |
Things were pretty bad. The machine had very little idle CPU time--probably 5-10% at the most. But it really wasn't doing that much work--maybe 40 queries per second. I was a bit puzzled. But that Threads_created number jumped out at me. It was high and increasing rapidly.
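By the way, if you only care about the thread counters, you don't have to wade through the full SHOW STATUS output. A LIKE pattern narrows it down to just those four rows:

SHOW STATUS LIKE 'Threads%';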
Luckily I remembered the thread_cache setting. So I decided to investigate (using the new syntax for examining server variables):
mysql> SELECT @@global.thread_cache_size;
+---------------------+
| @@thread_cache_size |
+---------------------+
| 0                   |
+---------------------+
1 row in set (0.00 sec)
Uh oh. I never set the thread cache in my.cnf, so it assumed the default. That's bad. It's like removing the pre-forking capabilities of Apache 1.3 and letting it get pounded on a busy web site. The "fork a new process for each new request" model gets pretty expensive pretty quickly.
Luckily the thread cache is also tunable on the fly now. So all I had to do was this:
SET GLOBAL thread_cache_size=40;
I took a guess and figured that by caching 40 threads, we'd be saving a lot of work. And boy was I right!
In the other window, where I was running vmstat 1, I noticed a dramatic change. The idle CPU on the machine immediately went from 5-10% to 35-40%.
If only I had thought of that sooner!
So the moral of the story is this: If you have a busy server that's getting a lot of quick connections, set your thread cache high enough that the Threads_created value in SHOW STATUS stops increasing. Your CPU will thank you.
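One caveat: SET GLOBAL changes don't survive a server restart. To make the thread cache permanent, the setting also belongs in my.cnf, using the same set-variable syntax as the connection limits above (adjust the number to fit your own workload):

set-variable = thread_cache_size=40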
I don't feel bad though. We were all going nuts trying to tune and optimize the code and servers while everything was running, and we'd had very little sleep. Thread caching really wasn't the worst of our problems. But it became the worst after we had fixed all the bigger ones.
It was quite a learning experience.