Thursday, May 27, 2004
Video iPods
Coincidentally, there were a couple of major announcements today about personal music players. The first was pointed out to me in a comment in an earlier post. Microsoft has made some fleeting non-announcement about a $50 "iPod-killer." Slashdot also picked this up.
I already mentioned my thoughts on this supposed $50 iPod killer in the previous post. The more interesting thing to me is that companies (like Sony) seem to think that a "video iPod" is a good idea. I'm sure a lot of people think they want one of these, too. I can only think that this is an extension of PC-think, where more is better. Steve Jobs has ruled out a video iPod at this point, and rightly so.
The obvious problem is that people experience music and video differently. I might listen to my iPod while at work, while walking around town, or while exercising. Video wouldn't work at all in any of these obvious use cases. Now, there are use cases when you might plausibly watch video, for example, on a long commute, or trip. But I suspect that for most people, the first case isn't enough time to fully watch anything, and the second doesn't happen often enough to justify the expense.
Furthermore, there's a fundamental tension between something watchable (large screen) and something portable (lightweight and small). The iPod is really useful because you can carry it around everywhere; it's so small and light that my cellphone and keys usually feel bigger and heavier in my other pocket. (And I don't even have a Mini). On top of that, driving a fast-refreshing backlit color LCD display takes way more battery power, and decoding compressed video takes way more CPU, so the device will have to be way heavier to let you watch a decent amount. Oh yeah, and heat problems? I doubt that the combination of fewer opportunities to watch it and the difficulty of getting the size right is going to draw a lot of people. (Not nobody, just not enough to be worth it).
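To put rough numbers on the battery argument, here's a back-of-envelope sketch. Every figure in it is an assumption picked for illustration, not a measured spec of any real player:

```python
# Back-of-envelope battery estimate for audio vs. video playback.
# All numbers below are rough assumptions for illustration, not measured specs.

def playback_hours(battery_wh, draw_watts):
    """Hours of playback from a battery of given capacity at a given draw."""
    return battery_wh / draw_watts

BATTERY_WH = 4.0      # assumed capacity of a small player's battery (~4 Wh)

AUDIO_DRAW_W = 0.5    # assumed draw: audio decode + disk spin-ups, screen mostly off
VIDEO_DRAW_W = 2.5    # assumed draw: backlit color LCD + much heavier video decode

audio = playback_hours(BATTERY_WH, AUDIO_DRAW_W)   # ~8 hours of music
video = playback_hours(BATTERY_WH, VIDEO_DRAW_W)   # ~1.6 hours of video

print(f"audio: {audio:.1f} h, video: {video:.1f} h")
```

Even with these charitable made-up numbers, the same battery that covers a full day of music barely covers one movie, so watching a decent amount means several times the battery, which means several times the weight.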
Another angle that I don't see a lot of talk about is what the heck you're supposed to put on this thing if it ever happens. You're not supposed to rip your DVDs (as if you'd want to watch your DVDs on this thing). Broadcasters are trying as hard as they can to lock everything up with broadcast flags and so forth; they don't seem likely to be cooperative in supplying content. There are a few small legal movie download sites, but they don't seem too realistic yet. The MP3 player is compelling because it lets everyone take their existing CDs and put them on. (This is probably a historical accident, since the record industry never put even the flimsiest of plausible "anticircumvention devices" on CDs). Are people just supposed to buy digital copies of movies they already have on DVD?
Everyone has DVDs, and portable DVD players haven't really taken off. I doubt adding video to what is basically an iPod is going to really sell people. There are more interesting directions to take the iPod concept in. That said, I imagine that Steve Jobs actually has some interesting video device in the works. I just don't think that it's something like an iPod that displays video. Given Apple's recent string of incredibly great consumer products, I hope it'll be something much more interesting.
Now Here's a Question
Who the heck linked me? Did someone link me? I'm getting all these strangers reading my blog and posting comments. Not that you're not welcome, I just don't know where you all come from.
So Here's an Actual Blog-Like Entry...
...since you're all complaining that I put actual content on my blog.
Macneil posted a comment wondering about the quality of "included software." Amusingly enough, that same day, Raymond Chen posted an article about almost the same topic.
Apple Is Repeating Old Mistakes (Thank God)
One of the pieces of conventional wisdom most easily pulled out of the average high-tech buff's mind is the danger of closed platforms. While Apple held tightly to the Macintosh platform, IBM opened the PC platform so that any company could make compatible computers. The rest is history. Maybe I'm going to severely embarrass myself here, but I'm going to argue against this conventional wisdom in the digital music player market.
Lately, there's been a bit of buzz about Apple's iPod, and how best to manage that product. The iPod is easily the number one digital music player in the market, and it's basically a closed platform. You want an iPod, you go to Apple, you buy their player, and you download songs in their format (AAC + FairPlay). If you want to use either of those components (the player or the song downloads), you have to use the other.
So now there's a lot of pressure on Apple to open up the platform they've created to avoid a repeat of what happened with Macintosh. Particularly, the CEO of Real, Rob Glaser, has begged Steve Jobs to open the iPod platform.
Apple would have to be a real idiot to do this now. It would be tantamount to surrendering without a war. To a weaker enemy. They've got the majority of the mp3 player market, and make a healthy margin on iPods. They've got even more of the digital download market (at least 70%). Though hey, Real would love it if Apple were stupid enough to give them a free way into the market.
There are two supposed threats. The first is obviously Microsoft. Microsoft's competing digital media platform is more open, in that other companies can license it and provide products based on it. The second threat is the major consumer electronics giant, Sony (those guys who invented the personal music player market, and until recently owned it). These are credible threats, of course, and not to be taken lightly. But so far, they haven't posed much of a problem in the market.
Sony just recently announced their offerings in this market, such as their VAIO Pocket iPod-clone, and their online music store, Sony Connect. Without seeing either of these in person, both seem pretty lame. I think this article on iPodLounge does a pretty good job analyzing Sony's offerings.
See, the thing is, in order to be competition, someone has to actually want to buy the competing product. The VAIO Pocket left me totally unexcited from the moment I first saw it. I can't speak for everyone, of course, but I am a consumer in this market. I'd rather have my iPod, easily.
This "having to be desirable" is the same thing that makes Microsoft not much in the way of competition at this point either. Simply having tons of products doesn't attract consumers; they have to want a specific one. And it's not as if consumers are afraid of the iPod disappearing, given its huge success so far. So far, I haven't seen any competition for the iPod that would make me consider going that route. The iPod is priced at a premium compared to other music players, but it seems to be worth it to people. It's not a Macintosh-like price hike compared to the rest of the industry, especially if you believe, as I do, that Apple has come up with a much more attractive combination of dimensions, interface, weight, storage space, price, and battery life than anyone else so far.
Another thing that's different is that Apple seems to have a better strategy for staving off competition this round. They've sought patents on their user interfaces, and patents are likely to hold up better in court than their previous attempts, in which they attempted to apply copyright law to their interfaces. This will make it harder for competition to copy the attractive features of the iPod very closely.
And if competition heats up and the iPod starts losing market share because Microsoft or Sony finally comes up with something attractive, Apple still has opening up the platform as an option. Nothing precludes that at a later stage. Especially since they are starting from a very strong position in the market, and they control all aspects of the complete, integrated solution that consumers want.
Real Networks threatening to join with the competition was pretty funny, though. I can see Steve Jobs laughing now. "Oh no, please don't go associate your crappy brand and large, loyal user base with our competition! We'd love some of your brand poison over here!"
Update:
Wow, lots of comments.
Xirt/Matt: How right you are. I remember hearing the history of all that, but I'd forgotten the details of it all. In any case, however it happened, its effect was the same.
I'm no fan of the DMCA, but one of the few encouraging signs I've seen in this area is that the courts have protected reverse engineering. In Sony v. Connectix, the court found that the intermediate copies Connectix made while examining Sony's code were fair use, since otherwise the law would be indirectly outlawing reverse engineering by making any copies made while examining the code illegal.
A $50 iPod Killer: Given the cost of micro-drives, and the very limited number of suppliers, I imagine this device would not be competing in the same category. If Microsoft could actually make a comparable device that cheaply, that would be problematic. But what would have prevented Apple from doing whatever it is that makes it so cheap? I'm skeptical, but even if it's true, it would likely serve a different market segment (See HP/Dell posts).
Wezelboy: Yeah, that's what I'm saying. The closed platform isn't necessarily the mistake, despite what Rob Glaser and others say. It sure as heck hasn't stopped game consoles. Innovation is important, but so are network effects. If Apple's format gains traction, that makes their platform more attractive. Given how far Apple has come already, that has everyone else starting at a disadvantage. They're not invincible, but they have a good chance, and are in the best position they can possibly be in right now.
However, I hardly think iPod DJing is a major market segment. If they can make changes in software to accommodate it, it would probably be smart. But another player just for that? I doubt it would sell, even if every asshole who thinks he is a DJ bought two.
Monday, May 24, 2004
HP vs. Dell: How It's Done
A couple days ago I saw this article in the New York Times about HP and Dell competing in the printer business. This reminded me immediately of my previous post about HP and Dell competing in the PC market. This article is the exact opposite, as Dell is trying to gain traction on HP's home turf: printers.
My first thought was basically, well, at least Dell knows what they're doing. I think they're approaching from exactly the right angle: what are we good at, and how can we use that to make printers that are attractive to [certain] people? Dell, as everyone knows, is fantastic at logistics and operations management, and they can use that to lower costs and squeeze out better profits. They did it for PCs, and now they're trying to figure out how to apply their expertise to do it with printers. There's no reason, at this point, to think that they wouldn't be able to make interesting advances in this direction.
I don't want to ding HP too hard here, because they're the incumbent in this market, and they're basically gonna get taken down a few notches. (Third week of Econ 1: if there are profits in your industry, and free entry to the market, firms will enter to get a slice of them). So they'll say anything to make Dell's offerings seem less attractive, and right now, that message is "This is rocket science, and Dell's printers are gonna be crap." That's all they really can do. It may even be partly true, but as I pointed out in my previous post, there's no reason to think there isn't room for that in the market.
The really idiotic thing for Dell to do, analogous to what HP is doing in PCs, would be to declare that they're going to start spending billions of dollars on R&D in printing technology to create better printers than HP. That's not what they're good at, and it would be a huge waste of their money. By providing cheaper printers of reasonable quality, they can provide an equally valuable product, if only to a different group of people than those who would be attracted to HP's higher-quality offerings.
The New York Times article frames this as a battle of strategies, and the Slashdot posting that originally pointed me to it seemed to take the same standpoint. (Actually, they changed it to "Innovators vs. Copiers," but what do you expect from them). They both fail to understand that this is not a contest where one idea will win. These are different strategies for how to operate in a market. The idea of McDonald's ever winning or losing to a five-star restaurant is ridiculous, and no one would see the two as being in competition. People understand that different segments of the market have different desires (and often just different circumstances: many people eat at both restaurants at different times). Perhaps in very fast-growing markets someone can aspire to be all things to all people, but as growth naturally slows, that becomes harder and harder.
The Long-Term Prospects for KDE
This is basically a response to a comment Macneil left for me in an earlier post.
Basically, I'm also of the opinion that KDE's days are numbered, for a variety of reasons. His point about C++ is absolutely valid. C++ is a language that doesn't seem to get a lot of buy-in from Unix hackers, and since open source software is a sort of ecosystem, this makes it harder for others to use KDE's libraries. This is compounded by KDE's highly integrated nature, which makes it harder to get just a small piece of their code working in isolation. Since this game is defined by the amount of code written for your platform, these barriers are problematic.
Another problem I would point to is the difference between how KDE is run and how Gnome is run. Gnome has a formal management structure (the Gnome Foundation), with elected decision-makers running the show. KDE appears to have no such formal structure, and I imagine this makes it much harder for them to get difficult decisions made. Another benefit of the Gnome Foundation is that companies wanting to work with Gnome have someone to talk to that has some credibility. (Amusing incidents in Gnome Foundation history are when Miguel de Icaza, the founder of Gnome, failed to nominate himself for a seat on the board by the deadline, and asked for an exception (denied); also, Richard M. Stallman, the founder of GNU (the G in Gnome), has been repeatedly and overwhelmingly rejected by the voters).
Also, Gnome has better release management, as far as I can tell. I don't know if they pioneered it, but Gnome has time-based releases. That is, instead of targeting features for each release, they target a date for each release and ship with whatever they have. Given how well this works for Gnome, it's surprising to me that more projects don't do this (kernel! kernel!). I think this could be the biggest insight into how to make free software management work.
Why is it so great? Just look at what goes wrong and causes the kernel itself to be chronically delayed every single time. Features creep in, and then take longer than expected to stabilize, causing further delays. And hey, while we're delayed due to feature A, can I just stick in this one tiny little feature B over here in this other part? Might as well. If you know there's always another release just 6 months away, you don't have to stress so hard about shoving things into the next release. Slipping one cycle isn't the end of the world. Also, everyone gets to keep working on relatively new code, instead of the situation that exists in the kernel where the latest version of the kernel is drastically different from the last stable release, requiring a mess of backports. Gnome has finally gotten this right, and they're disciplined and make this work.
In fact, now that I think about this, the way time-based releases work is totally consistent with Eliyahu Goldratt's insights in Critical Chain. (A great summary is Joel Spolsky's review). By making releases time-based, they have decoupled the release from the code. The code for any given thing is still done When It's Done. But the holdup caused by any one piece of code won't stop everything else that is done from shipping.
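The decoupling can be shown with a toy model. The feature names and completion days below are made up purely for illustration: a feature-gated release waits for the slowest targeted feature, while a time-based release ships whatever happens to be done at each cutoff.

```python
# Toy model contrasting feature-gated and time-based release schedules.
# Feature names and completion days are hypothetical, purely for illustration.

features = {"featureA": 40, "featureB": 170, "featureC": 90}  # day each lands

# Feature-gated release: ships only when the slowest targeted feature is done,
# so one late feature (featureB) holds everything back to day 170.
feature_gated_ship_day = max(features.values())

def time_based_releases(features, cycle=60, horizon=240):
    """Ship every `cycle` days with whatever features are done by each cutoff."""
    return {
        cutoff: sorted(f for f, done in features.items() if done <= cutoff)
        for cutoff in range(cycle, horizon + 1, cycle)
    }

releases = time_based_releases(features)
# Day 60 ships featureA, day 120 adds featureC; featureB simply slips to the
# day-180 release instead of delaying the features that were already done.
```

Under the feature-gated model, featureA sits finished and unshipped for 130 days; under the time-based model it reaches users at the first cutoff, and the late feature only delays itself.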
Gnome's foundation also provides much-needed leadership for Gnome. For better or for worse, people involved in Gnome know what the goal is: a free desktop system for the average user. To this end, Gnome has made difficult decisions and ripped out options left and right, instead opting for sensible defaults and reasonable configurability, because that is what a non-technical user needs. This creates some problems, in that the programmers are not really the same as the users, but that's hardly anything to fear. KDE, on the other hand, thinks its ridiculous amount of options is great, and is sticking with high configurability.
Another factor in Gnome's favor is that, for whatever reason, it seems to have almost all the high-quality applications. The real stars of free desktop software, like Gimp, Evolution, and Gnumeric, are all built on the Gnome stack. I'm not really sure why this happened, but it is a factor. Furthermore, Gnome has chosen to associate itself with other heavyweight projects that aren't built on the same software stack but are nonetheless highly important: Mozilla. OpenOffice. KDE seems to have a version of everything Gnome has, including a web browser and office software. Yet somehow, none of it is very good. It all feels brittle and featureless by comparison, but that's just me. This is kind of a weird thing to have when the other major selling point of KDE is the ridiculous amount of configurability it offers. (You can configure the heck out of things so that your desktop is safe for your featureless apps).
There are two big distributions shipping KDE as the default, as far as I know: SuSE and Lindows. SuSE just got bought by Novell, which at the same time bought Ximian, a major Gnome company. It's not clear at this point whether Novell is going to switch away from KDE as the default on their distribution, but keeping it would be seriously schizophrenic behavior: the Ximian unit will continue to write Gnome software, and Novell itself seems to be writing software for Gnome as well, so it's unclear how long the KDE default can last. As for Lindows, I'm not sure how much of an impact it has, but I like to think it's pretty small.
I'd say things have tipped in Gnome's favor.
Basically, I'm also of the opinion that KDE's days are numbered, for a variety of reasons. His point about C++ is absolutely valid. C++ is a language that doesn't seem to get a lot of buy-in from unix hackers, and since open source software is a sort of eco-system, this makes it harder for others to use KDE's libraries. This is compounded by KDE's highly integrated nature, which makes it harder to get just a small piece of their code to work in isolation. Since this game is defined by the amount of code written for your platform, these barriers are problematic.
Another problem I would point to is the difference between how KDE is run and how Gnome is run. Gnome has a formal management structure (the Gnome Foundation), with elected decision-makers running the show. KDE appears to have no such formal structure, and I imagine this makes it much harder for them to get difficult decisions made. Another benefit of the Gnome Foundation is that companies wanting to work with Gnome have someone to talk to that has some credibility. (Amusing incidents in Gnome Foundation history are when Miguel de Icaza, the founder of Gnome, failed to nominate himself for a seat on the board by the deadline, and asked for an exception (denied); also, Richard M. Stallman, the founder of GNU (the G in Gnome), has been repeatedly and overwhelmingly rejected by the voters).
Also, Gnome has better release management, as far as I can tell. I don't know if they pioneered it, but Gnome has time-based releases. That is, instead of targeting features for each release, they target a date for each release and ship with whatever they have. Given how amazingly well this works for Gnome, it's amazing to me that more projects don't do this (kernel! kernel!). I think this could be the biggest insight into how to make free software management work.
Why is it so great? Just look at what goes wrong and causes the kernel itself to be chronically delayed every single time. Features creep in, and then take longer than expected to stabilize, causing further delays. And hey, while we're delayed due to feature A, can I just stick in this one tiny little feature B over here in this other part? Might as well. If you know there's always another release just 6 months away, you don't have to stress so hard about shoving things into the next release. Slipping one cycle isn't the end of the world. Also, everyone gets to keep working on relatively new code, instead of the situation that exists in the kernel where the latest version of the kernel is drastically different from the last stable release, requiring a mess of backports. Gnome has finally gotten this right, and they're disciplined and make this work.
In fact, now that I think about this, the way time-based releases work is totally consistent with Eliyahu Goldratt's insights in Critical Chain. (A great summary is Joel Spolsky's review). By making releases time-based, they have decoupled the release from the code. The code for any given thing is still done When It's Done. But the holdup caused by any one piece of code won't stop everything else that is done from shipping.
Gnome's foundation also provides much-needed leadership for Gnome. For better or for worse, people involved in Gnome know what the goal is: a free desktop system for the average user. To this end, Gnome has made difficult decisions and ripped out options left and right, instead opting for sensible defaults and reasonable configurability, because that is what a non-technical user needs. This creates some problems, in that the programmers are not really the same as the users, but that's hardly anything to fear. KDE, on the other hand, thinks its ridiculous amount of options is great, and is sticking with high configurability.
Another factor in Gnome's favor is that for whatever reason, it seems to have almost all the high-quality applications. The real stars of free desktop software, like Gimp, Evolution, and Gnumeric, are all built on the Gnome stack. I'm not really sure why this happened, but it is a factor. Furthermore, Gnome has chosen to associate itself with other heavyweight projects that aren't built on the same software stack but are nonetheless highly important. Mozilla. OpenOffice. KDE seems to have a version of everything Gnome has, including a web browser and office software. Yet somehow, none of it is very good. It all feels brittle and featureless by comparison, but that's just me. This is kind of a weird thing when the other major selling point of KDE is the ridiculous amount of configurability it offers. (You can configure the heck out of things so that your desktop is safe for your featureless apps.)
There are two big distributions shipping KDE as the default, as far as I know: SuSE and Lindows. SuSE just got bought by Novell, which at the same time bought Ximian, a major Gnome company. It's not clear at this point whether Novell is going to stick with KDE as the default on its distribution, and straddling both camps like this is seriously schizophrenic behavior. Since the Ximian unit will continue to write Gnome software, and Novell seems to be writing software for Gnome as well, it's unclear how long the KDE default will last. As for Lindows, I'm not sure how much of an impact it has, but I like to think it's pretty small.
I'd say things have tipped in Gnome's favor.
A Very DC Night
On Friday night, Claire and I were driving in my car, trying to figure out how long her daily running route is, and I finally snapped. As we were heading west, near Treasury, we were trying to cross 15th Street, but weren't able to. Why not? Well, every time the opposing traffic (one-way) would get a green light, they'd go until they'd get so backed up that people who entered the intersection couldn't leave it, blocking our entire green light. Then their direction would get another green light, everyone would clear out, and the next group of asshats would cleverly do the same thing, probably thinking they were the first to do it. We sat there stuck like this for 3 or 4 rounds, growing increasingly pissed off.
Finally, some lady pulled this stunt and wound up right in front of us. I've always been taught never to honk at anyone, unless you want your head blown off. But as the front of our column, which was filling up with cabs and other poor victims, I finally lost it (I've been on the east coast too long). I leaned on the horn, unyieldingly. Now, supposedly this car has a super-strong horn. It was specially installed. In my mind, this thing was going to blow the windows off her car. It wasn't actually that loud, unfortunately. But it was constant and firm. This lady didn't turn her head at all, as maybe 30 to 45 seconds passed. Of course, she didn't abort into the totally empty lane on the right which would have forced her to turn right, as would have been proper punishment, but we somehow made it through that time.
Then, it started spattering little bits of rain. Within about 2 or 3 minutes, this had turned into the most intense rain. I discovered a new speed for my windshield wiper. It was almost impossible to see, and the streets seemed to already be covered in inches of water. This was unfortunate, since Claire had to run inside to her apartment to get some stuff, and she got soaked in just a few moments. She came back outside, covered in a rain jacket, and we went back to my place.
By the time we got back to my place, of course, the rain had basically let up.
Sunday, May 23, 2004
Notes on Shrek 2
Shrek 2 is like Osama in reverse. It starts off being really bad, and then rights itself and gets on track.
The first, oh, third of it is just awful. Everything seems so awkward, like these animated characters are actors with no chemistry, or something. It's like the actors are panicking inside over how bad the movie is at that point. Maybe I'm projecting. In any case, it's just not funny. All the jokes fall flat, and the characters seem off, as if it is a sitcom where a character is suddenly played by a new actor or something. It was during this time that I realized this movie wouldn't have anything as sweet as the story of Fiona and Shrek in the first one. It just didn't have what it takes, and I was right.
However, a third of the way through, about the time when you first see Puss-in-Boots, it gets really hilarious anyways. The laughing that broke out at this scene was like a sigh of relief for the audience. And, with that, it was over the hump.
The love story that develops is really lame. Since Shrek and Fiona married at the end of the first one, that kinda limits their options for romantic tension, but obviously, there has to be some doubt about the future of their relationship. It's pretty forced, but if you don't think about it too hard, it feels OK. In fact, not thinking about things too hard is pretty much required. A (lame) gag is made at the beginning about how long it takes to go to the land of Far Far Away, yet later in the movie, Shrek's friends manage to get there in just a few hours, it seems.
Another thing that sucked was the fact that Shrek 2 is a musical. Totally forgettable music and numbers, but it's definitely there. I guess it helps fill up some of those vast oceans of time that they had to fill.
The new character, Puss-in-Boots, pretty much steals the show (I'm amazed Antonio Banderas was able to do some of those lines without cracking up; must have taken lots of tries). Which is kind of a shame, because Donkey had filled that role in the first movie, and now I can't remember a single funny thing he did in this one, though I do remember a lot of dumb gags that weren't funny. Most of them involve Donkey singing contemporary songs.
It's very funny, but it isn't anywhere near the grand-slam that Shrek was, with excellence across all the categories.
(22:38:08) Shawn: how many babylons?
(22:39:30) Me: How many Babylons is the max?
(22:40:50) Shawn: I think it was five on the self-made
(22:41:43) Me: Hm. Then its about a 3 babylons, aside from how funny it is. However, considering the funniness, it gets edged up to 4.
(22:42:00) Shawn: that's a very SMC rating
Notes on Osama
This movie starts right off, and it's just riveting. The filmmaking is obviously by the seat of the pants (and forgivably so), but the material is so powerful that it doesn't even matter.
The beginning is a highly stressful Battleship Potemkin, Odessa Steps-type sequence where the Taliban show up and open fire on a marching group of burka-wearing women protesting for jobs. From there, it just gets worse. Our main character, who comes to be known as Osama, is a young girl whose only family is her mother and grandmother. Her mother is a doctor, but since the Taliban are in power, she cannot get any work. With no income and no food, they cut off the young girl's hair and pass her off as a boy so she can get work.
Basically, this movie just didn't work for me. The setting is horrible, as are the events depicted. But that doesn't make it a good movie. Some perspective is necessary, since it is obvious that this film was made with limited resources. Still, what's missing here isn't money, but ideas. The film is basically just a string of bad things happening to this poor girl. It's terrible, but that's not good enough. About a Boy makes (comparatively minor) bad things that happen to Marcus heart-breaking, because of who Marcus is and his attitude. Osama is missing this character and perspective, and riding an interesting, terrible setting only gets you so far.
Saturday, May 22, 2004
Why Unix Hackers Prefer the Microsoft Solution
I must say, I'm kind of shocked that we find ourselves in this discussion at all. Java is a language technology created by a company with roots in Unix, and given away for free five years before C# showed up on the scene. Why are programmers on Linux seriously considering adopting a technology that is rooted in the totally alien design philosophies of the Microsoft backwards-compatibility stack? I have some theories, but the theme would be: .Net actually fits the free software development model better.
How can that be? Well, let's look at some of the things .Net lets you do. For one thing, .Net solves a bigger problem than Java does, in that it is a common language environment. Many languages can run on .Net, and call each other's code trivially. For any single project, this doesn't seem like a huge draw (despite Microsoft's initial pie in the sky proclamations about writing programs in five different languages due to the preferences of the individual developers, and what not). But take a look at the actual Gnome CVS repository, and look at how many languages are represented there. The core platform is all C, but there are bindings for C++, Java, C#, Perl, Python, and tons of other languages. Many peripheral components of Gnome are written in languages other than C as well; I believe some of their games are written in Lisp (with bindings), for example. Being able to tie all that code together into a single coherent framework would be wonderful for a free software project like Gnome. The strong need to mix languages together is a problem of open source development way more often than it is for proprietary development.
Now technically, other languages can be compiled to the Java virtual machine, but this is rarely a clean fit, and often involves major compromises in speed or functionality. .Net doesn't always fit perfectly either, but it certainly gets a lot farther, due to being designed with that in mind. (Besides, Sun seemed to say, why would you want to write any part of your program in something other than Java?)
Also, going along with the bindings, .Net makes it much easier for developers to invoke native code through its P/Invoke mechanism. It's trivial. The comparable mechanism in Java, JNI, is a huge pain in the ass, and involves writing stub functions in C for everything. There's no reason (in my understanding) that a similar mechanism couldn't be written for Java, but it hasn't happened as far as I can tell. Think about this from the perspective of the Gnome project: they want to leverage as much existing code as possible by binding it to Java or C#. It's much easier for them to create bindings for C# given this situation.
What's even worse is that Sun appears to have wanted this to be a hard thing to do. The original plan was for Java to be the universal language: "write once, run anywhere." Ten years later, it doesn't seem we are anywhere near to that. It was probably not a realistic goal to begin with. It's not too late to back away from this, but Sun still seems to make decisions based on this attitude, and it costs them developers. The most seamlessly cross-platform language I've ever seen is Python, and Python makes it easy for you to invoke native code, and easy for you to write platform-specific code. Python says, "OK, this stuff isn't cross-platform, but use it if you still want to do that." This is a reasonable thing for most developers to want to do at some point. It's especially reasonable for open source programmers to want to do this.
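Python's escape hatch here is ctypes, which lives in the standard library and works very much in the P/Invoke style: declare the signature, call the native function, no C stub files required. A minimal sketch (the libc abs call is just an illustrative example):

```python
import ctypes
import sys

# P/Invoke-style native calls with no JNI-style C glue code.
# Load the C runtime in a platform-aware way -- exactly the kind of
# "not cross-platform, but easy when you want it" code discussed above.
if sys.platform == "win32":
    libc = ctypes.CDLL("msvcrt")  # Microsoft C runtime on Windows
else:
    libc = ctypes.CDLL(None)      # symbols of the running process on Unix

# Declare the native signature, then just call it.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-42))  # -> 42
```

Compare that with JNI, where the same call would require writing and compiling a C stub function against the JNI headers before Java could touch it.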
This "Java only, everywhere" attitude has also extended to the virtual machine. The virtual machine hasn't changed at all since Java first came out. That stability is not a bad thing in itself, but it has created problems. For example, Sun finally has an implementation of generics in Java, but it is a hack: the virtual machine itself isn't aware of them, because Sun isn't willing to make incompatible changes to the virtual machine. That would be fine if generics didn't work so much better in C#, where the virtual machine is aware of them.
As I said before, many languages do target the JVM, but it's not like Sun has welcomed them or made anything easier for them. This is too bad, since it turns out there's a lot of value to be had here. And I'm convinced that if Java compiled seamlessly to either of two JVMs (old or new), there wouldn't be much of a problem with making incompatible changes. It's easy to see how this could be done so it was a simple recompile for developers. Yet Sun won't change the JVM. I don't understand it; the value is in the source code, not the compiled code.
Furthermore, I can see yet another JVM-related issue. Microsoft's .NET virtual machine was designed specifically with just-in-time compilation as the main use case. I haven't looked into the specifics, but it's not hard to imagine how this could result in real performance gains. Furthermore, it supports caching of compiled code, which Sun has only now gotten around to implementing in Java 1.5. This obviously isn't something that Java's main developer base (server-side programmers) seems to need, since their programs are invoked rarely and run forever. But for desktop software, this results in huge wins for startup time and runtime performance. It's not like people want to be chugging along with Swing, now is it?
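Neither the .NET cache nor Java 1.5's has a direct Python equivalent, but Python's bytecode cache is a loose analogue of the same startup-time trick: compile once, reuse the cached form on later runs instead of recompiling. A hedged sketch using only the standard library:

```python
import pathlib
import py_compile
import tempfile

# Loose analogue of compiled-code caching: Python byte-compiles a module
# once and reuses the cached .pyc file on later imports, skipping the
# recompilation step and cutting startup cost.
src = pathlib.Path(tempfile.mkdtemp()) / "hello.py"
src.write_text("def greet():\n    return 'hi'\n")

cached = py_compile.compile(str(src))  # writes the cached bytecode file
print(pathlib.Path(cached).exists())   # -> True
```

The win is exactly the one described above: a desktop app that starts often pays the compile cost once, not on every launch.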
I think the last problem is the most obvious one for the open source world: licensing. I think Sun would probably be way less hostile to Gnome than Microsoft, but Sun hasn't exactly been the most open. The source for Java is available, but it isn't free in the way that open source programmers tend to think of it. You can reimplement it freely, but you need to pass Sun's expensive compatibility tests to be fully kosher. This is obviously unworkable for free software. Microsoft, on the other hand, submitted the core of .Net to a rubber-stamp standards body, giving the appearance of greater openness. (In reality, no one else seems to participate in steering .Net.)
I think open source developers are being a little too prickly on this issue. Yes, it's a shame that Sun won't let you modify Java freely, but they do give very liberal terms. It's very open, but has some restrictions on what you can do with it (like the GPL, but with different actual restrictions). If Sun would just create an exemption so that open source groups could run the compatibility tests for free, I think people should try to get over the rest. As it is, Sun has created a huge problem for adoption by free software groups (the testing requirements), and they don't seem willing to tear it down. If it weren't for that, I imagine there would still be objections, but I think that's the only really valid one.
Looking back at the things I think make .Net more attractive than Java, I don't see anything insurmountable. It's not too late to make Java a great platform for free software. A P/Invoke-like system could definitely be implemented, Sun just won't adopt it. That sort of thing hasn't stopped Mono. An entire open source oriented stack could be built, as Mono has built its alternative stack. Some effort could be spent fixing up GCJ or some other virtual machine to be viable, or God forbid, Sun could help this process along by easing some of their restrictions for their allies against Microsoft (unlikely, I know).
Shame on Sun for getting this wrong for so long. Their users have spoken, for years, and Sun has stubbornly refused to hear them.
Running to Mono to get around this, of course, is jumping out of the frying pan and into the fire.
Friday, May 21, 2004
Gnome Politics
In my last post, I mentioned the debate going on within the Gnome community about what to use for a high level language. Since then, there have been a lot of postings from both sides of the issue. I think the two sides aren't really seeing eye to eye, and I'm not convinced that it's entirely innocent, either.
On the one hand, you've got pro-Mono people. This side of the debate often takes a kind of "Oh, this debate AGAIN?" attitude. "Why are we wasting time discussing this? We could just be writing code (with Mono)." This side also seems to feel that they are being told not to write code in Mono, which isn't, I believe, what they are being told.
On the other side, there's a concern that we are in a bad situation, where people want high-level languages, and it will be a disaster when different camps settle on different ones, since they won't interoperate nicely. As Havoc said, this is what will happen if no decision is made. But the issue isn't whether or not to use Mono, it's whether or not Mono should ever become a part of the core of Gnome. If the Gnome project were to decide now, one way or another, whether it was ever going to include Mono, it would allow people to make choices with more clarity. People who wanted to write things in Mono would be perfectly free to do so, just without expecting that their code will ever get shipped with the default Gnome. As it is now, people are making choices as if what they want to have happen is what will happen, and it's going to end in tears for someone.
However, this isn't a debate Mono fans want to have right now (and Nat Friedman pretty much said as much). Whatever the reason for this oddly SCO-like delaying instinct, it does have another effect. By delaying the decision, Mono benefits from this uncertainty with respect to the Gnome project's decision; people will continue to write more and more software with Mono that is meant for the Gnome eco-system. When it does come time to make a decision, there will be so much Mono-based software that it will be hard for Gnome to turn it all down.
People who are against the adoption of Mono should be aware of that.
On the other hand, Red Hat has been accused of "Stop Energy" tactics. I think this is largely bogus, because Red Hat is simply being careful and wise, and trying to win people's hearts and minds on this issue, instead of just plunging ahead with some course of action as some others have. What possible reason could Red Hat have for wanting actual stasis in the Gnome code? However, there is an effective counter that anti-Mono people could play that doesn't involve "Stop Energy." And I imagine it would be much more successful than trying to win this through debate.
The counter is simply to start creating, for Java (or whatever platform they would otherwise propose), the type of software infrastructure that currently makes Mono so attractive to Gnome hackers. The Mono hackers did a huge amount of work getting Mono so solid, and so well integrated with the Gnome environment; similar work was never done for Java. I believe there are factors that made .Net more attractive to those hackers than Java, but Java is probably good enough if it were given a bit of work. (If Mono is not chosen, it'll be a huge shame that the developers didn't spend that time fixing up current open-source Java projects, optimizing them, and creating bindings.) In any case, it's not at all too late for Red Hat or Sun (or anyone else, but they'd be the most obvious proponents) to start laying this foundation. Ximian spent a lot of money developing Mono, and then developing interesting open source software with it. If Red Hat and Sun want to fight effectively against having Gnome integrated with Mono, they should do the same.
Updated: I just realized that when I say that I think "it's not innocent," it makes it sound like I think these are bastards trying to hurt free software for their own gain, or something. I don't think that's the case at all. Their software's all open source, and it's all great. The debate is about whether or not the thing they wrote should be given away for free as part of another project. I think that's a fantastic thing to do. Really, I think they think they're right, and want to win here. For various reasons from technical to yes, financial. Nothing wrong with that. In particular, I've always been impressed with Miguel de Icaza. I once wrote some code for GTK, and he noticed my patch and wrote me to tell me to copyright it in my name, and get credit in the authors file. I was impressed that he'd even noticed, but I was also impressed that he then sent me a follow-up email taking some interest in who I was. Unfortunately, I just think he's wrong about the long-term wisdom of this strategy. I guess the good thing here is that if we can choose the right thing to do here, whichever one that turns out to be, even the people who are on the losing side of the debate should benefit.
On the one hand, you've got the pro-Mono people. This side often takes a kind of "Oh, this debate AGAIN?" attitude: "Why are we wasting time discussing this? We could just be writing code (with Mono)." This side also seems to feel that they are being told not to write code in Mono, which isn't, I believe, what they are being told.
On the other side, there's a concern that we are in a bad situation, where people want high-level languages, and it will be a disaster when different camps settle on different ones, since they won't interoperate nicely. As Havoc said, this is what will happen if no decision is made. But the issue isn't whether or not to use Mono; it's whether Mono should ever become a part of the core of Gnome. If the Gnome project were to decide now, one way or the other, whether it was ever going to include Mono, it would allow people to make choices with more clarity. People who wanted to write things in Mono would be perfectly happy to do so, without expecting that their code would ever ship with the default Gnome. As it is now, people are making choices as if what they want to have happen is what will happen, and it's going to end in tears for someone.
However, this isn't a debate Mono fans want to have right now (and Nat Friedman pretty much said as much). Whatever the reason for this oddly SCO-like delaying instinct, it does have another effect. By delaying the decision, Mono benefits from the uncertainty surrounding the Gnome project's decision; people will continue to write more and more Mono software that is meant for the Gnome ecosystem. When it does come time to make a decision, there will be so much Mono-based software that it will be hard for Gnome to turn it all down.
People who are against the adoption of Mono should be aware of that.
On the other hand, Red Hat has been accused of "Stop Energy" tactics. I think this is largely bogus, because Red Hat is simply being careful and wise, and trying to win people's hearts and minds on this issue, instead of just plunging ahead with some course of action as some others have. What possible reason could Red Hat have for wanting actual stasis in the Gnome code? However, there is an effective counter that anti-Mono people could play that doesn't involve "Stop Energy." And I imagine it would be much more successful than trying to win this through debate.
The counter is simply to start creating, for Java (or whatever platform they would otherwise propose), the kind of software infrastructure that currently makes Mono so attractive to Gnome hackers. The Mono hackers did a huge amount of work getting Mono so solid, and so well integrated with the Gnome environment; similar work was never done with Java. I believe there are factors that made .Net more attractive to those hackers than Java, but Java is probably good enough if it were given some work. (If Mono is not chosen, it will be a huge shame that those developers didn't spend that time fixing up current open-source Java projects, optimizing them, and creating bindings). In any case, it's not at all too late for Red Hat or Sun (or anyone else, but they'd be the most obvious proponents) to start laying this foundation. Ximian spent a lot of money developing Mono, and then developing interesting open source software with it. If Red Hat and Sun want to fight effectively against having Gnome integrated with Mono, they should do the same.
Updated: I just realized that when I say that I think "it's not innocent," it makes it sound like I think these are bastards trying to hurt free software for their own gain, or something. I don't think that's the case at all. Their software's all open source, and it's all great. The debate is about whether or not the thing they wrote should be given away for free as part of another project. I think that's a fantastic thing to do. Really, I think they think they're right, and want to win here, for various reasons from technical to, yes, financial. Nothing wrong with that. In particular, I've always been impressed with Miguel de Icaza. I once wrote some code for GTK, and he noticed my patch and wrote me to tell me to copyright it in my name and get credit in the authors file. I was impressed that he'd even noticed, but I was also impressed that he then sent me a follow-up email taking some interest in who I was. Unfortunately, I just think he's wrong about the long-term wisdom of this strategy. I guess the good thing is that if we choose the right course, whichever one that turns out to be, even the people on the losing side of the debate should benefit.
Wednesday, May 19, 2004
Supplier Power
Surprisingly, someone has finally said something sane about integrating Mono into Gnome, and I couldn't be more pleased. To be fair, Havoc Pennington has always been very reasonable about this, but he hasn't taken the hard line that Seth Nickell finally has in his blog.
To summarize, the Gnome project has been considering what its policy will be with respect to the adoption of a high-level language. The candidates for this, basically, are Microsoft's C#/.Net (through the open-source Mono implementation) and Sun's Java, with Python as a distant third. The debate is already coming a bit late, as Mono has started getting software written for it, and a major distributor (Novell) seems to be fond of it. Havoc Pennington simply pointed out that Gnome needed some of that sort of technology, and that getting it from Microsoft might pose problems. The subsequent debate was never resolved, but people seemed surprisingly open-minded about the possibility of creating a huge dependency on Microsoft's intellectual property. Miguel de Icaza (the creator of the Mono .Net implementation) has argued very passionately that Gnome should actually trust that they can depend on their own implementation of .Net. Seth Nickell has come along and pointed out that all technical issues aside, a "trust us" from Microsoft isn't worth very much, and that's essentially what de Icaza is proposing the Gnome project accept.
I agree with Seth Nickell, and I intend to argue at an even higher level than he does. Let's assume, for now, that Microsoft's licensing terms are actually acceptable, and mix well with the rest of the free software universe without any fears of legal trouble initiated by Microsoft. It would still be a bad idea to choose .Net as the high-level language for Gnome.
Let's look at this from a competitive strategy standpoint. According to Michael Porter, professor of business and expert on competitive strategy, there are five forces of competition: buyer power, supplier power, threat of substitution, intra-industry rivalry, and the threat of new entrants to a market. As things stand right now, Microsoft is a competitor in Gnome's market (that is, it's an industry rival). Gnome is also a competitor in Microsoft's market. (A competitor, just to be clear, is anyone or anything that can affect your business. Assume for the moment that Gnome is a business; although not in the traditional sense, it does want to compete and win in a market). The legal aspect is just one way for them to mess with your life.
So in essence what is being proposed is to make Gnome subject to Microsoft's supplier power in addition to its already strong competition in industry. I hope it's obvious why depending on your biggest rival as a supplier is not a great idea. It doesn't even need to involve any threat of legal attacks. Your interests are fundamentally not aligned, and when your supplier is a monopolist in the good being supplied, as Microsoft is the only source of .Net licensing, you are handing over a lot of supplier power.
De Icaza seems to argue that this hand-over of power is inevitable. This is utter madness.
One point he makes is valid: it is undesirable to have Linux locked out of a platform that becomes dominant. But I disagree that this is inevitable. Not every cool thing Microsoft has ever pushed has become widely used, for various reasons. We haven't yet seen how people will react to the Avalon technology (a component of .Net), or how likely it is to become a must-have component of a desktop system. It certainly won't happen if people are unwilling to adopt it, and I'm not convinced that the ease it offers in creating simple applications will necessarily translate into useful applications. In fact, I think there's a very good possibility that corporations will look at the technology, think it's nice, do a cost-benefit analysis (benefit: easier programming; cost: Microsoft is now a vital supplier), and take a pass. This has been happening in recent years with all sorts of Microsoft initiatives. (Having a Linux implementation of this technology, however, might in fact make corporations more likely to wade in. His suggestion might actually cause the very situation he is trying to maneuver around to become a reality. That doesn't mean hedging your bets and having an implementation isn't worthwhile, but that's a very different thing).
But this is all a bit of a digression, because having a free-software implementation of Avalon is different from making Gnome depend on .Net. If .Net is really so legally clear, Mono can always be installed on a Linux system to interoperate. As he said, that's great for Novell's Linux distribution. This isn't the same thing as building Gnome (or parts of it) on Mono.
Furthermore, doing so would risk alienating a major ally of Gnome: Sun. De Icaza points out that Sun has patents on Java, and argues that Sun is just as fickle and threatening as Microsoft, but this is pure sophistry. To argue that Sun, which contributes programmers and code to Gnome, and which ships Gnome on its workstations, is just as likely to attack Gnome (its own supplier!) is ridiculous. Yes, the possibility cannot be ruled out, but let's be realistic here. Furthermore, to suggest that they should ship their competitor's major product is asinine. Gnome and Sun have a shared competitor, and gain strength from each other's help. The relationship is not as healthy and reciprocal as many in the Gnome community would like, but you have to work with the choices that are in front of you. (Microsoft, by the way, pays no programmers to work on Gnome, adds no code to Gnome, and has no dependency on Gnome at all. Oh yeah, and they see it as competition to be destroyed). Sun, for their part, hasn't exactly handled this whole thing well, and the progress Microsoft has made with .Net is largely their fault. It was their game to lose when Microsoft entered, and though they haven't lost, they have lost ground. For one thing, Microsoft is willing to evolve the virtual machine, while Sun isn't. The value people have created with Java isn't in the binaries; it's in the source code.
All that said, I do have a lot of sympathy for Mono and de Icaza's thinking. I do think it is a better technology than the Java language stack, and I wish it could be used. I sympathize with the geeks who want to just use the best technology. However, as much as we'd all like it if competitive forces weren't an issue, Gnome can't pretend that Microsoft doesn't intend to compete with them. That means geeky considerations have to sit in the back seat, and some solid business strategy has to dominate these decisions.
Monday, May 17, 2004
Notes on Troy
I went into this screening absolutely horrified, for moments before, as we stood in line in front of the auditorium, I saw something gross. There were a couple of guys in line right behind us, and after taking a quick look around (apparently the fact that people were watching didn't deter him), one of the guys proceeded to reach a hand deep into the other's pants and fiddle around with something in there for a moment. This was very clearly inside the pants and inside the boxer shorts as well. My group realized that I was staring at something agape, and I told them I'd have to tell them in a second. Claire had seen it as well, and as we entered the theater, I told her that I thought you were supposed to wait for the movie to start for that kind of thing. As she said, that's just not acceptable public behavior.
Also, for some reason, I looked back at the projection room before the movie started, and noticed a large orange LED display over the projector window which said, backwards, "Welcome to Rear Window. Please adjust your reflectors." We were kinda baffled by this, until I looked back during the movie and noticed that the dialog was being displayed backwards on it. I figured it was an accessibility feature.
Troy is very good at being what it is. That is, given that it's an almost three hour long action film, it's very polite about giving you what you want, and only what you want. Instead of promising for hours and hours before finally delivering a battle that can't possibly live up to expectations, the movie delivers plenty of action from the beginning to the end.
For example, one thing I thanked the heavens for was when Achilles was talking with his mother about whether he should go fight in Troy. His mother prophesies that if he does, he'll be famous for all eternity, but she'll never see him again. I braced myself for a boring five minutes of agonizing and goodbyes, with a possible last-second change of mind. Instead, I was delighted when, after about six seconds of Brad Pitt looking as thoughtful as he can, it cut immediately to Achilles standing on a ship, one of thousands, crossing the ocean.
To understand the movie's pacing, let me give you an example of how the action progresses. The movie opens with a brief but spectacular battle sequence in which Achilles single-handedly wins a war. Then it skips to the celebration dinner (just as it's ending), at which point Paris asks Helen to come back to Troy with him. Then it cuts to Paris, on a boat bound for Troy, revealing what he's done to his brother Hector. It's pretty brisk, and by keeping it that way, the movie probably amplifies the emotional impact of the later scenes that require it (Though Peter O'Toole really helped out here, and did most of the heavy lifting). This isn't to say that it doesn't make you groan, it just knows what it's good at and tries to focus on that, and that makes it harder to hate the movie. It also ends very politely. After the Trojan horse sequence, when Achilles dies, the movie doesn't outstay its welcome, telling you what happened to everyone. It just ends gracefully a few moments later. Sure, you don't know for sure what happened to the less important characters you don't really care about, but their fates were sufficiently hinted at earlier. The movie definitely does not feel three hours long. I remember looking at my watch and being surprised that there was only 40 minutes left. Where did all the time go?
Especially since what it's good at, it's really good at. The most suspenseful scene I've seen in a while is the battle between Achilles and Hector. Now, I knew enough about The Iliad to know that Achilles kills Hector. Yet during this battle, I felt a rush of adrenaline, and was on the edge of my seat. Not so much because I believed that Hector would win (although I considered the possibility, since critics had mentioned major changes to the story), but because the battle sequence is so well choreographed. One of the things that I hated about Gladiator was that for all the cool action you should be watching, it's so poorly edited together that you can't get a sense of the space of the fight. You can't get a coherent idea of who is standing where and what they are doing. The sequences were like visual gibberish, and it was very frustrating. In contrast, the battle between Hector and Achilles doesn't feel choreographed, with swords magically moved to parry blows before the blow has begun, or anything like that. Hector is pressed to the edge of his fighting ability, and you feel the danger he's in. Achilles, as he dodges easily and moves aggressively like some sort of fast animal, feels like a highly dangerous, incredibly angry killing machine, and you feel bad for Hector for having to face something so terrifying.
Similarly, I really loved the scene where Paris is fighting the gigantic Menelaus, and you see it through his point of view. With his helmet blocking any peripheral vision, and his breath echoing in the helmet, all you see is the gigantic king pounding on you from various directions, and you realize how scary such a battle would actually be.
I thought the special effects were pretty damn good too. In the beginning fight scene, I detected something really obviously fake with Brad Pitt, but it all happened so fast, I couldn't really tell exactly what it was they did. The ships looked pretty fake too. But the thousands of soldiers looked really good, I thought. If you really focused on one soldier, and watched his legs, you'd probably think they looked mechanical. But overall, if you don't do that, it looked quite convincing. Instead of looking like an entirely computer-generated world, like in the last battle of The Return of the King, say, where the lighting doesn't look quite right, and everything is too dark and cartoony, this all looked like it actually was in bright daylight.
Other thoughts:
One thing I couldn't help but think about during the movie was the major logistical problems that wars pose. Even today, the biggest problems in our military are largely logistical. We're having a hard enough time moving 200,000 of our troops and their equipment, and the replacement parts for that equipment, across the sea and keeping them there (and fed and oiled) for a year. Just imagine trying to move 50,000 soldiers across the sea and keep them fed for 10 years on a hostile shore, as in The Iliad, in 1200 BC no less. This in itself makes the whole thing very unlikely. Where did they get food? How did they repair their weapons and armor? Clothes? It's not like they were on friendly territory, or could move very far inland.
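Just for fun, the scale of that supply problem is easy to sketch with some back-of-envelope arithmetic. Every figure below (the ration size, the ship capacity) is a rough guess for illustration, not historical data:

```python
# Rough supply math for a 10-year siege by 50,000 soldiers.
# The ration and ship-capacity figures are guesses, not historical data.

soldiers = 50_000          # army size mentioned above
years = 10                 # length of the siege
ration_kg_per_day = 1.5    # assumed grain ration per soldier per day
ship_capacity_kg = 20_000  # assumed ~20-tonne cargo capacity per ship

days = years * 365
total_food_kg = soldiers * ration_kg_per_day * days
shiploads = total_food_kg / ship_capacity_kg

print(f"Food needed: roughly {total_food_kg / 1000:,.0f} tonnes")
print(f"That's about {shiploads:,.0f} shiploads of grain alone")
```

Even with these generous assumptions, that's over ten thousand shiploads of grain, before you count water, fodder, weapons, and clothing. It's easy to see why the ten-year timeline strains belief.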
The whole thing reminded me of the Siege of Antioch during the First Crusade, in 1097. Antioch was a walled city, like Troy, and so the crusaders basically pulled up to the gates and hung out there (of course, in those days, the state of the art in long-distance military planning and logistics was basically having everyone agree on where they were going, and then finding their own way there, stealing whatever they could on the way). Both sides proceeded to starve in this stalemate. As the situation got more and more desperate, the crusaders learned that a gigantic Turkish army was marching to Antioch's relief, and only days away. The crusaders managed to get themselves inside the city (this was either, depending on whose account you believe, due to trickery, bribery, or divine intervention). Of course, those inside the city had been starving just as much, so now the crusaders found themselves besieged inside the city they had just conquered (and ravaged and cannibalized), and there was still no food. Obviously, this didn't last anywhere near 10 years, but this real-life version of The Iliad is still fascinating.
Another thing I thought was cool to see was the soldiers of Achilles creating a wall of their shields, as the Spartan army was known to do. Some informal searching suggests this may have been an anachronism, as the Greek army doesn't seem to have been very organized at the time Troy is supposed to take place.
However, while searching for that, I found an interesting page. After the movie, Mary pointed out that it would have been really hard to coordinate a battle in all that chaos (another logistics problem, of course). It's a great question; how did they coordinate and distribute orders in the middle of a battle in those times? Apparently the commander of each phalanx would always stand in the right-most position of the front line, so each soldier would know where to look to get orders. I don't think they did that in the movie, though. Instead, the commanders appear to be on horseback, which would be another good method to know where your commander is, if it weren't for the fact that I think it's also an anachronism.
Also, for some reason, I looked back at the projection room before the movie started, and noticed a large orange LED display over the projector window which said, backwards, "Welcome to Rear Window. Please adjust your reflectors." We were kinda baffled by this, until I looked back during the movie and noticed that the dialog was being displayed backwards on it. I figured it was an accessibility feature.
Troy is very good at being what it is. That is, given that it's an almost three hour long action film, it's very polite about giving you what you want, and only what you want. Instead of promising for hours and hours before finally delivering a battle that can't possibly live up to expectations, the movie delivers plenty of action from the beginning to the end.
For example, one thing I thanked the heavens for was when Achilles was talking with his mother about whether he should go fight in Troy. The mother prophesizes that if he does, he'll be famous for all eternity, but she'll never see him again. I braced myself for a boring five minutes of agonizing and goodbyes, with a possible last-second change of mind. Instead, I was delighted when after about 6 seconds of Brad Pitt looking as thoughtful as he can, it cut immediately to Achilles standing on a ship, one of thousands, crossing the ocean.
To understand the movie's pacing, let me give you an example of how the action progresses. The movie opens with a brief but spectacular battle sequence in which Achilles single-handedly wins a war. Then it skips to the celebration dinner (just as it's ending), at which point Paris asks Helen to come back to Troy with him. Then it cuts to Paris, on a boat bound for Troy, revealing what he's done to his brother Hector. It's pretty brisk, and by keeping it that way, the movie probably amplifies the emotional impact of the later scenes that require it (Though Peter O'Toole really helped out here, and did most of the heavy lifting). This isn't to say that it doesn't make you groan, it just knows what it's good at and tries to focus on that, and that makes it harder to hate the movie. It also ends very politely. After the Trojan horse sequence, when Achilles dies, the movie doesn't outstay its welcome, telling you what happened to everyone. It just ends gracefully a few moments later. Sure, you don't know for sure what happened to the less important characters you don't really care about, but their fates were sufficiently hinted at earlier. The movie definitely does not feel three hours long. I remember looking at my watch and being surprised that there was only 40 minutes left. Where did all the time go?
Especially since what it's good at, it's really good at. The most suspensful scene I've seen in a while is the battle between Achilles and Hector. Now, I knew enough about The Illiad to know that Achilles kills Hector. Yet during this battle, I felt a rush of adrenaline, and was on the edge of my seat. Not so much because I believed that Hector would win (although I considered this possibility due to critics mentioning major changes in the story), but because the battle sequence is so well choreographed. One of the things that I hated about Gladiator was that for all the cool action you should be watching, it's so poorly edited together that you can't get a sense of the space of the fight. You can't get a coherent idea about who is standing where and what they are doing. The sequences were like visual gibberish, and it was very frustrating. In contrast, the battle between Hector and Achilles doesn't feel choreographed, with swords magically moved to parry blows before the blow has begun, or anything like that. Hector is pressed to the edge of his fighting ability, and you feel the danger he's in. Achilles, as he dodges easily and moves aggressively like some sort of fast animal, feels like a highly dangerous, incredibly angry killing machine, and you feel bad for Hector for having to do something so terrifying.
Similarly, I really loved the scene where Paris is fighting the gigantic Menelaus, and you see it through his point of view. With his helmet blocking any peripheral vision, and his breath echoing in the helmet, all you see is the gigantic king pounding on you from various directions, and you realize how scary such a battle would actually be.
I thought the special effects were pretty damn good too. In the beginning fight scene, I detected something really obviously fake with Brad Pitt, but it all happened so fast, I couldn't really tell exactly what it was they did. The ships looked pretty fake too. But the thousands of soldiers looked really good, I thought. If you really focused on one soldier, and watched his legs, you'd probably think they looked mechanical. But overall, if you don't do that, it looked quite convincing. Instead of looking like an entirely computer-generated world, like in the last battle of The Return of the King, say, where the lighting doesn't look quite right, and everything is too dark and cartoony, this all looked like it actually was in bright daylight.
Other thoughts:
One thing I couldn't help but think about during the movie was the major logistical problems that wars pose. Even today, the hardest problems in our military are largely logistical. We're having a hard enough time moving 200,000 of our troops and their equipment, and the replacement parts for that equipment, across the sea and keeping them there (and fed and oiled) for a year. Just imagine trying to move 50,000 soldiers across the sea and keeping them fed for 10 years on a hostile shore, as in The Iliad, in 1200 BC no less. This in itself makes the whole thing very unlikely. Where did they get food? How did they repair their weapons and armor? Clothes? It's not like they were on friendly territory, or could move very far inland.
The whole thing reminded me of the Siege of Antioch during the First Crusade, in 1097. Antioch was a walled city, like Troy, and so the crusaders basically pulled up to the gates and hung out there (of course, in those days, the state of the art in long-distance military planning and logistics was basically having everyone agree on where they were going, and then finding their own way there, stealing whatever they could on the way). Both sides proceeded to starve in this stalemate. As the situation got more and more desperate, the crusaders learned that a gigantic Turkish army was marching to Antioch's relief, and only days away. The crusaders managed to get themselves inside the city (this was either, depending on whose account you believe, due to trickery, bribery, or divine intervention). Of course, those inside the city had been starving just as much, so now the crusaders found themselves besieged inside the city they just conquered (and ravaged and cannibalized), and there was still no food. Obviously, this didn't last anywhere near 10 years, but this real-life version of The Iliad is still fascinating.
Another thing I thought was cool to see was the soldiers of Achilles creating a wall of their shields, as the Spartan army was known to do. Some informal searching suggests this may have been an anachronism, as the Greek army doesn't seem to have been very organized at the time Troy is supposed to take place.
However, while searching for that, I found an interesting page. After the movie, Mary pointed out that it would have been really hard to coordinate a battle in all that chaos (another logistics problem, of course). It's a great question: how did they coordinate and distribute orders in the middle of a battle in those times? Apparently the commander of each phalanx would always stand in the right-most position of the front line, so each soldier would know where to look to get orders. I don't think they did that in the movie, though. Instead, the commanders appear to be on horseback, which would be another good way to know where your commander is, if it weren't for the fact that I think it's also an anachronism.
Wednesday, May 12, 2004
HP's White Whale
I read this article in the Wall Street Journal today (sorry, no external link) about HP's PC business. The basic story was that since HP bought Compaq and merged the two companies' PC divisions, HP has been trying to compete head-to-head with Dell. Just as they completed their merger, Dell's marketshare overtook their combined marketshare for the first time. They've been neck and neck since then, edging each other out in different quarters.
Apparently HP's take on this is that the whole game is "strategic," and so they have lowered their prices to the point where they just barely break even, in an effort to put price pressure on Dell and thus expand their marketshare. Then, with increased marketshare, they could make money off of selling related devices, like printers and scanners. I don't have an MBA or anything, but I'm really not sold on this.
First off, Dell easily has the lowest costs of any PC manufacturer. This is why they make better margins than any other PC maker. Dell keeps costs low by not building a machine until a customer orders it. This gives them incredible flexibility in pricing and inventory management. And that's not even saying anything about their superior operations and logistics. In contrast, HP makes less than a fifth of their computers made-to-order. Instead, they rely on a vast (slow!) retailer network, which sells pre-built boxes in standard configurations.
Now, if you want to lead a race to the bottom, you should first make sure that you can go lower than the other guy. But also, what happens when they get there? Do they actually think they're gonna kill Dell by doing this, and then be able to firm prices back up? Unlikely.
More likely, this was largely a symptom of their inability to change, and their desire to show some success out of their big merger. "Well, we can't squeeze profits out of this, but if we can start beating Dell in marketshare, then at least that will look good," they might have thought. There's even some valid logic to it: They're a very diversified company, and an unprofitable PC business isn't going to kill them, whereas Dell doesn't have any other money makers. If they can't make it profitable, might as well hurt Dell and try to give a boost to other divisions.
Still, that sort of strategy isn't viable over the long term. By having essentially zero margins, they're very sensitive to changes in costs and demand. Apparently they've already been bitten severely by this. What do you do if you're operating at break-even for years, and market share starts to decline for all your troubles? You'd have to have some pretty amazing margins on the related devices you expect your PC buyers to also buy, in order to compensate for this. And I imagine only a fraction of their customers buy related devices.
It's a terrible shame, because I think HP is otherwise well-positioned to do something else. Markets always have room for multiple successful firms, each one serving different segments of that market. By blindly trying to out-Dell Dell, they have failed to fully exploit the strength they get from their diversity. In my opinion, a better long-term strategy would have been to try to be a PC version of Apple.
Apple's strategy is basically "digital lifestyle," where the Mac is a hub that connects and coordinates other components that you use in your daily life. Apple does this by making the Mac, and making it very easy to have it talk with digital cameras, video cameras, digital music players, DVD burners, and so on. Yet Apple seems unable to ever get anyone to switch from PCs to their platform. People just want PCs, and Apple is unwilling to act as if this is reality. And Apple doesn't even make very many of those peripherals, though they seem to be moving in that direction! (iPod is the obvious exception)
This is where an opportunity exists for HP. HP actually makes all those other devices, yet the value of an all-HP set of those devices doesn't seem any better than an HP computer with peripherals made by third parties. And HP's software offerings don't seem to offer any notable improvement compared to any other PC using HP peripherals. For years people have periodically gotten excited by the possibility of Apple producing versions of their hardware and operating system for PCs. HP could actually create a similar package of default software, services, and hardware designed to integrate better with other HP products.
For example, an easy, quick start would be to license Picasa, photo software made by some friends of mine.
Update: Interesting responses.
First, the idea does not require being as cool as Apple. That's probably impossible, and as Shawn pointed out, HP doesn't seem to swing that way. My point was that given the choice between a price war and adding value in a way that no other PC manufacturer is positioned to do, the latter would be a better option, even if you couldn't get the Apple buzz. It would require changes more pervasive than just changing a price from "level where we make money" to "level where we break even."
Sunday, May 09, 2004
Notes on Man on Fire
How far does style get you? Well, it turns out it can get you pretty far, at least with me.
The movie started, and the opening credits sequence made it unmistakably clear that this movie was going to be stylish. "Seven? That's nothing. Look how hardcore this is, we're shaking the camera harder than you've ever seen, we're cutting way more than 24 times per second, and we've invented a new type of film that's 40% grainier than the grainiest film before it," it screamed.
"I don't know man," I said. "This might actually be too stylish for me." The movie is not actually that hardcore the whole time, which is a relief. It's got some more style tricks in store to prop itself up, though. One thing I thought was pretty innovative was the way the movie handles the subtitles. As an American film set in Mexico, it faces the choice between either having everyone mysteriously know how to speak English with a Spanish accent, or putting large chunks of the movie in subtitles. The movie (rightly) goes the latter route, but instead of apologetically stowing the subtitles down at the bottom, the movie places them all over, making them part of the composition of the shots. The subtitles shake, or get large when someone is yelling, and disappear in various ways, fading out, getting covered up by objects in the scene as the camera moves, or "dripping" off the screen. It's so effective that I think it's likely to be copied in the future (I've never seen it before, but if Man on Fire is ripping it off from somewhere else, please let me know).
I'm reminded of the language transition in The Hunt for Red October, which starts out with the Russians speaking Russian. Then John McTiernan used an interesting transition to switch the Russian characters to speaking English, where in the middle of a conversation, the camera zooms in on a character's mouth while he talks in Russian, and after a pause, the next thing he says is in perfect English. This wasn't the most effective move, but even as a non-Russian speaker, it's painfully obvious that Sean Connery speaks in terribly accented Russian, so it was probably necessary. The only other time I've seen anything similar was actually in another John McTiernan film, The 13th Warrior. Antonio Banderas's character learns Viking in a montage sequence where he listens to all the Vikings talk at the campfire every night during a journey. The mishmash of their voices eventually starts to have English words as he maps out the language, until they're finally speaking English. I remember thinking this little gimmick was too good for such a crappy, throw-away movie.
Anyways, the other notable thing is how violent this movie is. In the opening credit sequence, you already get treated to a severed ear. You pretty much go into the movie knowing that it is basically a platform from which to launch a lot of vengeful action sequences. The surprisingly intelligent, articulate little girl is kidnapped, and this guy is gonna get revenge. You're probably with that, but the extent to which I actually cared about the characters surprised me. The relationship between Creasy (Denzel Washington's character) and the little girl works. They take their time developing that in the beginning, longer than you would expect for a movie like this, and it helps make what follows work better.
[spoilers follow]
Unfortunately, by the end, the movie has spent all the capital it had built up in that first part. The second half is the part you came to see, and it's pretty suspenseful and all, if you can take the gross violence. But it is basically just a repetition of "who do you work for?", followed by some gruesome torture, followed by a vague clue for Creasy to follow up on and start the sequence again. Like I said, the stylishness, plus the investment you have in the relationship between Creasy and the girl, makes this successfully carry the movie for a while. But by the time Creasy has shoved a bomb up a guy's ass, the movie starts to feel a little wacky. I'm serious. The audience laughed.
I was glad the ending didn't go on endlessly celebrating the girl's return, with a little montage with the mother, or something. No one is in the mood for that by then, we're just glad she's back. Similarly, I was glad they didn't show the death of Creasy and drag it out. But it left me a bit dissatisfied. There's nothing wrong with the hero dying in a movie, but the way he submits to death at the hands of the mastermind doesn't feel heroic at all. I kept expecting him to unexpectedly deal out an asskicking, or get shot from the distance (the bad guys weren't gonna let him get that close, were they?) He died on my birthday.
Before the ending credits, a card pops up thanking Mexico City, "a very special place." The audience laughed at this. I don't know if it was the more than two and a half hours of gritty footage depicting Mexico City as a wretched hive of scum and villainy, and quite scenic, with its sooty, crumbling buildings, and humid, smoggy air. Oh, and the locals! Toothless grins, senior citizen prostitutes, kidnappers, corrupt cops, organized crime, lawyers, and guys from New Jersey. But ignore all that. Mexico City rocks.
Saturday, May 08, 2004
This Week's Mind Poison
So, here's a book that I just read. I found it while exploring the website of Ray Fair, a very distinguished economist at Yale. It's called Predicting Presidential Elections and Other Things, and as you can see, the full text is conveniently available online (though on his site, it's slightly hidden behind a hard-to-see link). I started reading it while at work the other day, since I was interested in the specifics of his presidential election prediction model, and before I knew it, I'd read almost half the book (the whole thing only took a few hours). It's quite readable, and only requires enough math to understand the slope of a line, and even that only comes up in chapter two. If you want to get a very good intuitive understanding of how econometrics works, his explanation is the best I've ever seen.
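To give a flavor of what "enough math to understand the slope of a line" means here: at its core, a Fair-style election equation is just a line fit to past elections by least squares, predicting the incumbent party's vote share from economic conditions. Here's a minimal sketch; the data points and the 3% growth scenario are made up for illustration, and are not Fair's actual data or coefficients.

```python
import numpy as np

# A toy, Fair-style model: regress the incumbent party's share of the
# two-party vote on election-year GDP growth. All numbers here are
# invented for illustration; they are NOT Fair's actual data.
growth = np.array([-1.0, 0.5, 2.0, 3.5, 5.0])    # % GDP growth in election year
vote = np.array([46.0, 48.0, 50.5, 53.0, 55.5])  # % incumbent two-party vote

# Fit vote = a + b * growth by ordinary least squares.
b, a = np.polyfit(growth, vote, 1)  # polyfit returns [slope, intercept]

# Predict the vote share under an assumed 3% growth scenario.
prediction = a + b * 3.0
print(f"slope={b:.2f}, intercept={a:.2f}, prediction={prediction:.1f}")
```

Fair's real model adds more terms (inflation, incumbency, a count of "good news" quarters), but the mechanics are the same: estimate the coefficients from past elections, then plug in a forecast of the economy.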
By the way, his model was predicting pretty strong results for Bush two years ago, even with anemic GDP growth assumptions. The latest versions are close to predicting a landslide.
Friday, May 07, 2004
These fellows seemed to have a problem with Israel (Taken April 18, 2004 right across from the White House)
The only appropriate first post
(21:40:46) me: I gotta get me one of them blogs. I'm still working on that.
(21:40:55) lorna: oh yeah? get one on blogger
(21:40:58) lorna: they're free
(21:41:02) me: Yeah, it's just such a pain to set up.
(21:41:05) lorna: no it's not!
(21:41:13) lorna: what name you want?
(21:41:21) me: I dunno. It's really hard to think of one.
(21:41:54) lorna: what's your email address?
(21:41:55) me: What if Blogger isn't the best one? I don't have a server either.
(21:42:10) lorna: blogger will work well for you
(21:42:16) lorna: i can explain more.... later
(21:42:21) lorna: but first tell me your email
(21:42:29) me: Yeah, I understand. Let's go with [this]
(21:44:13) lorna: http://mindpoison.blogspot.com/
(21:44:20) lorna: mind/poison
(21:44:23) lorna: www.blogger.com
(21:44:24) lorna: now go!
(21:44:32) me: Huh?
(21:44:37) lorna: you have a blog
(21:44:39) lorna: i just made you one
(21:44:44) lorna: you can change any piece of it
(21:44:45) me: Whoa. It all happened so fast.
(21:44:54) lorna: haha yeah blogs are like that
(21:45:19) me: Huh.
(21:45:20) me: Whoa.