Tubthumping

A place to rant.

Tuesday, January 06, 2009

Movie taglines

I'm always amused when advertisements present a tagline that reads well in one way, but doesn't make sense (or means something bad) in another way. I'm especially surprised when the unfavorable reading is the more likely one, because I find it hard to imagine a group of highly paid professional advertisement writers not thinking about the other readings of their lines (or just deciding that they don't care).

I just saw a poster for Valkyrie, whose tagline is: "Many saw evil. They dared to stop it." I assume that the writers are using the common construction that typically goes "Many people talk about [some problem], but these people did something about it." The "but" and the emphasis on "these people" (or just "they") are critical because without them it's not obvious that the subject of the second clause is not the same as that of the first. Without this emphasis, one could read that the people who talked about the problem are also doing something about it, but the speaker is actually trying to contrast the two groups' responses. Moreover, I'd argue that this is the more common (and grammatically correct) reading.

Unfortunately, the movie tagline uses two sentences instead of "but" and has no italics. So the emphasis is completely lost. When I read the poster, I initially thought the group who see evil also dared to stop it (which, while noble, is not as dramatic as the contrast I think the writers were intending to convey).

Yes, I know I'm being pedantic, but that's what a blog is for.

Thursday, March 06, 2008

Porting a Verizon number to Cingular + iPhone (after the fact)

I finally took the plunge: I bought an iPhone. I mainly just wanted a phone with a reasonable user interface, where "Calculator" wasn't under the "Planner" section and I could easily sync my contacts with my laptop. But if I'm going through the pain and expense of a new phone, I thought I'd get a sweet one that would let me check NextMuni and such, too.

I'd been debating it for a long time: is it really worth $500? Is Cingular's reception as good as Verizon's? (It turns out that no, it's not, but you should check each company's coverage map for your area.) But what really tipped the scales was a Cingular agent's suggestion that I sign up for a new account without porting my old number. They'd give me a temporary number, and then if I wanted to stick with Cingular, I could port my old number and have it replace the temporary one. I could return the iPhone and cancel the contract within two weeks if it really sucked, though I'd have to pay a 10% restocking fee on the phone. Sounds perfect, right?

This might be a good option if you're worried about switching carriers (as I was), but despite confirming with the agent several times that this plan was actually going to work, it turned out to be a real pain. I'm hoping this blog entry will help people avoid some of the annoyance I went through.

The first problem was that my old number was in the Boston market, but my Cingular account was opened in the Northern California market, and you can't port numbers between markets. I found this out when I called the porting number that the agent had given to me. The agent told me I had to relocate my account first, and then I could port it. These are two separate transactions, and as I later discovered, the people involved with one don't necessarily know anything about the other.

So I went to the store to relocate my account, because I generally find people more helpful in person than on the phone. The agent spent ten minutes on his computer and decided he couldn't help me, and that I had to call customer service to relocate the account.

I called customer service the next day. I explained the situation to an agent, but he didn't seem to get it. I told him I needed him to relocate my account so that I could port my number, and he kept insisting that he could not port my number for me - I had to call Porting for that. Frustrated, I told him that I just needed him to relocate my account so that I could call the Porting people myself later to have the number ported. 25 minutes and three times on hold later, he transferred me to the relocation department. (If I'd known there was one, I'd have just called them directly.)

The agent from the relocation department was very helpful. She explained that while she could relocate my account for me, I'd need a new SIM card, which would take 3-5 days to be mailed to me. Then I'd have to call them again to activate the card once I got it and installed it. But she also told me that corporate AT&T stores should be able to relocate my account (as opposed to authorized dealers, who may or may not be able to), and that the store I went to was in fact a corporate store. If you do the relocation in-store, they basically just call the customer service people to do it for you, but they give you the SIM card right then and there and you don't have to call them again later.

Armed with this new information, I went to the same store, got somebody who actually knew something about what they were doing, and did this process. This agent told me that with the new SIM card, the iPhone would not work until I resync'd it. Okay, no big deal. But then he said that he thought I shouldn't resync it until I had ported the new number, because otherwise I might need yet another SIM card for the newly ported number. The customer service agent he was talking to confirmed that I shouldn't sync until after I'd done the port.

So I left with a broken phone, came home, and called up the Porting department to port my number. All seemed fine, and this agent told me that I could reconnect my phone in 5 minutes. Great! I thought. I waited 15 minutes to be sure, but when I connected my iPhone to my Mac, iTunes claimed that my AT&T account had been deactivated. An hour later I got the same result. Great.

I called the number given in the error message (with my old phone at first, which had already been turned off!), and the agent told me to wait up to 24 hours, but she wasn't sure about that and couldn't verify it with the Porting people because they were already closed. She told me I should try again the next day or call back. I tried again in the morning with no luck, so I called again. The agent was very confused - she couldn't figure out how I could possibly have a new SIM card in the phone and yet iTunes wasn't taking me back to the initial activation screen. Since she seemed so confident that it should have done this, I got the idea to turn my iPhone off and on (which I hadn't done since I bought the thing - you have to hold down the sleep button on the top until it asks you to confirm turning it off, and again to turn it on) and voila! Back at the initial activation screen. I followed the usual prompts (being sure to enter my old zip code, not my current billing zip code, where it asks for the billing zip code), and I was set. After a few minutes, I was able to both send and receive calls from my new (old) number!

The summary is this:
  1. It is possible to buy an iPhone and sign up with Cingular, and then later port your old number, even from a different market. It's not easy, though. In my case, I spoke to ten different agents over a total of a few hours; admittedly, two of them did absolutely nothing to help me, and only another two seemed to actually understand the whole process involved.
  2. If you need to port your number from another carrier, you must port it within the same market. So if you're moving, for example, you should sign up for your new account before you move. Otherwise, you'll have to relocate the new account to the old area. This requires that you have an address local to that area, though you can give them a different billing address, in which case it's unclear what, if anything, they use the local address for.
  3. If you need to relocate your account, you should be able to do it at any corporate AT&T store. You can also do it on the phone, but if you'll need a new SIM card, they'll have to mail it to you and you have to call them again to activate it.
  4. It's unclear to me whether you need a new SIM card whenever your number changes, or just when your market changes, or just when you change markets across the country.
  5. If you get a new SIM card, make sure to turn off the iPhone and turn it on again before reconnecting it to iTunes, or iTunes won't recognize that it's a new SIM card and should reactivate the phone.
  6. If you have an iPhone and you relocate and port a number, you shouldn't sync your iPhone in between.
I hope this conveys how unnecessarily complex the process was, and that it helps someone else trying to do the same thing get through it more quickly and less painfully.


Wednesday, October 17, 2007

Javascript, without the misery

I was trying to convince a friend of mine to implement a new widget in Javascript. Of course, he was reluctant. Who can blame him? Who likes to deal with "silent errors" and "browser incompatibilities"?

Indeed, the first ten minutes of writing Javascript will make you want to punch yourself in the face to stop the pain. After all, Javascript errors often pass silently by default (or else they show useless modal dialogs). And some operations really are very different across browsers. Moreover, Javascript's dynamic typing allows you to do all kinds of illegal things without finding out until the program blows up at run-time, at which point it's hard to observe what happened.

Two things can make Javascript development more comfortable: good programming practices and good tools.

Programming tips
  • Make errors a Big Deal. You don't have to use a modal dialog, but maybe write exceptions with stack traces to the bottom of your page when they come up (there's a sketch of this right after this list). At the very least, use the Firefox error console to observe the problems that occur.
  • Seek cross-browser solutions. There aren't that many operations that are really different across browsers. Adding event handlers is a classic one that you just can't avoid: IE uses attachEvent(), while Firefox uses addEventListener().
  • When you do need to write code that depends on the browser, try to use semantic tests to distinguish among browsers. That is, instead of checking for IE, check for the presence of attachEvent() (see the second sketch after this list). Browser detection is tough to do completely correctly, and needing to know which browsers use which methods makes code more brittle and likely to break when new browsers or versions are released.
  • Isolate browser-specific code into utility functions. This one should be obvious. Browser-detection code is probably among the most brittle code in any web application. At least if it's isolated to a small section of code, it can be maintained, tested, and updated more easily when things change.
  • Don't end lists with commas. My advice has suddenly become much more concrete. But this is a problem I've run into several times and spent an incredible amount of time debugging. IE versions before 7 choke when given Javascript lists that end with commas. For example:

    var array = [ 1, 3, 5, ]; /* note comma after 5 */

    This construction is really useful for lists of more complex items when the list's contents might change and each item takes up nearly a whole line. But it will bring IE6 to its knees in a completely silent, unhelpful way, because it can't even parse your file. If you're getting a bizarre error before any of your code seems to be run, check for this issue.

    For the curious, this is technically correct behavior: the ECMA standard IE is implementing doesn't allow that comma, but every other language does, and other browsers deal with it alright.
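
Here's a rough sketch of the first tip. It assumes a placeholder element with a (hypothetical) id of "errorlog" somewhere in your page; window.onerror and try/catch are standard, but the element id and function names are just illustrative.

    // Append an error message to the page instead of failing silently.
    function reportError(message) {
        var log = document.getElementById("errorlog");
        if (!log) {
            return; // no log area on this page
        }
        var entry = document.createElement("pre");
        entry.appendChild(document.createTextNode(message));
        log.appendChild(entry);
    }

    // Catch uncaught exceptions globally.
    window.onerror = function (message, url, line) {
        reportError(message + " (" + url + ":" + line + ")");
        return true; // suppress the browser's default handling
    };

    // Wrap risky calls so you can report a stack trace where the browser
    // provides one (Firefox exposes it as e.stack).
    function tryAndReport(fn) {
        try {
            fn();
        } catch (e) {
            reportError(e.stack ? e.stack : String(e));
        }
    }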
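
And here's a sketch of the cross-browser and isolation tips combined: a single utility does a semantic test for addEventListener()/attachEvent(), so the rest of your code never has to care which browser it's running in. The names addEvent and removeEvent are my own, not part of any library.

    // Attach an event handler in whichever way the browser supports.
    function addEvent(element, type, handler) {
        if (element.addEventListener) {         // W3C model (Firefox, Safari, Opera)
            element.addEventListener(type, handler, false);
        } else if (element.attachEvent) {       // IE model
            element.attachEvent("on" + type, handler);
        } else {                                // last resort: DOM level 0
            element["on" + type] = handler;
        }
    }

    // Detach it the same way.
    function removeEvent(element, type, handler) {
        if (element.removeEventListener) {
            element.removeEventListener(type, handler, false);
        } else if (element.detachEvent) {
            element.detachEvent("on" + type, handler);
        } else {
            element["on" + type] = null;
        }
    }

    // Usage: all of the browser-specific logic stays in one place.
    addEvent(window, "load", function () {
        // set up the rest of the page here
    });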
Tools and references
  • Look up methods and properties on w3schools.com. Their HTML, DOM, and CSS references are invaluable.
  • Check out the Venkman debugger. It's slow and somewhat buggy, but it gives you more observability into Javascript than you'd have thought possible if you haven't sought out good tools before. You can set breakpoints and watches, evaluate arbitrary expressions, view call stacks, etc.
  • Use the Firefox web developer toolbar. Among the great features of this add-on are in-line CSS editing (without reloading the page), outlining arbitrary DOM elements, and displaying other relationships between DOM elements. Great for debugging everything from cookies to DIV positioning.
  • Here's a useful list of Javascript gotchas.
There's one thing left to know about: layout problems are often undebuggable. This isn't a tip - it's an admission that web development with Javascript isn't all rosy. Before you start messing with padding and margins, learn about the Box model. Then learn about the peculiar ways in which every browser manages to implement it incorrectly.

Wednesday, January 03, 2007

Virtualization

Anyone paying attention to the usual sources knows that virtualization has become the biggest thing since virtual memory, before which multiple programs on your computer used to stomp all over each other. One thing I don't understand is: why virtualization?

Here are some of the benefits commonly cited for virtualization:
  • Server consolidation (moving applications that used to run on several physical machines onto one box). This is probably the biggest single reason, but it assumes that you can't run more than one serious application on an OS instance. While this may be true for some (even many) platform/app combinations, this is not an inherent problem for an OS. It seems like the simpler, more performant and elegant solution is to build better OS abstractions and systems for managing installed software so that this isn't a problem.
  • Sandboxing applications (for isolating faults, and for security). How is it not the job of the OS to keep applications from stomping on each others' toes, intentionally or otherwise?
  • Resource management (limits and guarantees). This is supposed to be one of the OS's main jobs!
There are many others, but I think you get my point: all of them should be done at the OS level, rather than adding a new layer to the stack (particularly one which is opaque to the real OS, which used to have full control over the system).

But why do these things in the OS? Because the OS is already a mature environment, and these features seem analogous to most other parts of a monolithic kernel. We already have tools like ps(1) to list processes, debuggers like mdb to debug the kernel, etc. People have had to develop a separate infrastructure for managing and debugging VMs and VMMs, and it hasn't been easy. Moreover, in most use cases today, OSes aren't even aware that they're not running on the "bare metal" - if they were (which is the case with paravirtualization), perhaps they could make much better performance decisions. And if there weren't such a separation, there could be some real opportunity to innovate in what is now the hypervisor/OS interface.

At least there's reasonable support for the idea of virtualization technologies in the OS. Solaris 10 introduced Zones (the article is skeptical, but technically good), which sort of behave like very lightweight virtualization containers. And Linux is adding something called the Kernel-based virtual machine, which is a more complete solution inside the system.

Tuesday, December 26, 2006

Why does Windows rule the world?

The Economist has an interesting article on why Windows rules the world. Much of the article discusses whether it's worth upgrading to Vista and the state of code bloat in Vista, MacOS, and GNU/Linux, etc. But the ultimate conclusion is that Windows favors simplicity, in design and functionality, sometimes at the expense of performance and customizability. And that consumers care most about something "easy to use and maintain."

Not surprisingly, I disagree with their assessment. First, their comparison of "lines of code" in Linux and Windows is fairly absurd - it's not clear that the 50m figure for Windows and the 213m for Debian are even comparing corresponding parts of the system. Even if they were, who says LOC = simplicity? Sometimes, more lines of code (in the form of more abstract interfaces) actually simplify the whole system, making it less bug prone and easier to update (important effects for the end user).

The author's assessment that Windows favors simplicity in use and maintenance is probably true. But I question whether it really does this better than, say, Ubuntu Linux. I wouldn't make this claim about many other OS distributions, Linux or otherwise, but I think Ubuntu's a great example.

As the article points out, even novices can install Ubuntu in minutes, but it claims that installing software is "certainly not for novices." If I want to install Firefox, for example (assuming it weren't already installed by default), I open up the Package Manager (from the equivalent of the Start Menu), search for firefox, click a checkbox, and hit "Apply changes." Okay, some of the UI choices aren't the greatest, but this is at least as easy as finding the Firefox web site, downloading the installer, and running it.

And if Firefox requires some other library (like, say, OpenGL), Ubuntu will tell you that and install it for you, too! On Windows, you sometimes have to find the website of the maker of the OpenGL implementation you need, download it from there, and install it separately. Admittedly, most real software on Windows includes all you need, but I've definitely had to do this in the past, and it's pretty painful.

But what if you need to uninstall software? What if it conflicts with something else? Or you just need back the several gigabytes of disk space that Acrobat loves to hog? Or you're tired of the popups you get from the system tray when you log in? You can usually manually uninstall software, but anyone who's done this knows that it rarely removes everything completely. Files, libraries, and registry keys are left around, contributing to the overall bloat of the system (which is not the same as the code bloat described above). Some savvy Windows users I know reformat their hard drives roughly once a year to overcome the issues associated with this bloat.

With Ubuntu, fire up the package manager again, uncheck the box, and "Apply changes." Not too bad. It removes everything you haven't changed, and the stuff that might stick around (like saved documents) does not impact performance.

What if you want to upgrade software? With Windows, you get a pop up every once in a while indicating that there's a Windows update available. Ubuntu does the same thing, except it gives you the opportunity to upgrade your whole system - Linux itself, Firefox, Thunderbird, OpenOffice, etc. On Windows, you'd have to obtain a copy of the installer for the newer version of each of these software packages and run that, often after running the uninstaller.

But what about overall ease of use? It's hard for me to address this one. I'm so used to various Unix variants that I find them much more intuitive (even without the shell) than the Windows interface (which I used to be pretty familiar with). But I recognize that that's a result of lots of experience with one system and less with another.

What I can give is a great case in point from this past summer. One of my housemates bought a printer. At one point, I wanted to use it, so I connected the USB cable to my Ubuntu laptop. Within seconds, a box popped up asking if I wanted to use this new printer (having recognized the make and model, etc.). I said "yes," and within a minute I was printing a document on this printer, which my computer had never seen before. Another of my housemates tried this on Windows and gave up after 15 minutes, unable to get it to work.

This was not intended to be a rant against Windows. Obviously, Windows does dominate the market, and there must be some reason. I haven't figured it out yet, though I suspect it's the inertia of computer users.

But it's ludicrous to say that Windows makes it easier to install and maintain software, and questionable whether it's really easier to use for people who have never touched a computer before. At best, it's easy to install software on both. But people don't just install software. They upgrade it, and they remove it, too. These are places where Ubuntu's model is simply easier.

Tuesday, November 21, 2006

Simple timer for C programs

So, when coming home for Thanksgiving break, what else would I decide to do but write a useful utility I've been wanting to use for a while? And then host it on Google:
http://code.google.com/p/ctimer/

Programmers might find this useful for finding out how long certain parts of their programs take (in real time, system time, and user time).


Friday, October 27, 2006

Real news: Solaris "beats" Linux

In my last post, I mentioned an article comparing Solaris and Windows performance. Many people who commented on that article (myself included) really wanted to see a comparison between Solaris and GNU/Linux. Well, that's just what the author did.

The overall result was that Solaris was faster than Linux, but it's a much more even matchup than it was with Windows. In many cases, Linux was non-negligibly faster. But we see that Solaris performs quite well, particularly in the multi-threaded benchmarks. Sweet.

My only question is: were they using ZFS?


Tuesday, October 24, 2006

So, Solaris outperforms Windows?

Is anybody really that surprised?

Then again, it could be a little unfair to use Sun hardware to benchmark Solaris and Windows. It's not that it isn't legit, but Sun has the advantage of being able to coordinate its hardware and software teams to optimize one for the other, when appropriate.


Friday, August 25, 2006

The SiteKey Fallacy

While working this summer a long way from my home bank, I opened an account at Bank of America so I could deposit my paychecks. Before I could do my banking online, though, I had to learn all about their authentication system called SiteKey.

BoA is quite proud of this system, which is designed to avoid phishing attacks, in which some malicious guy (affectionately dubbed Mallory) creates a web site that looks just like Bank of America's, convinces somebody to click a link to log in to their BoA account, and steals their password when they try to log in. With SiteKey, in theory, the user knows they're logging into the real BoA web site, not some bogus one.

For those too lazy to click the SiteKey link above, it works like this: when you initially sign up for online banking, you choose a unique picture from hundreds of options and you provide a unique phrase. This photo and passphrase together make up your SiteKey. Later, when you want to log in, you first enter your Online ID without the password. Then the BoA website shows you your SiteKey. Only Bank of America knows what your SiteKey is, since you created it with their site securely when you signed up, so (theoretically) seeing your SiteKey means you must be at BoA's web site. Now you can safely enter your password and log in.

This is terrible for two reasons:
1) There already exists a perfectly good system in place to accomplish exactly the goal of authenticating web sites to end users.
2) SiteKey doesn't work.

I'll address these in order.

First, there already exists a system by which end users can verify that the web site they're viewing is authentic. Basically, when you visit a secure web site, the site provides some sort of proof that it is owned by the organization which claims to own it. When viewing a secure site, most modern browsers show a padlock icon somewhere on the page or in the URL bar, and you can click that icon for more information. Browsers also alert users if they're dealing with a website which can't prove it is who it claims to be. (The details of all this are beyond the scope of this blog entry, but for the curious, look up public key encryption, SSL, and certificates.)

Thus, as long as the user checks for the presence of this padlock icon, as well as the URL which he's viewing (since that's the identity of the web site), he can be pretty sure that any information submitted will be securely transmitted to the person he's expecting to send it to.

BoA does use this system for their web site, but SiteKey is implemented on top of that as a way to make the user feel more secure. While I appreciate the goals BoA seeks by doing this, it's fundamentally a bad idea. Standards exist for the benefit of users. If every web site that wanted to make users feel secure implemented their own solution for the problem, then we'd have countless different solutions, confusing users even more and hurting the overall web experience.

A much better approach is to develop new standards or improve existing standards to solve this problem. That way, users aren't confronted with various different attempts to solve the same problem. This has the added benefit of community involvement, which results in people finding fatal flaws in otherwise great ideas before they're implemented (see below).

In most cases, creating a community standard (or changing an existing one) is difficult. In this case, though, the standard already exists and works fine! The problem is that many people ignore security warnings. A simpler solution, then, is to publicize the existing system and make it easier to use (e.g., by providing better browser support, possibly in the form of extensions).

A much bigger problem with SiteKey is that it simply doesn't work. It makes phishing attacks marginally more difficult, but nevertheless possible. It's fairly simple to break.

Normal phishing attacks look like this:
1) Lure user to evil web site looking just like a real web site
2) Get the user to enter their username/password
3) Save the username/password that the user entered, and forward the user on to the real site (or tell them that the login failed, or whatever - you really don't care what happens now, since you have their username and password!)

A SiteKey phishing attack could look like this:
1) Lure the user to an evil web site looking just like the real web site
2) Get the user to enter their username
3) From the evil web site, enter the username into the real Bank of America web site.
4) BoA doesn't recognize the evil web site's computer, so it asks a personal question to verify the identity.
5) Display this personal question to the user, and forward their response to the server.
6) Now that the evil site has answered the personal question properly, BoA shows the SiteKey and asks for a password. The evil site relays that to the user and asks for their password.
7) The user enters their password. The user is toasted once again.

Note that this is only marginally more difficult. Anyone sufficiently motivated who could accomplish the first sort of attack can easily accomplish the latter. So seeing your personalized photo of a cute little puppy might make you feel secure, but don't get too comfortable. You still need to take all the precautions against phishing attacks you always had to take (as I mentioned above: check the URL, and your browser's padlock icon).

It's worth mentioning that this vulnerability has been pointed out before in SiteKey and similar systems, and alternatives have been proposed. Which makes it all the more saddening that another institution I actually care about seems to be switching to a SiteKey-like system.

Sunday, August 20, 2006

Installing Nexenta Alpha 5 on Toshiba A75 laptop

Still excited from my recently completed internship with Sun's Solaris Kernel Group, and desperately needing to reinstall something on my Toshiba A75 laptop, I decided to go ahead and try installing Nexenta Alpha 5.

Why Nexenta?

I've been a Debian (or Debian-based-distro) user for a while. First I used stock Debian, but Kubuntu quickly became my distro-of-choice after I installed it over a year ago. To me, it had everything I loved about Debian (a familiar /etc layout, a great packaging system, etc), but more of the useful (but non-free) packages and better forums for support. [K]ubuntu took something I loved and made it easier for me to use.

Nexenta is a distribution of OpenSolaris based on Ubuntu. The idea is to make something much like Ubuntu, but with the fast, rock-solid Solaris kernel rather than Linux. Now, I believe the biggest problem with using OpenSolaris is the lack of a package manager on par with apt, not to mention all the other tools and programs I know and love. But with Nexenta, you get apt as well as over 12,000 packages. (Debian developers appear less than enthused. (I dare you to read more than 50% of that thread.))

I'm installing Nexenta on my laptop because I'm hoping this will be another case of taking something I love (Ubuntu) and making it better (with a more solid kernel, zfs, zones, dtrace, etc). I recognize that it might not be quite as good as Ubuntu in terms of the level of support for everything I use, but if it's most of the way there, the extra Solaris features might be worth it to me.

Installation

Goals

I wanted to run ZFS for most of my storage, and I definitely wanted the partitions (or slices, in Solaris-speak) encompassing /home and other non-system areas to be separate. That way, when I reinstall, I don't have to back that up - I can simply tell the installer to skip those partitions.

Troubles

Not surprisingly (to me), the installation did not go terribly smoothly. Because of my storage goals (above), I couldn't use the default partitioning scheme. But the Nexenta installer does not have good support for manually partitioning your disk - either you let it decide how to arrange your partitions, or you use the cryptic format(1M) utility.

Note that format(1M) isn't usually too bad if you know what you're doing, but in my case, I was quickly turned off. When I selected my disk (c0d0), it asked me nearly a dozen questions whose answers I couldn't easily have known and which I expected it to determine automatically. Apparently, this is not a common case, but I needed to know the number of heads on the disk, the number of sectors per track, the number of cylinders (data and alternate), and the number of blocks per cylinder. There were a number of other questions, but those had a "default" option.

For the answers to these questions, I booted up my existing Ubuntu system and ran "fdisk -l". (Note: I didn't want to keep score, but at this point we have Linux: 1 and Solaris: 0.)

The other major problem I had was that once I did accomplish all this, when I rebooted, I got the word "GRUB" in the corner of my screen, and nothing else. The pretty GRUB menu never came up. I think this was related to a screwy partition table I had set up. More below, but after reinstalling with a "proper" setup, it seems to be working.

Slicing (Partitioning)

The important thing to remember when setting up your slices (which I discovered from this multi-boot Solaris x86 page) is that slices 2, 8, and 9 are considered special:
  • Slice 2 should always be a partition with the "backup" tag which occupies the entire hard drive (I didn't even realize that slices could overlap).
  • Slice 8 should always be a partition with the "boot" tag
  • Slice 9 should always be a partition with the "alternate" tag and occupy 2 cylinders
I have yet to find any documentation on what the different tags mean, though.

Anyway, I expanded the example in the Nexenta Alpha 1 guide (which the Alpha 5 guide references if you need to manually partition the drive). I followed that guide closely, so I'm not going to repeat the steps here. But since my drive was already partitioned for Windows and Linux, I first had to delete all the existing partitions. In the format tool, I typed "partition" and then deleted each partition. Then I saved and exited the partition tool. Then I reentered it - it now got me to the default formatting described in that guide (a 100% Solaris disk).

Using their example as a guide, but taking into account my desire to use ZFS, I created the following table on my 100gb hard drive:

Slice #  Tag         Cylinders    Size         Comment
0        root        3-916        7gb          Root partition
1        swap        917-1047     1gb          Swap partition
2        backup      0-12157      whole drive  Reserved - don't change this
3        alternate   1048-6602    42gb         ZFS 1
4        alternate   6603-12157   42gb         ZFS 2
5        unassigned  0-0          0            (Not used)
6        unassigned  0-0          0            (Not used)
7        unassigned  0-0          0            (Not used)
8        boot        0-0          8mb          Reserved - don't change this
9        alternate   1-2          16mb         Reserved - don't change this


Note that I split my non-system storage space into two slices so that I can use zfs mirroring. I'd rather use multiple drives and raidz2, of course, but it's a laptop with just one drive. Having both halves of the mirror on the same drive might be a performance hit (I have to look into that), but it should be more reliable. If the drive fails, I'm dead, but if there's a transient hardware error that only messes up some data, I can recover. I'll be regularly backing up this machine anyway (I hope).

Summary

With the exception of the problems noted above in the "Troubles" section, the installation went smoothly. On the second try (with a correct slice table), she booted up wonderfully. Time to go play!

Sunday, July 02, 2006

Ubuntu Dapper Upgrade - Here be dragons

So, I was excited for Ubuntu's Dapper Drake, the newest release of their GNU/Linux distribution. Some programs of mine were getting stale, and I was eager to get newer but stable versions.

That said, I've had lots of problems with upgrades in the past (not necessarily with Ubuntu), so I was a little cautious. Like many disillusioned upgraders, I didn't upgrade the day the release came out. I waited a month to make sure that the silly little bugs left in were fixed. I looked online, and most people didn't seem to have many problems, so I went ahead with the upgrade.

Bad idea.

I had lots of issues with liblvm2. In summary, I had to remove it and the package that depended on it (ubuntu-standard)... which is not good, since Ubuntu uses metapackages like these to ensure that upgrades go smoothly. Thankfully, I think this one was not required for that.

After rebooting, the system came up in an old kernel (which is fine, because I had overridden this before, when I had major problems booting the kernel shipped with breezy). But it turns out that when I tried the new kernel it installed, it couldn't even finish booting.

I was able to boot into my previous kernel, though, only to find out that Ubuntu could not find my networking devices. ath0 (the Atheros card) simply didn't exist. Great. I'll take this opportunity to point out that this is one of the worst reasonable modes of failure, because without network connectivity you can't even look up how to fix the problem.

I have since found out that sound no longer works either. Fantastic.

It turns out that in both cases, the upgrade removed some file which, on bootup, loaded the kernel modules (drivers) for my network and sound cards. Once I figured this out (which took about half an hour), it was a trivial fix for the network card. I'm now fighting with alsa and my sound card.

I just wanted to warn people who thought that the upgrade would be seamless - not so.

I think Ubuntu's wonderful, and I really respect what they've done for GNU/Linux in making it usable. I think overall they've done a great job. But it's really unacceptable for an upgrade process to take out your networking and sound subsystems. I realize I may have a special case here, but I'm just an ordinary Joe who happens to know how to install drivers. If I didn't, I might be switching back to Windows, which is frankly ridiculous.

Wednesday, June 28, 2006

Interning at Sun

So I'm interning at Sun Microsystems this summer, and maintaining an active blog on blogs.sun.com. I advise you to check it out.

Monday, February 27, 2006

Breaking the 'continue' habit

The keywords continue and break are used by many programmers for "easy" flow control. But they make code harder for humans to read.

The problem with them is that they interrupt the flow of control in a strange way. Of course, that's the whole point, but while it makes lots of sense when you're writing it, it's surprisingly counter-intuitive for people reading your code.

When you're scanning code, and you encounter a 'break' or 'continue', you immediately have to stop and figure out where the code actually jumps to. This is particularly annoying with nested scopes, and they always occur in at least one nested conditional.

The next thing the reader has to do is determine the state of local variables after the jump, which is not always easy. In fact, it's hard enough to determine the program's state at the end of any non-standard loop. But when the loop allows the possibility of break'ing or return'ing early, it's even harder, because you have to scan the entire body of the loop to figure out what it changed, instead of being able to look at just the header or footer.

A much better way to write most loops is to partition the body into several conditional cases, possibly nested ones. Then, put the 'break' condition into the loop condition, where it should be. People expect terminating conditions here - that's what it's for!

Consider a contrived example in which you're iterating over n elements in a collection list. You only care about elements x such that foo(x) is positive (for some function foo). For those x, you want to do some complicated stuff (like compute an average). You want to stop if you get to a negative number. It's very tempting to write this:


int min = 0;
for (int i = 0; i < list.length; ++i) {
    if (!foo(list[i])) {
        continue;
    } else if (list[i] < 0) {
        break;
    }
    // do work to calculate the average
}


I would argue that the following is a much better way:


int min = 0;
for (int i = 0; i < list.length && list[i] >= 0; ++i) {
    if (foo(list[i])) {
        // do work to calculate the average
    }
}


Why's this better? Well, it's more concise, but that just reflects the other nice property: it's much simpler! If you're reading this code and wondering how long this loop will run, it's easy: you can just look at the loop conditions. And if you want to know what it does in the body, you only need to consider two cases (one of which is a no-op). In the first example, there are at least three important cases to worry about, and the order in which they are checked is significant. As a result, looking at any given line of code in the loop, it's hard to figure out under what circumstances it will be executed. With the second example, it's extremely clear.

Of course, there are exceptions to all rules of thumb. Return'ing early is usually much less controversial (to me), and a really good idea if, for example, you're doing a linear search for something that will only be true for one element. But you can always rewrite the code to be as efficient without 'break' or 'continue', and it will very often be clearer.

Tuesday, November 15, 2005

In defense of proprietary software

Last week, Free Software Foundation President Richard Stallman came to visit Brown University to give a talk about (you guessed it) free software. In it he outlined four freedoms with respect to software which he believes to be central for a just society. I won't detail them here (you can find them described by the GNU project), but I'll give a brief description of them:

Freedom 0: freedom to control one's own computer. One should be able to run any program which they legally obtained, for any purpose, whenever they want, however many times they want, etc.
Freedom 1: freedom to help oneself. One should be able to examine the source code to understand and modify it as necessary to make a program suit his needs.
Freedom 2: freedom to help one's friends. One should be able to distribute a program to friends who could also be served by it.
Freedom 3: freedom to help one's community. One should be able to distribute the modified program to help others.

Stallman claims that these freedoms are essential. Any software obeying them is called free; anything else is called proprietary. He argues that creating, distributing, and even using proprietary software is simply unethical.

I agree that these freedoms are Good Things. I think the world would be a better place if they were always provided. However, there are a lot of practical concerns that get in the way of things, and I'm not convinced that this is an ethical issue. Stallman argues that companies like Microsoft "oppress" their users by not providing these freedoms; while I might agree with that particular example, I don't think proprietary software in general is oppressive.

Here are two major problems with adopting the Free Software ideals: they're not economically sustainable, and they're impractical.

First, there's the question of economic incentives. Why would anybody produce software if they were not to get paid for it? More formally, if we consider the socially optimal amount of software produced, would that level be reached if people were not compensated for their work?

Well, would people be compensated for their work? Who would pay anybody to produce software which they could otherwise get for free? (Note that I'm well aware that free-as-in-speech software isn't by definition monetarily free, but I think that's a necessary condition, which I won't describe here.)

The model Stallman suggests is that a thousand people who want a piece of software get together to contribute $100 each toward the cause. They hire some programmers to create the software, and it gets done.

But he neglects the so-called "free rider" problem. Why would I, a member of this organization, contribute $100 knowing that if I simply chose not to, the project would still get done (and I'd still reap the benefits)? This is a known economic problem; this isn't just me being cynical.

It's possible that people would, in fact, be willing to pay for some software, because it's worth it to them to have it immediately and know that it gets done right. However, it's not clear that this would be the case for all the software that exists today. This problem may be solvable, but that Stallman failed to even address it really hurt the argument.

The other (related) problem with free software is its impracticality in some cases. Now, there are plenty of instances in which free software performs as well as or better than its proprietary counterparts (in terms of quality as well as efficiency). Two prime examples are the Apache web server (hosting more than half the sites on the Internet) and the Mozilla Firefox browser. But there are many cases in which free software simply doesn't cut it.

Take the Java platform. There are currently free implementations of the Java 1.4 platform, but Java 1.5 has been out for over a year now. I understand that it's a huge change, and I don't criticize the developers of the free implementations for their efforts. But it's unacceptable for many businesses to use these free implementations, because they'd be trading the great improvements of the latest platform for abstract ideals. For example, the 1.5 library contains immensely faster implementations of various collections and concurrency primitives in java.util.concurrent which can speed up programs by several times. A company which chose free software over the official Sun implementation would give this up and ship a product of substantially lower quality than it could be. That's not good for anybody.

Now, before I close I want to make it clear that I actually support free software as an ideal. I run GNU/Linux myself, using almost exclusively free software. And I'm often annoyed by the restrictions of proprietary software. What I question is the moral imperative that the FSF seeks to impart, as well as whether such a model can be sustained on a large scale. I'd love to develop a way to make free and open source software a viable mainstream idea, but I think the FSF has yet to do so. I invite your comments.

Tuesday, August 09, 2005

Give cell phones a break...

I've heard a lot of people complaining about loud cell phone users lately. They don't like hearing other people's cell phone conversations on the bus, in a store, etc. One group of people has even organized a movement against it. My question is: why?

What's the difference between somebody having a personal conversation about "the game" or "last night" on their cell phone, and somebody else having it with their buddy (who happens to be there)? One's the most annoying thing in the world, and the other's normal? I don't get it. Somebody enlighten me.

(I understand it's a different thing if people are loud and obnoxious on their cell phone - but it's the same as if they were being loud and obnoxious with somebody physically there. Both are annoying. But if they're just talking normally, as they would with somebody that was there, then I don't see the point of making a big deal about it.)

Say good-bye to passwords?

So I've been thinking about the idea of using public keys instead of passwords to gain entry into web sites. It has the advantage that users don't have to remember anything but their email address (if that!). I think it's also more secure. I've written a semi-informal paper about it, and I'd appreciate comments from both the tech-savvy (who can speak to the security) and not-so-sophisticated (who can speak to usability).

Tuesday, August 02, 2005

Biblical Ruminations

I just finished reading Ender's Shadow (Orson Scott Card). This great book ends with a Biblical quote from the parable of the prodigal son: "Drink and be merry, for my son was dead and is alive again; he was lost and is found." It made me realize that this book, primarily the story of a somewhat cold-hearted but super-intelligent child, is littered with these Biblical references. How funny it is, I thought, that such a non-religious text should contain so many religious quotes.

But then it occurred to me: regardless of one's beliefs about the existence or nature of God, or of his historical or modern-day followers, there's a great deal of wisdom in that holy book. Indeed, it embodies the collective wisdom of an entire society (one which, today, has been quite successful by most standards).

But a lot of that wisdom is overlooked today. Many people, especially young people, have not read any significant part of it. Many are turned off by those who preach its lessons. I think a primary reason for this is that those who preach the Word of God believe their interpretations to be complete, correct, and infallible, but I think that's an immature belief. Their attitude often comes off as arrogant and foolish, and causes many intelligent people to brush them aside.

I believe one must interpret Biblical stories as parables: rules of thumb for life. But they're not hard-and-fast rules. They're not without exception; if they were, life would be too simple. Making the right decision would be easy and deterministic. But if that were the case, what kind of life would our God expect us to have? Why would he endow us with gifts like intelligence and free will, if there were really only one correct way to live?

I don't claim to understand the Meaning of Life. But I think a large part of it is the experience itself, and the learning process we undergo. If the Bible dictated right decisions, then the ideal human life could be modeled with automata; it could be emulated with computers. And as much credit as I give computers, I don't think they're comparable to people, either in intelligence or any other metric of complexity.

The gray area is the spice of life! The controversy, the discussion, and the thought that goes into human decision making is the essence of life. There isn't a single set of rules which governs good decisions. That's why while the Bible provides vicarious experience through which we can learn to make better decisions, it cannot serve as an infallible, undebatable key to life.

But once one realizes this, man, there are a lot of good ideas in there.

Thursday, July 28, 2005

Welcome to Nowhere

Welcome to my new blog. Why does the world need a new blog? I don't know. It probably doesn't. But I wanted somewhere to write down some thoughts, and why not share them?

Why tubthumping? Because that's what I intend to do here.