Tuesday, December 29, 2009

How-To: Convert Audible (*.aa) files to MP3 format.

Audible.com is a great service for downloadable audio books, but they use their own format for the audio files. This format is internally very similar to MP3, but without Audible's own software to play the files, they are nearly useless. Audible files are also notoriously annoying to convert to MP3 format, and information on converting them is hard to come by and often incorrect. Most of the tools you see recommended for converting Audible files don't actually work either, so it is easy to throw a lot of money away on commercial conversion tools only to find out they can't help you anyway.

So, I've decided to share my own recipe for Audible file conversion here. I've converted hundreds of my own books using this process over the last several years, so I know it works, but this is NOT an easy process. It also doesn't work for everyone; other software or settings on your system can get in the way.

Anyway... I've done my best to make these instructions as clear and concise as possible, but I make no guarantees that you'll be able to get this to work. If it doesn't work for you, feel free to drop in some comments about your experience here, but don't expect me to troubleshoot it for you. I offer these instructions "as-is".

Read on...

The disclaimer:

I do not advocate stealing books, especially audio books. Unlike the RIAA and MPAA, book publishers only rip off authors a little bit (by comparison anyway). Audio books also have a LOT of additional overhead, what with all those narrators and sound editing and stuff. And audio books have a smaller audience from which to recoup those costs. So I urge you to buy your audio books honestly please. The more people buy them, the more likely we'll continue to see lower prices and larger selections in the future.

The advice I'm giving here is NOT intended to help you pirate audio books. It can be used for that purpose if you choose to use it that way; but that's on your head, not mine.

What I'm going to explain is how to take an audible file and convert it to MP3 format for your own personal use.

There are many legitimate reasons you might want to convert your Audible files to MP3s. The most popular is so you can play the books on a portable device or PC audio player other than those supported by Audible’s own software. My own personal reason for converting is to ensure that my files will not become useless should Audible go out of business down the road (backup purposes); also, so they can’t revoke my ability to listen to the book through the DRM should their own business arrangements with the books' publishers change in the future.

So on with the show!

Overview:

The idea here is that you will:
  • Download the Audio Book via an older version of Audible Manager

  • Open and Rip the audio book file in GoldWave

    • Read the file using the audible media player plug-in

    • Set cut/points and split the file into smaller chunks

    • Save the chunks as "wav" files

    • Convert the "wav" files to MP3s

  • Tag the MP3s

  • Enjoy the book
Here is what you'll need:

Older Version of Audible Manager - v3.5:

So far, I have been unable to get the newer versions of the Audible manager to work with MP3 conversions, so you'll need to get a copy of an old version of the Audible Manager (version 3.5).

If you already have a newer version installed you'll need to deactivate and uninstall it before installing version 3.5. Don't forget to deactivate! Audible limits how many activations you can have at a time and you don't want to use them all up by accident.

You can tell if you have the newer version of Audible's software because it has a nauseating green color scheme. The old version is NOT green, just plainish grey and white and very "old-skool" looking.

Audible does NOT make this specific version easy to get to from their regular site, so use this link to be sure you get the right file (the file is hosted on their site... it's just hard to find).

If you plan to do this on Windows Vista or Windows 7, you will want to turn off UAC and make sure to run the installer as an administrator (right-click the installer file and choose "run as administrator", see the link for more info).

You might prefer to use Virtual PC for Windows Vista or Windows XP Mode for Windows 7 and do your conversions within the virtual OS. This allows you to keep the newer Audible software installed on your main system. The older version of Audible Manager doesn't work with the newer "enhanced format" files either.

After you install Audible Manager, you will need to upgrade it through the Audible update service. If you are on Windows 7 or Vista, keep UAC off and launch Audible Manager as an administrator just like you did with the installer.

To upgrade, open Audible Manager, go to the Help menu, and choose "Check for update". Once it connects to Audible and gets the (big) list of upgradable components, scroll through and choose to upgrade only the "Audible Desktop Playback" and "Burning Audio CD Support" components. It will need to reboot the system.

DO NOT install the "Windows Media Player Filter" component when you get the updates. I also suggest that you make sure all this stuff works before bothering to install any of the other optional components.

The base Audible Manager has built-in media player support through an older plug-in (it even works with Windows 7's media player).

Once installed and upgraded, you will have to activate your audible desktop player.

Activating the CD burner component is optional, but you DO at least have to install it, even if you don't activate it. I don't know exactly why, but if you don't upgrade the burner then the media player plug-in tends not to work right.

On Vista or Windows 7, you can probably turn UAC back on at this point, but you should continue to run Audible as an administrator. You can also set up the shortcut or exe to always run as administrator.

Audible can be a pain in the balls on Windows Vista and Windows 7. If you run into errors complaining about not being able to write to the registry, or the media player plug-in doesn't work right, then you may have to set up the Audible *.exe files to always run as administrator. These can be found here:

C:\Program Files\Audible\Bin or C:\Program Files (x86)\Audible\Bin

The three files you'll want to set to run as administrator are: AudibleCD.exe, adhelper.exe, and Manager.exe.
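If you'd rather not click through the Properties dialog for each one, the same "always run as administrator" flag can be set from an elevated command prompt. This is just Windows' standard compatibility-flags registry entry, nothing Audible-specific; adjust the path for your system and repeat for each exe:

rem Same effect as checking "Run this program as an administrator" on the Compatibility tab.
reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" /v "C:\Program Files (x86)\Audible\Bin\Manager.exe" /t REG_SZ /d "RUNASADMIN" /f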

You may also have to turn off UAC while you do the conversions as well, though I've been able to leave UAC enabled with Windows 7 at least.

On one of my systems, I had to change permissions in the registry through regedit. If you don't know how to do this, I advise you to research it before trying it yourself. You can seriously break stuff in regedit, which is why I'm not linking to detailed instructions for this.

HKEY_CURRENT_USER\Software\Audible

Right click this, choose permissions, and make sure your own user account, the system account, and administrators group have "full control" of this key.

LAME:

You don't need this, but GoldWave's built-in MP3 encoder doesn't handle variable bit rate (VBR) encoding. If you want to use VBR then install LAME. GoldWave will be able to use LAME for the MP3 encoding instead of its own built-in encoder without any special setup on your part.

You can get LAME from the project site, but I find it easier to use this old installer package that includes the LAME encoder and RAZORLAME front-end GUI. It sets everything up in the right places so GoldWave can find it.

It is an older version of LAME, but I've never had any problems with it for book conversions.

GoldWave:

The big piece of the puzzle is to get a copy of the GoldWave audio editor.

GoldWave is an old, but awesome sound file editor. It has tons of really nice features, most of which you will not be using here. It is a daunting tool to use if you aren't familiar with sound editing, but relax... I'll walk you through it.

GoldWave is cheap (like $50), but you can download the trial and still be able to convert entire audio books. I do recommend you pay them for having made awesome software though, so don't be a cheap bastard! Use the trial to make sure you can get all this to work, and then buy it!

On Windows Vista or Windows 7, you will always need to run GoldWave as an administrator. I highly recommend setting the shortcut or even the *.exe file itself up to always run as administrator.

MP3 Tag Editor:

You will probably want an MP3 tag editor so you can tag your books once you've converted them. I use ID3TagIT, which is an old product that is no longer under development, but any tagger that you are comfortable with should be fine as long as it allows you to pull the tags from file names and edit tags on multiple files at a time.

Prepare for conversion:

The conversion will create some temp files, and these can be quite large. You'll want to create a "staging" folder in advance where you will put those temp files.

You will also want to create a folder where your MP3s will be put when you are done converting them.

Make sure you have a good bit of free hard drive space. During conversion you might have 2 or 3 gig of temp files floating around.

How to convert the book to MP3s:

OK, now that you have everything installed and setup, it’s time to convert a book!

Get the Book:

Go to Audible's site and download the book you want to convert. If the book is available in "enhanced format" don't download that version. Instead change the book to one of the numbered formats. I prefer to convert the format "4" files. These are the highest quality (short of enhanced).

If the book has multiple parts, go ahead and download all the parts.

Audible, by default, puts the files for your book in

C:\Program Files\Audible\Programs\Downloads\ or C:\Program Files (x86)\Audible\Programs\Downloads\

You can change the default location for these files in the Audible Manager options if you prefer.

Once you have the book files downloaded, close Audible Manager. Check the taskbar to make sure it isn't running in the background; if it is, right-click and choose "exit".

Open the Book:

Open GoldWave as an administrator.

Choose "File --> Open" from the menu.

In the "open sound" dialog, change the "files of type" drop down list to "all files".

By default GoldWave only shows files it "knows how to open". Several years ago, Audible got upset with GoldWave's ability to open audible files, and once the lawyers were done, GoldWave agreed to exclude "*.aa" files from the "supported files" list.


Don't worry; if everything is set up right with Audible Manager, GoldWave can still open the files.

In the "open sound" dialog, navigate to the location where audible is putting the files (see above), select the file for the book (or the file for the first part), then choose "open".

If everything is working right, a small pop-up window called "section navigation" will appear. This window has Audible's logo and a couple of buttons. Just leave this window open and NEVER click the buttons.


Another pop-up from GoldWave called "Processing Audio Decompression" will open too. This one has a progress bar and will close itself once GoldWave has finished reading the file (this can take a few minutes).


After GoldWave finishes decompressing the file, it will open the file in the main editor window.

Edit the Book:

GoldWave has all kinds of tools to allow you to edit and/or modify a sound file.

I will not be explaining how to edit the books in detail here, but if you want to edit the book's contents go for it!

A lot of people like to cut out the audible lead-in and lead-out stuff from the file. This is very easy to do in GoldWave, but when you are done don't forget to "select" the entire file (CTRL + A) before you move on to the next step.

Split the Book:

Audio books are BIG. You "can" convert the entire Audible file to one really big MP3 file if you want to, but I don't recommend just saving the book as one giant MP3.

If that's all you want to do, you can just choose "File --> Save As" from the menu, change the format to MPEG Audio (*.mp3), and set up any custom attributes you want to use. Then you are pretty much done.

Most MP3 players aren't too good about fast-forwarding within a file, and odds are you won't listen to the whole book in one go. It is best to split the book into smaller chunks to make it easier to resume playback mid-book later. Also, some books are so large that the entire thing might not fit on portable devices with limited storage space.

To split up the file, we'll tell GoldWave to scan through the file, find "quiet" spots where the narrator pauses, and put cue points into those spots at some interval we choose (like every 5 minutes or so). Those cue points let GoldWave break the file up into smaller parts without cutting the narrator off in the middle of a word.

By default, the entire book will be "selected" in the editor (the background will be bright blue). Make sure the entire book is selected now (or use CTRL + A to select everything).

Keep in mind that when splitting the book, we'll be dropping a lot of *.wav files in the staging folder. These files are big compared to the MP3s we will eventually produce so make sure you have spare free space on your hard drive.

To split the book:

Choose "Tool --> Cue Points" from the menu, then choose "Auto Cue" from the "Cue Points" dialog window.

In the "auto cue" pop-up window:


Make sure you have this set to Mark Silence (the default).

For most audio books set the "Below Threshold" slider to "-40.0". This setting determines how quiet the audio has to be in order to be considered "silence". Files with static or background noise may need a higher setting.

For most audio books set the "Minimum Length" slider to "1.00". This determines how long the silence has to last in order for it to be used as a Cue Point.

The "Minimum separation between cues" setting tells it how frequently you want the markers placed. This will become the approximate length of each of the files after we split them. I personally prefer my MP3s to come out around 5 minutes each, but if you want to split it into longer or shorter MP3s set this appropriately. For a 5 minute length set this to "5:00 00".

Don't worry about "Cue Placement within Area"; the default 50% setting is fine.

Click the OK button and GoldWave will scan the file, put in the cue points, and return you to the Cue Points pop-up window.

In the Cue Points pop-up:

You should now see the cue points that were created and their position within the file (measured in time).

Take a second to make sure these cue points are spaced out reasonably evenly and are generally as far apart as you wanted. Some books have narrators that talk fast, or have static background sound, or whatnot. If the cue points aren't very evenly spaced, you may have to delete them and repeat the auto-cue process using different settings.

Don't worry too much if the cue points are off a few seconds or even a minute or two here and there. This usually just means there are some "intense" areas of the narrative and you probably don't want a file break right in the middle of one of those anyway.

Once you are satisfied with the cue placement, click the "split file" button to open the "split file" dialog.

In the Split File pop-up:



Set the "destination folder" to the "staging" location we set up earlier.

Set the "Method of naming split files" to "use base filename and number" (the default).

Set the "Base Filename" you want to use. Remember that books can often run into the hundreds of files if you are splitting them up in 5 minute chunks, so this value should look like "### - My Book Name".

The "#" represents the track number, so this example uses three digits to represent track numbers allowing for up to 999 files (I've never had a book need more, and I have some REALLY big ones).

Set the "first number" to 1. When you are converting a book that has multiple audible files, you can change this for the second and subsequent parts so that the file names stay in order. Example: if you have 74 files in the first part, you'd start the second part at "75".

Leave the File Format setting at the default "Use CD compatible wave format and alignment".

Click OK and GoldWave will start splitting your file up. This can take from several minutes to hours depending on how fast your computer is and how big the file is.

Repeat for multi-part books:

If the book has multiple parts, after splitting the first part go ahead and repeat the "Open the Book", "Edit the book", and "Split the book" steps above for the second and subsequent parts.

Drop all of the split files into the same staging folder so you can convert them all in one go.

Convert the Book:

Once you have the files all split up and the *.wav files sitting in the staging folder, it's time to actually convert them to MP3 files.

You can use any wave-to-mp3 converter for this, but GoldWave can do this part just fine. It can do an even better job if you've installed the optional LAME encoder.
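As an aside, if you installed the standalone LAME encoder earlier, you could even skip GoldWave for this step and convert the whole staging folder from a command prompt. A rough sketch, with example paths and the settings I describe below:

rem Assumes lame.exe is on your PATH. -V sets the VBR quality (0 = best),
rem and -b/-B set the minimum/maximum bitrates in kbps.
rem In a .bat file, write %%f instead of %f.
cd /d C:\AudioStaging
for %f in (*.wav) do lame -V 1 -b 64 -B 128 "%f" "C:\MP3s\%~nf.mp3"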

First though, close any open file you have in GoldWave, or close GoldWave and restart it.

Choose "File --> Batch Processing" from the menu to open the Batch Processing dialog:


At the top of the pop-up window:

Click "add folder" and navigate to the staging folder where we dropped the wave files.

Set the "type filter" to *.* (the default)

Click OK to add this folder to the Batch Processing dialog.

In the "convert" tab at the bottom:

Check the "convert files to this format" checkbox

Set "Save as Type" to "MPEG Audio (*.mp3)"

Pick your MP3 Attributes:

In the "Attributes" drop-down you can either pick one of the default settings, or click the Attributes button to open the custom settings window.

I recommend the custom window if you have LAME installed.


For custom attributes:

Most audio books don't require high quality encoding. They are just spoken word for the most part so you can be conservative with the settings to make the MP3s smaller.

You probably don't want mono MP3s though, so in the custom attributes set "Channels" to "stereo".

The default sampling rate of "44100" is probably fine too.

Audio books lend themselves well to variable bit rate (VBR) encoding, which will make the files a good bit smaller. I recommend setting "VBR Quality" to somewhere between 0 and 4 (I use 1 most of the time). Then set the bitrate range to 64000 to 128000. 128k is "radio quality", which is fine for most audio books, but if you are an audio purist you may prefer higher settings.

In the Folder tab at the bottom of the Batch Processing pop-up:

Choose "store all files in this folder" and supply the folder name where you want your MP3 files to be dropped when they are done converting.

I highly recommend having a folder with the author's name, then a folder within that one with the book's title. Drop your files there. This allows you to use the folder names in an MP3 Tag Editor to automatically populate the artist and album tags from the folder names.

Click the "begin" button.

This will take a long time to finish... possibly hours on slower systems or with large books.

Delete the staging files:

Once the conversion completes, DO NOT forget to go into your staging folder in Windows Explorer and delete the *.wav files. These files take up a large amount of disk space, and you don't want them hanging around the next time you go to rip another book.

Tag your MP3 files:

Most MP3 players read the tags in the file to display track number, artist, album, etc. Books aren't music, but you can and should still tag them for your own convenience.

Bulk MP3 tag editors can usually pull in tags for multiple MP3s using the folder and file names.

One thing you might want to consider when tagging a book though... some players don't use the track # from the MP3 tag to sort the files (yeah, I know... ridiculous!). Also, some players don't display the track number. So I recommend that you make the "title" tag start with the track number.
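If your tag editor can't do that in bulk, a short script can. Here's a rough sketch using Python and the third-party mutagen library; the "### - Book Name.mp3" file layout is the one we set up during the split, and this is just an alternative, not part of my normal process:

# Run from inside the folder of converted MP3s.
# Assumes files named like "001 - My Book Name.mp3" and the mutagen library.
import os
from mutagen.mp3 import MP3
from mutagen.easyid3 import EasyID3

for name in sorted(os.listdir('.')):
    if not name.lower().endswith('.mp3'):
        continue
    title = os.path.splitext(name)[0]      # "001 - My Book Name"
    track = title.split(' - ', 1)[0]       # "001"
    audio = MP3(name, ID3=EasyID3)
    if audio.tags is None:
        audio.add_tags()                   # file had no ID3 tag yet
    audio['tracknumber'] = track.lstrip('0') or '0'
    audio['title'] = title                 # title starts with the track number
    audio.save()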

That's it... I hope you enjoy your audio books :)


Thursday, December 3, 2009

Review: Dell Studio XPS 16

I've finally retired my Dell XPS M1730. The M1730 was, and remains, a very powerful machine but I'd only ended up with that beast because of bad timing. When I needed to buy last time there just weren't any reasonable machines in the upper mid-range. The available systems were either just a little underpowered, or you had to go with the overpowered gaming rigs.

The short battery life of the gaming rig has been a challenge though, so I have grown very eager to leave it behind for something a little more reasonable.

I picked the Dell Studio XPS 16, also known as the Studio XPS 1640.

So now it's review time again.

Here we go!

Here is my configuration:
  • Intel Core 2 T9800 (2.93GHz 6M cache)

  • 8GB DDR3 RAM

  • 256GB Solid State Drive

  • ATI Radeon Mobility HD 4670 (1GB Ram)

  • RGB-LED Display (1920 x 1080 - 16:9 aspect ratio)

  • Slot load DVD/CD burner

  • Intel Wireless N-Ultimate

  • 9-Cell Battery (std is the 6 cell)
The XPS Brand:
When Dell first came out with the XPS line, the purpose was to make ultra-cool gaming machines to compete with Alienware.

My last XPS screamed "I AM A BADASS!" as soon as you saw it! Even the packaging it was delivered in was somewhat over-the-top. It even came with a leather binder for the manual, and an inscribed micro-fiber shammy to clean the screen with. When you opened the box, it gave the immediate impression that you had just bought something special! The system itself was eye-catching. If you pull out an XPS M1730 in public, heads will turn and jaws will drop! It is so flashy that it may as well come with spinners!

But Dell bought Alienware and has phased out the XPS gaming rigs in favor of the Alienware brand. The other XPS was an ultraportable that has since been replaced by the Adamo.

All these changes left the XPS brand in the lurch. Recently Dell re-launched XPS as a sub-moniker for the Studio laptop line, where it just denotes a high-end Studio instead of being a distinct brand in its own right.

The Studio XPS 16 comes in a plain black box without frills and extras now.

The new machine itself is very sleek and sophisticated, but gone are the flashy lights, complex color schemes, and gaudy logos. Anyone that looks close will still notice the high quality fit and finish, but it doesn't draw the eye from across the room like older XPS models did. Fortunately I have no interest in drawing attention, but if you buy for the "look-at-me!" factor, then get an Alienware instead.
Exterior:
The Studio XPS 16 is very thin and light for a full-size large-screen laptop. It is as thin as the last generation's ultra-portables were, but it still packs a lot of firepower into a small package.

The exterior surfaces are made from that glossy coated plastic that is all the rage these days. Mine is black of course, but you can get it in white or red if you want to spoil it.

The glossy finish looks fantastic, but it is a finger-print whore! You cannot touch it anywhere without leaving prints. Even the touchpad gets prints! You'll find this complaint in every review about this system because it really is THAT damned annoying!

Like the rest of the Studio line, this one uses round side-mounted hinges. Because of this, the display doesn't "stand up" on top of the housing like it does on most laptops. Instead it drops down behind the housing, covering the rear of the system entirely when open. As a result, you will need to tilt the screen a little further back, especially if you are tall. Sometimes this angle causes the glossy screen to catch glare from overhead lights though.

Also, the new hinge design requires that all of the ports be on the sides of the system instead of the back. This cuts the number of ports down a bit, but it does have all the ports you'd expect, except for the odd decision to omit a DVI port. Instead of DVI you have HDMI, DisplayPort, and an old-fashioned VGA port (for projectors). Fortunately you can get HDMI-to-DVI adapters for a couple of bucks easily enough.

Unlike other Studio laptops, this one doesn't put the power button on the side of the hinge, but it does use the space for the battery status lights. This looks kinda cool, but I'd rather have the battery indicator where I can see it while I'm working.
Screen:
Dell has always had phenomenal screens on their high-end laptops, and I've been very fond of their 17" displays for years. I've been using the 17" screens with a 16:10 ratio at 1920 x 1200 for about 8 years now (longer than they've even been available in external displays).

The smaller 16" screen of the Studio XPS gives you the option to switch to the 16:9 widescreen aspect ratio at a slightly reduced resolution of 1920 x 1080. This is the native resolution for 1080p HD TV, which is convenient if you watch movies on your laptop. I personally preferred the 16:10 ratio myself, but since the TV market picked 16:9 we may as well all just settle on the one standard and be done with the argument already.

What I wasn't prepared for though was just how much smaller a 16" screen would be compared to older 17" screens. Not only do you lose the diagonal inch, but the change in aspect ratio also reduces the screen's height considerably.

The screen is fantastic, but I really would love to see Dell offer it in 17" or 18" versions at 16:9. The drop in size is tolerable, but it really does cut close to the bone for those of us that need every scrap of screen real-estate that we can get.

The optional RGB-LED display on this model has gotten rave reviews, and I will tell you that those reviews are NOT overstated in any way!

This is the sexiest laptop display I've ever seen! Well worth the price of the upgrade (about $350 extra).

It is amazingly bright and vivid --So bright that I keep mine at about 1/4th of the max setting. If you turn it all the way up you will get tan, I promise!

The image clarity is fantastic too, and the colors are exceptionally vivid and distinct compared to traditional LCD displays. Keep in mind that I've been using high-end displays for years, and my eyesight sucks to boot; so for me to notice a significant jump is unusual.

My favorite part though is the uniformity of the illumination. The backlight on even the best traditional LCD always has a slight variance in brightness from one edge to another.

All this clarity and crispness is awesome for graphics and movies, but it does have a drawback too... White and black are also vivid, so black text on a white background ends up being TOO clear and crisp! This effectively undoes the deliberate blurring (ClearType) that most OSes use for more readable font rendering.

Most people won't notice this effect, but for programmers working in text editors all day this can be a really big deal!

You can compensate for too crisp text by using an off-white background, dimming the brightness, and/or modifying the cleartype settings. If your editor supports it, you can invert it to use a black background with light text -- my preferred solution.
Video Card & Gaming:
I'm not a die-hard FPS guy, but I do game a bit. I can live with slightly reduced detail levels, but I do like my games to run smoothly at, or near, the native resolution of the display. I also don't like being prohibited from playing certain games due to limited video hardware.

My enthusiasm for high-end screens and video hardware is not gaming related: My eye-sight is REALLY bad, and I was still losing a lot of vision even into my late 20's from staring at crappy monitors all day.

So for the last 10 years I've insisted on only the best displays and video cards, more as a matter of personal protection than for gaming.

Also, as a programmer, the tools I use really do benefit from large screens and high resolutions.

I figure that if I'm going to spend 10+ hours a day using a computer, the least I can do is invest in the best display I can get my hands on.


My last several laptops have used NVidia mobile GeForce GPUs, which handle most games well. But I've grown increasingly annoyed by NVidia’s lack of concern for mobile customers. They tend to abandon driver support as soon as the next generation GPUs hit the market (which is about 5 minutes after you buy your laptop). After that, you have to scrounge for hacked up desktop drivers online and hope they are stable enough to use.

I've also noticed a decline in the overall quality of NVidia’s mobile GPUs recently too. In my opinion NVidia is just so focused on the "next big desktop GPU" that they neglect the fine tuning and engineering in the mobile versions.

So this time I decided to give ATI another shot. At least they seem to actually CARE about the mobile market, and they've been doing much better on the high-end than they have in the past.

I haven't played a lot of games yet, but so far it has run everything I've thrown at it as well as my M1730 does. Left 4 Dead 2 just came out, and it runs at high quality settings at the native 1920 x 1080 resolution, though I did turn down the anti-aliasing to 2x instead of the default 4x.

Time will tell for sure, but so far I'm pretty happy with the ATI card.
Keyboard:
This laptop does away with the numeric keypad seen on most full-size laptops. This was necessary because the frame is a little too small, plus they put the speakers on either side of the keyboard instead of the front to allow the laptop to be thinner.

I don't mind the loss of the keypad one bit. Having my hands offset from the center of the screen is more annoying than any convenience that a keypad might add.

The keyboard is white backlit, and it has a very pleasant feel to it, though the action is a little mushier than on past Dell models. The tactile feedback is still sufficiently good though.

One thing I'm not sure about is the slightly oversized keys. They are taking some getting used to. The keys aren't crazy big, but for a touch-typist the subtle difference is noticeable at the outer edges.

My biggest gripe is the return of the dreaded "Apps" key (sometimes called the "context menu key"). I HATE this key on desktop keyboards and I wish the inventor a long and painful death. This key has been blessedly absent from most laptops until now. But the worst is that Dell put the apps key right next to the arrow keys... specifically to the immediate left of the left arrow key. This placement is an outright sadistic move on Dell's part!

Nothing sucks more than "CTRL+SHIFT+Apps" when you were just trying to "back-select" text in your text editor!

Fortunately there is a utility called SharpKeys that allows you to perma-kill the Apps key via a registry tweak.
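For the curious, the tweak in question is Windows' standard "Scancode Map" registry value, which remaps keys at the keyboard-driver level. A sketch of a .reg file that should kill the Apps key; double-check it before importing, and reboot to apply:

Windows Registry Editor Version 5.00

; 8 zero bytes of header, then the entry count (2 = one mapping plus terminator),
; then "map scancode E0 5D (the Apps key) to nothing", then the null terminator.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,00,00,5d,e0,00,00,00,00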

Overall I like the new keyboard better than the one on my old M1730, but I REALLY wish they'd just pick a standard keyboard layout and stick with it on all their systems. I hate having to relearn how to type every time I change laptops. Actually, I'd much prefer that they just go back to the old keyboard layout they used 5 years ago... that was the perfect layout, which is why Dell had used that same design for 10 straight years before they started mucking about with new keyboards.
Solid State Drive:
This is by far my favorite part of my new Studio XPS, though it isn't a feature unique to this specific system by any means.

On laptops, hard drives have long been THE performance killer. It takes a lot of power to spin a metal disk around at several thousand RPM, and laptops don't have a lot of power to spare. While traditional drives have gotten faster over the years, the power limitations have kept the laptop versions performing far below that of their desktop cousins.

With solid-state drives becoming a viable option, it is moronic not to jump onboard with your next laptop purchase. The reduced power requirements alone are worth the price tag! But the best part about SSDs in laptops is that power and spin rates aren't an issue anymore. SSDs on a laptop operate at the same speed that they do in desktops!

Dell doesn't offer the "best" SSDs on the market. Mine is a Samsung, which is decidedly a mid-grade SSD. But Dell's price on these is crazy good (only about $300 for the 256GB SSD). Even with the lower-end SSDs, the performance will still far exceed even the best traditional spindle-based drives.

Since this is THE bottleneck, switching to an SSD will improve every aspect of your system's performance. Everything is smoother and snappier. Boot times are amazing with Windows 7 (about 10 seconds if you don't load a bunch of startup junk!). Programs smoothly spring up when you launch them, and local drive searches are outright zippy!

I can't overstate just how much faster the whole system is with an SSD under the hood!

If you can afford the price of the high-end Intel solid state drives, then I'd advise you to just buy the Dell with the cheapest spindle drive they offer and replace it with the Intel SSD yourself. But even if you are on a budget, I still strongly advise getting the Dell SSD.

This laptop comes with an eSATA (external SATA) port, so you can compensate for the smaller sizes of internal SSDs by just buying an external spindle drive to store your music and movie collections. The eSATA connection allows those external drives to operate at full speed unlike traditional USB based externals (and eSATA doesn't add but a few dollars to the price of the external drive either).
Battery:
The default battery is a rather small 6-cell lithium ion. The reason is that the chassis is thin and small, so the 6-cell is just what fits. You can upgrade to a 9-cell battery, but to make room for the additional cells the battery is taller. The 9-cell acts like a stand and has its own rubber feet. This jacks up the back of the system a good bit.

The extra life of the 9 cell really is worth the upgrade price though.

After two years working on a gaming rig that got only about an hour per charge, the battery life on the Studio XPS 16 is great! But this is a high-end system, so it still has some power-hungry hardware. For that reason it isn't going to get the kind of crazy battery life you hear about with more conservative systems, but it still does very well.

With wireless turned off I get about 5 hours on the 9 cell battery. With wireless-n under heavy use I get about 3.5 hours. But with the power-hogging Verizon broadband card I get just shy of 3 hours at best.

Personally I dislike having the back of my system jacked up by the 9 cell battery, but many people do prefer this --it is similar to the angle you get with a desktop keyboard. I personally find that the angle adds stress on my hands, so I've ordered some tall rubberized feet to put on the front of the system to match the height of the battery.

For most people though, the jacked up rear is probably not a problem, so I still recommend the 9 cell battery.
Odd Stuff:
Core i7 CPU:
At the time I bought this system, Core i7 CPUs had just become available in this model laptop. The Core i7 sounds like a fantastic upgrade, but it is also a major change in architecture. The last time I jumped onto a brand-new architecture was when the first Core Duo CPUs came out. Those were much faster and nicer than the previous CPUs, but they also ended up being a little flaky. It wasn't but a few months after that that Intel replaced the Core Duo with the Core 2.

So this time I decided to go with the highest end of the previous generation rather than jump prematurely on the i7 bandwagon. Since the i7 came out, the Core 2 line has gotten a major price cut too. This allowed me to get the extra-high-end Core 2 at a decent price. The Core i7 costs a fortune by comparison, but I expect i7 prices will drop pretty fast.

If you are buying now, you should consider the quad-core i7. It will likely be worth the upgrade price over the Core 2, but don't expect miracles here. Most software still can't really unleash the true power of multi-core processors.
IR Receiver:

This laptop doesn't come with the "travel media remote" like many previous XPS systems did, but it is still supposed to be compatible with them. The travel remote is neat because it fits into the ExpressCard slot. No one ever has an actual ExpressCard (I have NEVER seen one in person), so storing the remote in the slot is a convenient use of otherwise wasted space.

On the rare occasions that I connect to my TV, the remote is handy and having it stored away in the express slot keeps me from losing track of it.

But when I tried to move my travel remote to the new Studio XPS 16, it wouldn't work!

Eventually I discovered that there is a driver for the built-in IR receiver, but for some odd reason Dell didn't pre-load it at the factory. The Device Manager didn't report a malfunctioning or unknown device either (which is the truly strange part), so it was not obvious what the problem was.

Once I figured it out and installed the receiver's driver the remote worked like a charm. No additional software is needed for the remote (another reason I really like it).
FastAccess Software:
One of the new toys shipping on many newer Dell systems is a software app called FastAccess. This is a face recognition login system. When you go to login, FastAccess will turn on the camera and take a look at you. If it recognizes your face, it automatically logs you in without typing your password. Otherwise you can just type the password as normal. The software "learns" how to better recognize you over time.

It is a neat feature, though not much of a time saver. The recognition is quite snappy, but typing a password doesn't take much time or effort either. Still it is a really nifty feature in that "pure-nerd" way.

The software also has some advanced capabilities beyond just desktop logins, and it does actually work surprisingly well! After a couple of manual logins it was able to pick me out almost every time (as long as the lighting was good).

The big problem though is this thing's insane usage of CPU resources. It sits there chewing up a massive 10% to 15% of my CPU resources... continuously! All the time!

Since it isn't doing anything unless I'm actually logging in, I have no idea what it needs all that CPU power for. I tried turning off all the optional features, but it still sat there sucking down clock cycles like mad.

So I uninstalled it, of course. Neat utility or not, nothing is worth sacrificing 10% of the available CPU!

Maybe future versions will fix this problem.

Friday, September 11, 2009

The case of the non-enforced foreign key relationship

While working with the TicketDesk 2.0 MVC project, I came across a really unusual situation within the relational database that TicketDesk uses.

The resulting voyage of discovery led to me finding my first legitimate use for a "non-enforced foreign key relationship" in a relational database... The situation required an explicitly defined foreign key relationship, but also prohibited the server from enforcing the relationship.

Here is how it happened...

I'm trying to make as few changes to the TicketDesk 1.x data model as possible. The focus for the 2.0 project is the shift to the new MVC platform and sticking with the old data model makes it easier to support upgrades. Plus the existing model is pretty decent as it is.

In TicketDesk, attachments are stored in the database rather than the file system. This prevents the need for write access to the web server's file system. It also allows you to copy or move ALL of the data by just moving the database. Simpler all-around.

We could have a huge discussion about the drawbacks and advantages of this design. The wisdom of using an RDBMS to store binary file data is very debatable. But the end-analysis here is that TicketDesk isn't expected to handle a lot of files, they are not likely to be large files, and the advantages of having all the ticket data in the database outweigh the disadvantages in most environments.

Anyway, the big problem in TicketDesk 1.x is that the client-side file upload mechanism is terrible. It does get the job done, but it requires that files be uploaded together in a single HTTP POST. This can be slow, the process is subject to request timeouts and size limits, and it uses a LOT of resources on the server. Additionally, if the server rejects the submission then the user has to re-upload everything all over again.

For TicketDesk 2.x I wanted to switch to using a flash based uploader. I personally HATE flash, but when it comes to file uploading plain HTML and javascript fail to provide any good solutions.

The flash uploader will send the files up to the server using separate out-of-band connections to the web server. This means that the server has to "put" the files somewhere until the user has finished submitting the rest of the form.

For new tickets, the files will get uploaded before there is a ticket in the database to associate them with. For existing tickets, the uploads need to be held in a pending status until the user has finished supplying the meta-data  (descriptions, file names, and comments for the activity history log).

I really didn't want the server to dump pending attachments to the file system or hold them in memory, so I needed to modify the database to provide a place for the server to put the uploaded files as soon as they arrived.

The original data model looked like this:



As you can see, there are two tables: Tickets and TicketAttachments. The contents of the file are stored directly in TicketAttachments along with meta-data (file size, name, description, file type, etc). The old primary key was on both TicketId and FileId... not my best primary key definition ever, but it made sense at the time. TicketId is foreign-keyed to the TicketId column in the parent Tickets table.

Not a remarkable design really.

At first I toyed with creating a new "PendingFiles" table. But there was already a table with the right kind of structure in the database. It seemed to be a waste to duplicate table structure and I've never been a fan of "staging" tables anyway.

So I started by figuring out how to use the existing TicketAttachments table to store the "pending" files as well as "real" files.

For pending attachments to an existing ticket, the table is already pretty good. All that is needed is a column to flag files as pending instead of real. So I added an "IsPending" bit column to the table. New attachments have IsPending flagged. Once the user commits the changes, we just flip the bit to make the file a real attachment.
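The schema change itself is a one-liner. Something like this, assuming SQL Server and the table names shown in the diagrams (the constraint name is just an example):

ALTER TABLE dbo.TicketAttachments
    ADD IsPending bit NOT NULL
    CONSTRAINT DF_TicketAttachments_IsPending DEFAULT (0);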

Here is the revision.


Now, to handle attachments for new tickets. I figured that I could put the files in the same table, but we'd need to leave the TicketId column null... we don't know the TicketId yet.

I didn't like the primary key as it was anyway. FileId is an identity column and can serve as a primary key all by itself.

So I changed the primary key to FileId, marked the TicketId column to allow nulls, and got this:
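In T-SQL terms, the change amounts to something like this (the constraint names are hypothetical; check your own schema for the real ones):

ALTER TABLE dbo.TicketAttachments DROP CONSTRAINT PK_TicketAttachments;
ALTER TABLE dbo.TicketAttachments ALTER COLUMN TicketId int NULL;
ALTER TABLE dbo.TicketAttachments
    ADD CONSTRAINT PK_TicketAttachments PRIMARY KEY (FileId);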


Great! Except...

Because of the foreign key relationship, we can't add a row to TicketAttachments with a null TicketId value unless there is a row in the Tickets table that also has a null TicketId. We can't make TicketId in the Tickets table nullable because it is the primary key (and an identity field to boot).

Ok fine... the DBA in me screamed bloody murder, but I deleted the foreign key relationship. I've been doing web development long enough to know when the advantages of denormalization outweigh good relational design. I hate having to manage relationships purely in code, but sometimes that really is the best way.

So we get to this:



But! LINQ to SQL needs the foreign key to generate the relationship between the entities. I could manually define the relationship in the LINQ to SQL mapping file, but I *know* that this is going to bite me in the ass later.... it always does. Combine that with the uncomfortable absence of a foreign key in the database itself and I just couldn't bring myself to let that slide.

Then I remembered that there is this odd setting in the SQL Management Tools. The designer for editing foreign keys has a true/false flag for "enforce relationship". I never paid it much attention... I mean, what crazy bastard would go to all the effort to define a foreign key, then tell the server to ignore it?

That just never made any sense to me. Until now...

So I did some testing and research. If you set a foreign key up in SQL, but tell it not to enforce it (this is called the NOCHECK option) then the SQL server will behave as if the foreign key doesn't exist. Thus, we can add rows to TicketAttachments that don't correspond to a row in the parent table in direct violation of the foreign key.
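Scripted out, a non-enforced foreign key looks roughly like this (using the TicketDesk table names; the designer generates something equivalent when you flip the enforce flag):

-- Add the key without checking the rows that already exist...
ALTER TABLE dbo.TicketAttachments WITH NOCHECK
    ADD CONSTRAINT FK_TicketAttachments_Tickets
    FOREIGN KEY (TicketId) REFERENCES dbo.Tickets (TicketId);
-- ...and tell the server not to enforce it for future inserts/updates either.
ALTER TABLE dbo.TicketAttachments NOCHECK CONSTRAINT FK_TicketAttachments_Tickets;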

Tools that read the database schema do see that there is a relationship though, and so LINQ to SQL will still generate a relationship in the mapping file automatically. From the point of view of external tools it is as if there actually is a regular foreign key relationship in place.

So the final design ended up looking like this:


The foreign key shown here is set up with the NOCHECK option and is not enforced by the server.

So now I know why you'd use the NOCHECK option... I don't expect to need it often, but now I know why such a bizarre option exists and have finally, after years of working with databases, had a good reason to use it.

Tuesday, September 8, 2009

TicketDesk 1.2.3 Released on CodePlex

I've formally packaged a new version of TicketDesk at Codeplex.

TicketDesk 1.2.3 is a minor update. The most significant change is that the HTML editor has been replaced with the markitUp! editor using the markdown syntax.

This should hopefully work around some problems many users were reporting with adding comments in IE8, as well as some display problems that could occur when a user cut/pasted content from a word processor or other sources with embedded rich formatting.


Wednesday, September 2, 2009

TicketDesk 2.0 MVC - alpha demo now available

As I've mentioned before, I'm working on TicketDesk 2.0, the next major version, on the ASP.NET MVC framework. The project is still a little early in development, but it is starting to resemble a real application now.

I've put a demo site up to give the public a preview...

The demo of TicketDesk 2.0 MVC alpha is now online. I'll be updating it from time to time as I reach different milestones. When the project gets closer to a beta state I'll likely check it into source control over at the TicketDesk codeplex project, but for now I'm working offline on it.

Implemented so far:
  • TicketCenter with the default list views

  • TicketEditor displays ticket information

    • The visual formatting is very rough

    • Attachments cannot be downloaded (this is on purpose... my demo is not an FTP server for the public's warez  :P)

  • New Ticket feature should be fully functional 

  • TicketEditor activity panel should support most activities

    • "edit ticket" and "add attachments" remain incomplete.   

  • Account management is borrowed mostly from the "sample" MVC app, but has been customized for ticketdesk.
Stuff that isn't done:
  • Notifications and RSS are not implemented

  • Visual Styling and formatting is very "stock MVC sample"  for now

    • I like the general layout, but the text formatting needs lots of work

  • Ticket Search is absent

  • None of the admin tools are complete
Go poke at the demo and see what you think.

I welcome any comments, observations, or questions you might have, but don't go reporting bugs yet...this is an alpha demo so I already know it doesn't work very well yet :)

Enjoy...


Thursday, August 27, 2009

Fixing markitUp! 1.1.5 - bug in IE8 when closing preview iframe

[UPDATE - 1/12/2010]: MarkItUp! version 1.1.6 includes a fix for this issue (thanks Jay!)

I've been working with the wonderful markitUp! editor by Jay Salvat. Specifically, I'm using markitUp! as a markdown editor for TicketDesk and a few other apps I'm working on. I'll probably post more about using markitUp! as a markdown editor later, but for now I wanted to address a specific bug that markitUp! exhibits in IE8.

By default, markitUp! uses an iframe element for a preview window.

In IE 8, closing the preview iframe will cause IE 8 to try to close the entire hosting window or tab. IE 8 will prompt the user before it does this, but if you click yes when prompted it will indeed kill the window/tab... which is not good.

After some digging, I've located the problem...

Here is the relevant code; the part that runs when closing the preview window is the "else if (altKey === true)" branch:

// open preview window
function preview() {
    if (!previewWindow || previewWindow.closed) {
        if (options.previewInWindow) {
            previewWindow = window.open('', 'preview', options.previewInWindow);
        } else {
            iFrame = $('<iframe class="markItUpPreviewFrame"></iframe>');
            if (options.previewPosition == 'after') {
                iFrame.insertAfter(footer);
            } else {
                iFrame.insertBefore(header);
            }
            // previewWindow ends up holding the iframe's content window
            previewWindow = iFrame[iFrame.length - 1].contentWindow || frame[iFrame.length - 1];
        }
    } else if (altKey === true) {
        if (iFrame) {
            iFrame.remove();
        }
        previewWindow.close();
        previewWindow = iFrame = false;
    }
    if (!options.previewAutoRefresh) {
        refreshPreview();
    }
}
What is supposed to happen is that the code removes the iframe element, then calls the close method on the previewWindow variable. That variable would normally hold a reference to the content window within the iframe (you can see where it is set near the end of the first branch in the code excerpt above). So calling close on the variable would normally just try to close that sub-window... or maybe it would do nothing at all because the iframe containing the sub-window would have already been removed. The behavior is internal to the browser, and I suspect the specific mechanics are probably a little different from one browser to another. But either way, this works fine on all the browsers I've tested with except IE 8.

With IE8, this code appears to invoke the close() call on the containing window instead (which is your page's main window in most cases). If you make the close call before the iframe is removed, IE8 behaves like the other browsers do, but when the close call happens after the iframe is removed, IE8 starts asking if you want to close the whole browser window. For some reason, the contents of the previewWindow variable change after you remove the iframe.

Not a problem! My solution was to simply alter the code to only call the close method when there isn't an iframe being used. That way, close is called for cases where you are using the pop-up window, but doesn't get called when you are using an iframe.

// open preview window
function preview() {
    if (!previewWindow || previewWindow.closed) {
        if (options.previewInWindow) {
            previewWindow = window.open('', 'preview', options.previewInWindow);
        } else {
            iFrame = $('<iframe class="markItUpPreviewFrame"></iframe>');
            if (options.previewPosition == 'after') {
                iFrame.insertAfter(footer);
            } else {
                iFrame.insertBefore(header);
            }
            previewWindow = iFrame[iFrame.length - 1].contentWindow || frame[iFrame.length - 1];
        }
    } else if (altKey === true) {
        if (iFrame) {
            iFrame.remove();
        } else {
            //SMR - else block added here to prevent this call when preview is in iframe
            //      IE8 incorrectly tries to close the hosting window if you call it when using iframe
            previewWindow.close();
        }
        previewWindow = iFrame = false;
    }
    if (!options.previewAutoRefresh) {
        refreshPreview();
    }
}

This seems to fix the problem in IE 8, while not breaking anything in the other browsers I'm testing with (Chrome, Firefox, etc). I could have just moved the close call so it happened before the iFrame gets removed (which also seems to work), but I'm a little concerned that closing the window before removing the iframe might have unexpected results in browsers I'm not testing with... though it would probably be fine either way.

If you want, you can download my modified version of the markitUp! source. I have included both a standard and a minified version.

Please note that this version also contains my own killPreview() function. By default markitUp! has only one toolbar button for the preview. If you click it, the preview opens and if you ALT + Click it the preview closes. But in my own implementations I prefer to have a separate toolbar button for closing the preview... users don't magically "know" about the ALT+Click trick and I get tired of people reporting "I can't close the preview window" as a bug in the apps that use markitUp!.


Monday, August 3, 2009

TicketDesk 2.0 and the ASP.NET MVC Framework

Now that the ASP.NET MVC Framework is out, I've decided to tackle learning the new platform the same way I usually do... by writing a real application for the new platform.

TicketDesk 1.0 was originally just a playground application to help me get up to speed during the last round of new-tech releases from Microsoft... so it seemed natural to explore the MVC Framework with a re-write of the same application. TicketDesk is just small enough to be workable by a lone part-time programmer, and it is just big enough to provide a decent proving ground for the new technologies.

So let's discuss MVC and how it relates to TicketDesk 2.0...

One of the ironies of my life is that I've been primarily an ASP.NET developer ever since it was first released and I've also been working with MVC and MVC-like development patterns nearly that entire time too.

MVC patterns just make sense for web apps, seeing as the nature of HTTP itself matches the pattern so cleanly. In other environments, MVC has been a formally accepted pattern for years and years.

But ASP.NET Webforms was initially designed to make programming for the web feel more like windows programming with an event driven programming model. Microsoft excels at event driven programming techniques, and the resulting webforms framework was a fantastic adaptation of the pattern into web development space. Webforms allows you to mostly ignore all that messy HTTP stuff and code pages just like you would in a persistent windows environment.

But like most abstractions, webforms tends to break down when you try to do stuff at the edges. So it wasn't uncommon for platform developers to run into problems that just didn't map well to the abstractions provided by webforms. Many of us ended up spending amazing amounts of time hacking into the gap between the webforms model and the raw HTTP pipeline itself.

If you look at the architectures behind most of the larger and more successful ASP.NET application platforms (SharePoint, the 1.x starter kits, IBuySpy, DotNetNuke, CommunityServer, etc.) you will usually find elaborate examples of these kinds of hacks. All of them are just variations on a theme: use MVC-like patterns to gain some control over the HTTP request/response pipeline.

With the rise of modern AJAX techniques and technologies, the need for a new approach has become very apparent. Ajax mucks around with the request pipeline in ways that the webforms framework does not tolerate elegantly. If you've tried to do any significant Ajax stuff in webforms, you've probably noticed how quickly things get messy.

Fortunately Microsoft recognized this and decided to formally embrace the MVC pattern. The result is the ASP.NET MVC Framework which was delivered a few months ago.  

Which brings me back to TicketDesk....

I originally built TicketDesk 1.x  as a way to experiment with .NET technologies that were new at the time (Ajax, EF, and LINQ to SQL). So I thought it would be fitting to do the same thing again now to get my hands dirty with the Microsoft MVC Framework.

I didn't port the existing TicketDesk 1.x code though. Instead, I've started with a clean solution and am re-implementing the same set of features as TicketDesk 1.x using all fresh code written for the MVC framework.

I suspected all-along that TicketDesk would probably map very well to the MVC Framework, and I'm no stranger to the MVC design pattern itself. I had also hoped that the MVC design pattern might eliminate many of the obstacles I had encountered especially with the Ajax parts of TicketDesk 1.x.

The experiment is about 3 months old now, and has been very challenging. The MVC Framework itself has a lot of room for improvement, but is a solid foundation on which to start. Some of the most obvious drawbacks are the slim Visual Studio IDE support, sparse documentation, and poor examples of how to do ASP.NET MVC "the right way".

The biggest challenge for me has been the steep learning curve. I've been writing web apps for over 12 years, most of that with ASP.NET, but the ASP.NET MVC framework really requires an entirely different way of thinking. I'm also just now learning my way around jQuery, which has further slowed me down.

Currently Microsoft provides only basic Ajax functionality within the MVC framework, but they have encouraged the use of jQuery. jQuery gives you a rich and very successful source for all those fancy UI components that Microsoft doesn't provide in the MVC framework. While the ASP.NET MVC Framework doesn't help you much with jQuery, it doesn't interfere with it either, and future versions of the framework promise to embrace jQuery head-on. I've not been impressed with Microsoft's own Ajax libraries so far, but jQuery has a very large 3rd-party community developing high quality code... and most of it is some kind of open source to boot.
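To make the contrast concrete, here is a rough sketch of the kind of Ajax endpoint the MVC Framework makes easy (the controller and action names here are just illustrations, not actual TicketDesk code). A plain controller action returns JSON directly, and a jQuery call on the page (something like $.getJSON) can consume it without any postback or view state getting in the way:

    using System.Web.Mvc;

    public class TicketController : Controller
    {
        // GET /Ticket/Summary/42 -- called from the page via jQuery;
        // no postback, no view state, no page lifecycle to fight.
        public JsonResult Summary(int id)
        {
            // In a real app this would come from the data layer.
            var summary = new { Id = id, Title = "Sample ticket", Status = "Active" };
            return Json(summary);
        }
    }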

While I've found that writing against the MVC Framework takes significantly longer and requires much more effort, the quality and usability of the resulting application is dramatically better.

So I've formally decided to re-write the official TicketDesk application on the ASP.NET MVC Framework.

The initial 2.0 release will not contain very much new functionality compared to 1.x, but I hope to provide a significantly better user experience and a much more compartmentalized code-base.

Currently TicketDesk 2.0 is targeting the ASP.NET MVC Framework 1.0 on the .NET 3.5 stack. I did experiment with the RTM release of the Entity Framework this time, but I still find that EF just isn't ready... so I'll be sticking with LINQ to SQL for a while longer. I'm confident that I can switch back to EF should the next version resolve my remaining concerns. It is likely that the next version of the MVC Framework will be released before I'm done with TicketDesk 2.0, so TicketDesk will probably shift to target that version before the final release.
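For the curious, here's roughly what the LINQ to SQL side of things looks like. The Ticket mapping below is a hand-rolled stand-in I've made up for illustration, not TicketDesk's actual schema; in practice the entity classes come from the DBML designer:

    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    [Table(Name = "Tickets")]
    public class Ticket
    {
        [Column(IsPrimaryKey = true)] public int Id;
        [Column] public string Title;
        [Column] public string Status;
        [Column] public DateTime CreatedDate;
    }

    public class TicketListDemo
    {
        public static void Main()
        {
            // The connection string is a placeholder, of course.
            using (var db = new DataContext("Data Source=.;Initial Catalog=TicketDesk;Integrated Security=True"))
            {
                var open = from t in db.GetTable<Ticket>()
                           where t.Status == "Active"
                           orderby t.CreatedDate descending
                           select t;

                foreach (var t in open)
                    Console.WriteLine("{0}: {1}", t.Id, t.Title);
            }
        }
    }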

Here are some early goals for the TicketDesk 2.0 project:
  • Implement 100% of the functionality from TicketDesk 1.x
      
  • Upgrade Tools for 1.x to 2.x migration
      
  • Improve Application Settings and Administration (more and better online admin tools)
      
  • Improve formatting for RSS and Email notifications
      
  • Enable full functionality for browsers without JavaScript
      
  • Enable smoother Ajax UI features for browsers that do support JavaScript
      
  • Use a Markup editor instead of a WYSIWYG HTML editor (too many problems with raw HTML data entry). I'm currently working with MarkItUp! using Markdown syntax (see the sketch just after this list).
      
  • Unit testing for controllers and business/entity logic (using VS Test Project)
      
  • A cleaner separation between the web application and model/business/entity logic
      
  • Fully W3C compliant XHTML 1.0 Strict Output
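On the markup editor goal: the idea is that the MarkItUp! editor collects Markdown text in the browser, and the server renders it to HTML for display. Here's a rough sketch of the server side; I'm assuming the open-source MarkdownSharp library here just for illustration. The point is simply that users never enter raw HTML:

    using MarkdownSharp;

    public static class CommentFormatter
    {
        private static readonly Markdown _markdown = new Markdown();

        // Render user-entered Markdown to HTML for display, so raw HTML
        // never flows from the user into the ticket.
        public static string ToHtml(string markdownText)
        {
            return _markdown.Transform(markdownText ?? string.Empty);
        }
    }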
I have no real time-frame for a 2.0 delivery as this is still a part-time project for me. Currently I have implemented most of the functionality for the TicketCenter, new ticket creation, and have just started work on the Ticket Viewer/Editor.

I have not mapped out a potential 1.3 upgrade of the older code-base either, but my primary focus will be on the 2.0 MVC version.


Wednesday, June 10, 2009

TicketDesk - Design Philosophies Explained

It has been just over a year since I introduced TicketDesk over at CodePlex. While it hasn't taken the world by storm or anything, it did generate a lot more interest than I would have expected. There are several companies using TicketDesk in production environments, and there have been a few thousand downloads from other people that may be using it too.

While TicketDesk isn't generating the kind of download numbers that I'd want to base a software startup on, for an open source project it is what you might call "wildly successful".

If there was a major failure on my part with bringing TicketDesk to the public, it would be that I didn't do a good job explaining the ideas behind the overall design. So let me take a stab at explaining the philosophy behind TicketDesk.

The general idea behind TicketDesk was to take my 15 years or so of experience, much of it spent being frustrated by help desk issue trackers, and use that experience to design a different kind of help desk system; one that avoids those problems.

And believe me, I have a very long list of complaints with help desk systems!

I suppose the best way to explain it is to discuss the fundamental design ideas and then illustrate how TicketDesk implements them.

TicketDesk is an issue tracker for help desks... and that is all:

The help desk at most organizations will have many considerations aside from issue tracking. There are internal rank structures, chains of command, political issues, business practices, and financial considerations of all kinds. Unfortunately, the help desk is deeply involved in all of these things.

The mission of TicketDesk is to allow the help desk to keep track of issues, and that is all it does.
  • TicketDesk does not attempt to understand your org chart.

  • It doesn't recognize user rank, status, or departmental affiliations.

  • It doesn't act as a time tracker.

  • It doesn't do billing.

  • It doesn't do project management.

  • It doesn't manage your inventory.

  • It doesn't handle your business process.

  • It doesn't do inter-departmental accounting.

  • And it absolutely does NOT care about your internal politics.
TicketDesk is made for internal help desks:

TicketDesk was designed exclusively for use by help desks supporting users within the same organization. It assumes there is a decent level of trust between all participants.

TicketDesk can be used in other environments, and there are plans for future versions to better enable external user scenarios.

You should carefully evaluate TicketDesk's features before attempting to use it in a customer-facing capacity. You may also find the features insufficient if your organization performs contracted support for external users.

Have as few data fields as possible for any given ticket:

This is the most basic design idea behind TicketDesk.

In most help desk systems there are just too many fields, and few of them turn out to be useful. During planning, management gets hyped up about the advantages all those fields will bring, but it doesn't take long for the staff to learn that the free-text description field is the only reliable source of information (and even that is a dubious assumption).

So I've spent a lot of time thinking about the various fields common to similar systems.

There are many reasons why different fields fail, but it boils down to just three overall trends:
  1. The fields may not relate to the user's specific problem. For example, questions like "what OS are you using?" aren't useful when the user is reporting a problem with their phone.

  2. The end user is incapable of answering some questions. It isn't their fault; they aren't IT professionals, so they just don't know the answers, especially to the more technical and detailed questions like "what is your OS version?" or "what is the printer model number?".

  3. The end user is not qualified to answer some questions. This isn't a lack of skill, just a lack of information. The classic example is the priority field, for which users cannot provide a meaningful answer. They don't know how their problem stacks up against other issues; only IT can provide a useful answer here.
After exploring these problems, I came to the conclusion that they simply cannot be solved by software, and it is unlikely that training, threats, or corporate policy would help either. The only solution is for the system to expect these problems, embrace them, and concentrate on helping the humans work around them on a case-by-case basis.

TicketDesk therefore follows two important philosophies:
  1. Avoid asking any question that the user cannot reasonably be expected to answer with 100% accuracy, no matter what kind of problem they are reporting.

  2. Avoid asking questions that don't apply to nearly every possible situation being reported.
Thus TicketDesk asks as little from the user as possible. The system expects that the only useful field will be the free-text details field. Other fields do exist, but they are designed to be general in nature, optional, or are answered by the staff rather than the end-user.
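To illustrate, a ticket under this philosophy boils down to something like the sketch below (the property names are illustrative, not TicketDesk's actual schema). The free-text details are the only thing really expected of the end user; everything else is broad, optional, or staff-supplied:

    using System;

    public class Ticket
    {
        public int Id { get; set; }
        public string Title { get; set; }       // short and general, user-supplied
        public string Details { get; set; }     // the one field expected to matter
        public string Category { get; set; }    // optional and deliberately broad
        public string Priority { get; set; }    // set by staff, not the submitter
        public string AssignedTo { get; set; }  // staff-side field
        public DateTime CreatedDate { get; set; }
    }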

Tickets should evolve as a natural conversation between the help desk and the end-user:

As discussed above, TicketDesk does not attempt to gather a lot of detailed, quantifiable information up front. Instead it expects that the help desk may have to ask for additional information.

Tickets are designed to be an ongoing two-way conversation with the user by borrowing heavily from web 2.0 and social networking concepts.

The activity area of a ticket acts as a forum-style discussion board combined with an activity and history log. Every action that can be performed on a ticket solicits additional comments, which also become part of the ongoing conversation.

The notification system (and RSS feeds) ensures that both staff and users remain informed as the ticket progresses to completion. And TicketDesk makes it very simple to perform actions or add comments, which encourages the staff to actually make frequent updates as they work through an issue.

The result should be a constant stream of information flowing between the user who submitted the ticket and the help desk staffer assigned to deal with it. Either party, as well as interested 3rd parties, can jump in at any time to add to the conversation.

Avoid Workflow & Routing Hell:

This is one of the more controversial of TicketDesk's design philosophies.

Most help desk systems have customizable and dynamic workflows with rule-based routing. This allows for a lot of control over how a ticket moves through the system.

There is no inherent "problem" with this kind of system in my experience. I have had the misfortune of working with systems where the workflow customizations were insanely over-engineered, creating horribly inefficient routes with many unnecessary steps, but when used wisely these features don't exactly present a "problem" directly.

Avoiding advanced workflow and routing is a design philosophy based mostly on technical considerations.

Workflow and routing are a nightmare to code, especially for a small development team with limited resources, and the advantage of this kind of feature set is rather limited. Other than making managers happy by having the system act as a policy-cop, there isn't much added value.

Additionally, TicketDesk is designed to collect a very minimal set of fields, and it doesn't expect end users to fill them in meaningfully, so there aren't many fields that could participate usefully in advanced workflows anyway.

Instead I designed TicketDesk to use a static state-based workflow that should be valid in just about any organization. While simple, it is also unobtrusive and frictionless for the most part.
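As a sketch of what "static and state-based" means in practice (the state names below are my shorthand for the usual lifecycle, not an exact list from the code), both the states and the transitions between them are fixed in code rather than configured by an administrator:

    public enum TicketStatus
    {
        Active,     // open and being worked
        MoreInfo,   // waiting on the user for details
        Resolved,   // staff believes it's fixed
        Closed      // confirmed and done
    }

    public static class TicketWorkflow
    {
        // The allowed transitions are hard-wired; there are no routing
        // rules to configure, so there is nothing to over-engineer.
        public static bool CanTransition(TicketStatus from, TicketStatus to)
        {
            switch (from)
            {
                case TicketStatus.Active:   return to == TicketStatus.MoreInfo || to == TicketStatus.Resolved;
                case TicketStatus.MoreInfo: return to == TicketStatus.Active;
                case TicketStatus.Resolved: return to == TicketStatus.Closed || to == TicketStatus.Active;
                default:                    return false;
            }
        }
    }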

There have been some requests for workflow options that would require only simple customization or a limited set of pre-defined optional rules. I plan to explore those ideas for inclusion in future versions of TicketDesk, but I have no plans to introduce a full-featured workflow customization or rule-based routing engine.

Allow organic categorization:

Most issue tracking systems provide the end user with several cascading category lists and context-sensitive sub-categories. The options in the sub-categories adjust according to previous selections to produce granular categorizations. As described before though, this just doesn't work that well: users rarely get these selections right, and the selections themselves are often incomplete or outdated.

By omitting detailed categorization, TicketDesk does give up a little searchability, and it can be more difficult to locate related tickets.

To give TicketDesk decent searchability without reproducing all the problems of traditional over-categorization, TicketDesk includes a web 2.0 style tagging mechanism. This allows users and staff alike to organically add keywords to tickets as they desire.

Anyone can tag, but tagging is only really successful as a substitute for categorization if the help desk takes it on themselves to ensure that tickets are tagged well before being resolved. This takes some discipline and effort, but the up-side is a degree of searchability that can far exceed traditional categorization mechanisms. And best of all, there isn't much administrative overhead to tagging, since the system evolves and adapts all by itself over time.
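The mechanics are about as simple as tagging gets; here's a rough sketch (the names are mine) of turning a free-form tag string into clean, distinct keywords so that "Printer, printer , PRINTER" all collapse to one tag:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class TagParser
    {
        // Splits a comma-delimited tag string into normalized, distinct keywords.
        public static IEnumerable<string> Parse(string tagList)
        {
            if (string.IsNullOrEmpty(tagList))
                return Enumerable.Empty<string>();

            return tagList.Split(',')
                          .Select(t => t.Trim().ToLowerInvariant())
                          .Where(t => t.Length > 0)
                          .Distinct();
        }
    }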

Tagging is optional though, and many shops (mine included) choose not to make much use of it. That's OK; TicketDesk doesn't rely on tagging, and a lack of it doesn't degrade the system's ability to perform its primary mission.

Email Notifications should not spam users:

This is a major problem in a lot of different software systems. There is a need to keep users informed of changes in a timely manner, but if you send notifications too frequently the system will overwhelm users.

When this happens people tend to ignore notifications and the important ones get lost in the noise.

To combat this problem, TicketDesk puts an enormous amount of effort into reducing the number of notifications sent, ensuring that each notification conveys useful new information.

Here are the basic rules behind the email system:
  • Do not notify users about changes that they made themselves. You know what you just did, right?

  • Wait a few minutes before sending a notification to see if additional events involving the same ticket happen. If so, wait until changes slow down a bit, then consolidate the events into a single message.

  • Convey all of the information about the ticket in the message so users do not have to log in to see what is going on.

  • Attempt to guarantee delivery by supporting an intelligent re-try mechanism.
Despite the fact that this system took a while to implement, it has proven good at keeping down the number of messages sent, as well as eliminating unnecessary notifications.
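Here's a rough sketch of the consolidation rule described above (the queue shape and the quiet period are my assumptions; the post only pins down the behavior). Events are held per ticket, the timer resets whenever new events arrive, and a batch goes out as a single message only after things quiet down:

    using System;
    using System.Collections.Generic;

    public class NotificationBatcher
    {
        private readonly Dictionary<int, List<string>> _pending = new Dictionary<int, List<string>>();
        private readonly Dictionary<int, DateTime> _lastEvent = new Dictionary<int, DateTime>();
        private readonly TimeSpan _quietPeriod = TimeSpan.FromMinutes(3);

        // Called for each ticket event; nothing is sent yet.
        public void Queue(int ticketId, string eventDescription)
        {
            if (!_pending.ContainsKey(ticketId))
                _pending[ticketId] = new List<string>();
            _pending[ticketId].Add(eventDescription);
            _lastEvent[ticketId] = DateTime.UtcNow;
        }

        // Polled periodically: yields one consolidated batch per ticket
        // once that ticket's changes have slowed for the quiet period.
        public IEnumerable<KeyValuePair<int, List<string>>> TakeReadyBatches()
        {
            var ready = new List<KeyValuePair<int, List<string>>>();
            foreach (var pair in _pending)
            {
                if (DateTime.UtcNow - _lastEvent[pair.Key] >= _quietPeriod)
                    ready.Add(pair);
            }
            foreach (var batch in ready)
            {
                _pending.Remove(batch.Key);
                _lastEvent.Remove(batch.Key);
            }
            return ready;
        }
    }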

The actual format of the notification message is still a little rough around the edges, but that will be worked out in future releases.

TicketDesk will not provide performance reporting:

This is also a controversial philosophy, but one that is absolutely essential to the success of the system.

TicketDesk will not implement any reports or data collection features that assist management in measuring employee performance, or that could be used that way.

Anytime the issue tracker becomes a tool by which management measures employee performance, the system ceases to have value. Instead it becomes an enemy of the users. Users will manipulate the data in the system to protect themselves and inflate their performance numbers. Anything that would make them "look bad" will be deliberately obscured or omitted from the system.

Researchers call this "measurement dysfunction", and it is a well established and thoroughly vetted reality. Despite that, managers around the world still insist on attempting to automate the measurement of employee performance... which is ironic. If they were successful, what would be the point of having managers on staff?

Your help desk is probably staffed by very smart people; people who love figuring things out and whose job is to be very good at figuring things out. How long will it take them to learn how to game the system?

Even if you have some honest staffers who don't manipulate the system... it will punish those honest users while rewarding the ones who do manipulate the data to their advantage.

The purpose of TicketDesk is to facilitate honest and open communication between users and the help desk. If the system is used to gather performance metrics, then it can provide neither honesty nor openness, and it fails its primary mission.

To complete the failure, any performance metrics you "thought" the system was gathering turn out to be inaccurate and distorted, resulting in a system that can neither measure actual performance nor perform the other tasks it was designed for.

I first learned about this issue from Joel Spolsky, creator of the popular FogBugz bug tracking system, but I have witnessed this same phenomenon in nearly every help desk environment I've ever worked with.

You can read Joel's take on the issue yourself if you wish; he explains it better than I can.

Now... there are ways to do useful reporting that don't lead to measurement dysfunction, but it takes very careful design where you deliberately create reports that cannot be used to show individual or group performance metrics. That is a slippery slope, and I have not yet had time to design such reports.

I do have plans to add some reporting in the future, but the reporting will be carefully designed to prevent such abuses.